N22A-T008 TITLE: Smart Image Recognition Sensor with Ultralow System Latency and Power Consumption
OUSD (R&E) MODERNIZATION PRIORITY: General Warfighting Requirements (GWR); Microelectronics; Quantum Science
TECHNOLOGY AREA(S): Electronics
OBJECTIVE: Develop a novel smart visual image recognition system that has intrinsic ultralow power consumption and system latency, and physics-based security and privacy.
DESCRIPTION: Image-based recognition in general requires a complicated technology stack, including lenses to form images, optical sensors for optical-to-electrical conversion, and computer chips to implement the necessary digital computation. This process is serial in nature and is therefore slow and burdened by high power consumption: processing and recognizing an image can take as long as milliseconds and require milliwatts of power. An image digitized into the digital domain is also vulnerable to cyber-attacks, putting the users' security and privacy at risk. Furthermore, as the information content of surveilled and reconnoitered images continues to grow more complex over time, existing digital technologies will soon face severe system bottlenecks in energy efficiency, system latency, and security, because they require sequential analog sensing, analog-to-digital conversion, and digital computing.
It is the focus of this STTR topic to explore a much more promising solution that mitigates the latency and power-consumption issues of legacy digital image recognition by processing visual data in the optical domain at the edge. The proposed technology shifts the paradigm of conventional digital image processing by using analog instead of digital computing, and can thus merge analog sensing and computing into a single physical hardware element. In this methodology, the original images need not be digitized as an intermediate pre-processing step. Instead, incident light is directly processed by a physical medium; examples include image recognition [Ref 1] and signal processing [Ref 2] using the physics of wave dynamics. For instance, the smart image sensors of [Ref 1] have judiciously designed internal structures made of air bubbles. These bubbles scatter the incident light to perform deep-learning-based neuromorphic computing. Without any digital processing, this passive sensor guides the optical field to different locations depending on the identity of the object. The visual information of the scene is never converted to a digitized image, and yet the object can be identified through this unique computation process. These novel image sensors are extremely energy efficient (a fraction of a microwatt) because the computing is performed passively, without active use of energy. Combined with photovoltaic cells, such a sensor could in theory compute without any energy consumption, with a small amount of energy expended only upon successful image recognition, when an electronic signal must be delivered across the optical-to-digital interface. It also has extremely low latency because the computing is done in the optical domain: the latency is determined by the propagation time of light in the device, which is on the order of no more than hundreds of nanoseconds.
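The principle described above can be illustrated with a highly simplified numerical sketch. The code below is a hypothetical toy model, not the referenced device: a fixed random phase mask stands in for the engineered air-bubble scattering structure, far-field diffraction is modeled with a 2-D Fourier transform, and the "recognized class" is simply whichever detector quadrant collects the most optical power. All names and parameters are illustrative assumptions.

```python
# Toy sketch (assumed model): passive optical classification by a fixed
# scattering structure. No digitized image of the scene is ever formed;
# the decision is read out as the location of peak optical power.
import numpy as np

rng = np.random.default_rng(0)
N = 32            # sensor grid size in pixels (illustrative)

# A fixed random phase mask stands in for the judiciously designed
# internal scattering structure (e.g., air bubbles) of the smart sensor.
phase_mask = np.exp(1j * 2 * np.pi * rng.random((N, N)))

def recognize(scene):
    """Scatter the incident field through the mask, propagate to the
    far field, and pick the detector quadrant receiving the most power."""
    field = scene * phase_mask                       # passive scattering
    far_field = np.fft.fftshift(np.fft.fft2(field))  # far-field pattern
    intensity = np.abs(far_field) ** 2               # detected power
    h = N // 2
    quadrants = [intensity[:h, :h], intensity[:h, h:],
                 intensity[h:, :h], intensity[h:, h:]]
    return int(np.argmax([q.sum() for q in quadrants]))

demo_scene = rng.random((N, N))     # stand-in monochrome scene
label = recognize(demo_scene)       # 0..3: which quadrant lit up
```

In the real device the mask would be trained (deep-learning-designed) rather than random, so that each object class steers light to its own detector region; the sketch only shows the read-out mechanism.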
Therefore, its performance metrics in terms of energy consumption and latency are projected to exceed those of conventional digital image processing and recognition by up to six orders of magnitude (i.e., a 1,000,000-fold improvement). Furthermore, it has intrinsic physics-based security and privacy because the coherent properties of light are exploited for image recognition. When these standalone devices are connected to networked systems, cyber hackers cannot gain access to the original images because such images are never created in the digital domain at any point in the computation. Hence, this low-energy, low-latency image sensor system is well suited to 24/7 persistent target-recognition surveillance of any intended targets.
In summary, these novel image recognition sensors, which use wave physics to perform passive computing that exploits the coherent properties of light, are a game changer for future image recognition. They could improve target recognition and identification in degraded visual environments with heavy rain, smoke, or fog. This smart image recognition sensor, coupled with analog computing capability, is an unparalleled alternative to traditional imaging-sensor and digital-computing systems whenever ultralow power dissipation, ultralow system latency, and the higher security and reliability afforded by the analog domain are the most critical key performance metrics.
PHASE I: Develop, design, and demonstrate the feasibility of an image recognition device based on a structured optical medium. The proof-of-concept demonstration should reach over 90% accuracy for arbitrary monochrome images under both coherent and incoherent illumination. The computing time should be less than 10 µs, the throughput over 100,000 pictures per second, and the projected energy consumption less than 1 mW. The Phase I effort will include prototype plans to be developed under Phase II.
PHASE II: Design image recognition devices for general images, including color images in the visible or multiband images in the near-infrared (near-IR). The accuracy should reach 90% for objects in ImageNet. The throughput should reach over 10 million pictures per second, with a computation time of 100 ns and an energy consumption of less than 0.1 mW. Experimentally demonstrate working prototype devices that recognize barcodes, handwritten digits, and other general symbolic characters. The device size should be no larger than that of a current digital camera-based imaging system.
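The Phase I and Phase II targets are mutually consistent, which a back-of-envelope calculation makes explicit. The sketch below simply restates the figures from the text (latency and power budgets) and derives the implied throughput and per-image energy; the variable names are illustrative.

```python
# Consistency check of the Phase I/II targets stated above.
# Figures are taken directly from the topic text.
phase1_latency_s = 10e-6      # Phase I: computing time < 10 µs
phase2_latency_s = 100e-9     # Phase II: computation time of 100 ns
phase1_power_w = 1e-3         # Phase I: energy consumption < 1 mW
phase2_power_w = 0.1e-3       # Phase II: energy consumption < 0.1 mW

# Throughput implied by the per-image latency
phase1_throughput = 1 / phase1_latency_s   # 100,000 images/s
phase2_throughput = 1 / phase2_latency_s   # 10,000,000 images/s

# Energy per recognized image = power budget x latency
phase1_energy_j = phase1_power_w * phase1_latency_s   # 10 nJ/image
phase2_energy_j = phase2_power_w * phase2_latency_s   # 10 pJ/image
```

The derived throughputs match the stated "over 100,000 pictures per second" (Phase I) and "over 10 million pictures per second" (Phase II) targets directly.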
PHASE III DUAL USE APPLICATIONS: Fabricate, test, and finalize the technology based on the design and demonstration results developed during Phase II, and transition the technology with finalized specifications for DoD applications in persistent target-recognition surveillance, and in improved target recognition and identification in degraded visual environments with heavy rain, smoke, and fog.
The commercial sector can also benefit from this crucial, game-changing technology in the areas of high-speed image and facial recognition. Commercialize the hardware and the deep-learning-based image recognition sensor for law enforcement, marine navigation, commercial-aviation enhanced vision, medical applications, and industrial manufacturing processes.
REFERENCES:
KEYWORDS: Image recognition; wave mechanics; low latency; passive computing; sensors; deep learning
** TOPIC NOTICE **
The Navy Topic above is an "unofficial" copy from the overall DoD 22.A STTR BAA. Please see the official DoD Topic website at rt.cto.mil/rtl-small-business-resources/sbir-sttr/ for any updates. The DoD issued its 22.A STTR BAA pre-release on December 1, 2021, which opens to receive proposals on January 12, 2022, and closes February 10, 2022 (12:00 p.m. EST).
Direct Contact with Topic Authors: During the pre-release period (December 1, 2021 through January 11, 2022), proposing firms have an opportunity to directly contact the Technical Point of Contact (TPOC) to ask technical questions about the specific BAA topic. Once DoD begins accepting proposals on January 12, 2022, no further direct contact between proposers and topic authors is allowed unless the Topic Author is responding to a question submitted during the pre-release period.
SITIS Q&A System: After the pre-release period, proposers may submit written questions through SITIS (SBIR/STTR Interactive Topic Information System) at www.dodsbirsttr.mil/topics-app/; log in and follow the instructions. In SITIS, the questioner and respondent remain anonymous, but all questions and answers are posted for general viewing.
Topics Search Engine: Visit the DoD Topic Search Tool at www.dodsbirsttr.mil/topics-app/ to find topics by keyword across all DoD Components participating in this BAA.