Implementing Neural Network Algorithms on Neuromorphic Processors

Navy SBIR 20.2 - Topic N202-099

Naval Air Systems Command (NAVAIR) - Ms. Donna Attick [email protected]

Opens: June 3, 2020 - Closes: July 2, 2020 (12:00 pm ET)



N202-099       TITLE: Implementing Neural Network Algorithms on Neuromorphic Processors


RT&L FOCUS AREA(S): Artificial Intelligence/ Machine Learning, General Warfighting Requirements (GWR)



OBJECTIVE: Deploy Deep Neural Network algorithms on near-commercially available Neuromorphic or equivalent Spiking Neural Network processing hardware.


DESCRIPTION: Biologically inspired neural networks provide the basis for modern signal-processing and classification algorithms. Implementing these algorithms on conventional computing hardware requires significant compromises in efficiency and latency due to fundamental design differences. A new class of hardware is emerging that more closely resembles the biological neuron/synapse model found in nature and may relieve some of these limitations and bottlenecks. Recent work has demonstrated significant performance gains on these new hardware architectures while converging on solutions of equivalent accuracy [Ref 1].


The most promising of the new class are based on Spiking Neural Networks (SNN) and analog Processing in Memory (PiM), in which information is encoded onto the network both spatially and temporally. A simple spiking network can reproduce the complex behavior found in the neural cortex with a significant reduction in complexity and power requirements [Ref 2]. Fundamentally, a neural network algorithm should produce the same results regardless of the processing hardware it runs on; in fact, algorithms can readily be transferred between hardware architectures [Ref 4]. The performance gains, the broad applicability of neural networks, and the relative ease of transitioning current algorithms to the new hardware motivate this topic.
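To make the "simple spiking network" of [Ref 2] concrete, the sketch below integrates Izhikevich's two-variable neuron model with a plain Euler scheme. The parameter values (a, b, c, d) are the published "regular spiking" cortical defaults from that paper; the function name and the choice of input current are illustrative, not taken from this topic.

```python
import numpy as np

def izhikevich(I=10.0, T=1000.0, dt=0.25, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Euler integration of the Izhikevich spiking-neuron model [Ref 2].

    v: membrane potential (mV); u: membrane recovery variable.
    The defaults reproduce the 'regular spiking' cortical firing pattern.
    Returns the list of spike times (ms) over T ms of simulated time.
    """
    v, u = c, b * c
    spikes = []
    for k in range(int(T / dt)):
        if v >= 30.0:                 # spike: reset membrane, bump recovery
            spikes.append(k * dt)
            v, u = c, u + d
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
    return spikes

spike_times = izhikevich()
print(f"{len(spike_times)} spikes in 1 s of simulated time")
```

Two scalar state variables and one threshold test per step are the entire neuron update, which is why [Ref 2] argues such models capture cortical dynamics at a small fraction of the cost of conductance-based models.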

Hardware based on Spiking Neural Networks (SNN) is currently under development at various stages of maturity. Two prominent examples are the IBM TrueNorth and Intel Loihi chips, both implemented in conventional CMOS technology; less mature memristor-based architectures are also being explored. The estimated efficiency gain is greater than three orders of magnitude over state-of-the-art Graphics Processing Units (GPUs) or Field-Programmable Gate Arrays (FPGAs). More advanced architectures based on all-optical or photonic SNNs show even greater promise: nano-photonic systems are estimated to achieve a six-order-of-magnitude increase in efficiency and computational density, approaching the performance of the human neural cortex. The primary goal of this effort is to deploy Deep Neural Network algorithms on near-commercially available neuromorphic or equivalent Spiking Neural Network processing hardware, benchmark the performance gains, and validate suitability for warfighter applications.
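The claim that trained algorithms transfer between architectures (see Refs 3-4) typically rests on rate coding: an integrate-and-fire neuron driven by a constant input current fires at a rate proportional to the equivalent ReLU activation. The following is a minimal sketch of that correspondence for a single fully connected layer, with illustrative random weights; it is a toy model of the conversion idea, not any vendor's toolchain.

```python
import numpy as np

rng = np.random.default_rng(0)

def ann_layer(x, W):
    """Conventional layer: ReLU of a weighted sum."""
    return np.maximum(W @ x, 0.0)

def snn_layer_rate(x, W, T=1000, dt=1.0):
    """Rate-coded spiking version of the same layer: integrate-and-fire
    neurons whose average firing rate approximates the ReLU output
    (simplified ANN-to-SNN conversion in the spirit of Refs 3-4)."""
    v = np.zeros(W.shape[0])          # membrane potentials
    spikes = np.zeros(W.shape[0])     # spike counts
    for _ in range(int(T / dt)):
        v += dt * (W @ x)             # constant input current per step
        fired = v >= 1.0              # unit firing threshold
        v[fired] -= 1.0               # soft reset keeps residual charge
        spikes += fired
    return spikes / T                 # mean firing rate per unit time

W = rng.normal(0.0, 0.5, size=(4, 3))
x = rng.uniform(0.0, 1.0, size=3)
print("ANN activations:", ann_layer(x, W))
print("SNN firing rates:", snn_layer_rate(x, W))
```

Negative drive never fires, matching ReLU's zero region, and positive drive below the 1-spike-per-step ceiling is recovered to within one spike over the T-step window; saturation above that ceiling is one of the effects that Phase II optimization would have to manage on real hardware.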


Work produced in Phase II may become classified. Note: The prospective contractor(s) must be U.S. owned and operated with no foreign influence as defined by DoD 5220.22-M, National Industrial Security Program Operating Manual, unless acceptable mitigating procedures can and have been implemented and approved by the Defense Counterintelligence and Security Agency (DCSA). The selected contractor and/or subcontractor must be able to acquire and maintain a secret level facility and Personnel Security Clearances. This will allow contractor personnel to perform on advanced phases of this project as set forth by DCSA and NAVAIR in order to gain access to classified information pertaining to the national defense of the United States and its allies; this will be an inherent requirement. The selected company will be required to safeguard classified material IAW DoD 5220.22-M during the advanced phases of this contract.


PHASE I: Develop an approach for deploying neural network algorithms; identify suitable hardware, a learning-algorithm framework, and a benchmark testing and validation methodology. Demonstrate performance enhancements and integration of the technology as described above. The Phase I effort will include plans to be developed under Phase II.


PHASE II: Transfer government-furnished algorithms and training data running in a desktop computing environment to the new hardware environment. An example algorithm development framework for this work would be TensorFlow. Some modification of the framework and/or algorithms may be required to facilitate the transfer, and some optimization is expected to maximize the performance of the algorithms on the new hardware. This optimization should focus on throughput, latency, and power draw/dissipation, and benchmark testing should be conducted against these metrics. Develop a transition plan for Phase III.
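Two of the three Phase II metrics, latency and throughput, can be captured with a simple timing harness like the sketch below (power draw requires external instrumentation and is out of scope here). The `run_inference` callable is a hypothetical stand-in for a call into TensorFlow or a neuromorphic vendor SDK; only the measurement pattern, warm-up runs followed by repeated timed runs, is the point.

```python
import time
import statistics

def benchmark(run_inference, batch, n_warmup=10, n_runs=100):
    """Toy benchmark harness for per-inference latency and throughput.

    run_inference: callable taking one batch; stands in for the deployed
    model on either the baseline or the neuromorphic target (assumed).
    """
    for _ in range(n_warmup):              # warm caches/JIT before timing
        run_inference(batch)
    latencies = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        run_inference(batch)
        latencies.append(time.perf_counter() - t0)
    latencies.sort()
    return {
        "median_latency_s": statistics.median(latencies),
        "p99_latency_s": latencies[int(0.99 * n_runs) - 1],
        "throughput_samples_per_s": len(batch) / statistics.median(latencies),
    }

# Dummy stand-in model: an elementwise square over a 256-sample batch.
stats = benchmark(lambda b: [x * x for x in b], batch=list(range(256)))
print(stats)
```

Running the identical harness against the desktop baseline and the neuromorphic deployment gives directly comparable numbers for the order-of-magnitude efficiency claims in the Description.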


It is probable that the work under this effort will be classified under Phase II (see Description section for details).


PHASE III DUAL USE APPLICATIONS: Optimize the algorithms and conduct benchmark testing. Adjust the algorithms as needed and transition to the final hardware environment. Successful technology development could benefit industries that conduct data mining, high-end processing, computer modeling, and machine learning, such as the manufacturing, automotive, and aerospace industries.



REFERENCES:

1. Ambrogio, S., Narayanan, P., Tsai, H., Shelby, R., Boybat, I., Nolfo, C., . . . Burr, G. "Equivalent-Accuracy Accelerated Neural-Network Training Using Analogue Memory." Nature, June 6, 2018, pp. 60-67.


2. Izhikevich, E. "Simple Model of Spiking Neurons." IEEE Transactions on Neural Networks, 2003, pp. 1569-1572.


3. Diehl, P., Zarrella, G., Cassidy, A., Pedroni, B. & Neftci, E. "Conversion of Artificial Recurrent Neural Networks to Spiking Neural Networks for Low-Power Neuromorphic Hardware." Cornell University, 2016.


4. Esser, S., Merolla, P., Arthur, J., Cassidy, A., Appuswamy, R., Andreopoulos, A., . . . Modha, D. "Convolutional Networks for Fast, Energy-Efficient Neuromorphic Computing." IBM Research: Almaden, May 24, 2016.


5. Department of Defense. National Defense Strategy 2018. United States Congress.


KEYWORDS: Neural Networks, Neuromorphic, Processor, Algorithm, Spiking Neurons, Machine Learning



The Navy Topic above is an "unofficial" copy from the overall DoD 20.2 SBIR BAA. Please see the official DoD DSIP Topic website for any updates. The DoD issued its 20.2 SBIR BAA on May 6, 2020, which opens to receive proposals on June 3, 2020, and closes July 2, 2020 at 12:00 noon ET.

Direct Contact with Topic Authors. During the pre-release period (May 6 to June 2, 2020) proposing firms have an opportunity to directly contact the Technical Point of Contact (TPOC) to ask technical questions about the specific BAA topic.

Questions should be limited to specific information related to improving the understanding of a particular topic's requirements. Proposing firms may not ask for advice or guidance on solution approach and may not submit additional material to the topic author. If information provided during an exchange with the topic author is deemed necessary for proposal preparation, that information will be made available to all parties through SITIS (SBIR/STTR Interactive Topic Information System). After the pre-release period, questions must be asked through the SITIS online system as described below.

SITIS Q&A System. Once DoD begins accepting proposals on June 3, 2020, no further direct contact between proposers and topic authors is allowed unless the topic author is responding to a question submitted during the pre-release period. However, proposers may submit written questions through SITIS: log in and follow the instructions. In SITIS, the questioner and respondent remain anonymous, but all questions and answers are posted for general viewing.

Topics Search Engine: Use the DoD Topic Search Tool to find topics by keyword across all DoD Components participating in this BAA.

Help: If you have general questions about the DoD SBIR program, please contact the DoD SBIR Help Desk at 703-214-1333 or via email at [email protected]