Reducing the Life Cycle Cost of Automatic Target Recognition Application

Navy STTR 25.A - N25A-T006
Naval Air Systems Command (NAVAIR)
Pre-release 12/4/24   Opens to accept proposals 1/8/25   Closes 2/5/25 12:00pm ET

N25A-T006 TITLE: Reducing the Life Cycle Cost of Automatic Target Recognition Application

OUSD (R&E) CRITICAL TECHNOLOGY AREA(S): Integrated Sensing and Cyber; Sustainment; Trusted AI and Autonomy

OBJECTIVE: Utilize advances in 3D deep learning to dramatically lower the cost of developing and maintaining automatic target recognition template databases.

DESCRIPTION: Generating and/or collecting sufficient imagery to train neural-network-based automated ship recognition applications can be extremely costly. For classification purposes, the ships of the world can be adequately represented with approximately 1,500 vessel types. Collecting sufficient imagery in situ for that many vessel types over a very wide range of viewing angles, or building 3D models of that many vessels, is not feasible from a cost and time perspective. Advances in 3D deep learning are revolutionizing the ability of computer vision and machine learning systems to synthesize views of complex scenes using only a limited number of input views. Approaches such as neural radiance fields (NeRF) [Ref 1] and Generative Query Networks [Ref 2] are capable of generating high-quality, photorealistic volumetric scenes from multiview input images by incorporating prior knowledge of the world to significantly reduce the input data requirements. Here we seek to utilize these classes of techniques with minimal input imagery, potentially diverse in nature, to generate the 3D geometry of targets of interest, such as combatant ships, from which 2D projections of the target models can be produced from arbitrary views. These dimensionally accurate 2D projections can then be utilized to train radar and electro-optical/infrared (EO/IR)-based neural-network target recognition applications, or to populate databases of classification templates for expert-system and/or hybrid artificial intelligence/machine learning (AI/ML) automatic target recognition applications.
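To illustrate the rendering step at the heart of NeRF-class techniques, the sketch below alpha-composites density and color samples along a single camera ray, the basic volume-rendering operation by which such models produce a 2D projection from a learned 3D representation. This is a minimal, illustrative example, not a required or prescribed implementation.

```python
import math

def composite_ray(densities, colors, deltas):
    """Alpha-composite samples along one camera ray (NeRF-style volume rendering).

    densities: list of non-negative volume densities (sigma_i) at each sample
    colors:    list of (r, g, b) tuples at each sample
    deltas:    list of distances between adjacent samples
    Returns the rendered (r, g, b) value for the ray.
    """
    rgb = [0.0, 0.0, 0.0]
    transmittance = 1.0  # fraction of light surviving to the current sample
    for sigma, color, delta in zip(densities, colors, deltas):
        # Opacity contributed by this segment: alpha = 1 - exp(-sigma * delta)
        alpha = 1.0 - math.exp(-sigma * delta)
        weight = transmittance * alpha
        for c in range(3):
            rgb[c] += weight * color[c]
        transmittance *= 1.0 - alpha
    return tuple(rgb)
```

Rendering a full 2D projection amounts to repeating this compositing for one ray per output pixel, with the densities and colors queried from the trained scene representation.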

Consideration should be given to the nature and quantity of acceptable inputs. Candidate inputs include, but are not limited to: multiple-view optical images; plan and profile view silhouettes; and limited physical dimension information, such as vessel overall length, from which the dimensions and physical locations of other features on the vessel can be computed. Real-time, full-spectrum rendering is required; comparing real-time sensor data to a target classification database in a tactical situation, for example, demands it. Inputs may require segmentation or other operations to make them suitable for processing. Once the nature of acceptable inputs is determined, the processing should be automated and should include accuracy assessments using suitable additional information not included as input. The approach should be directly extendable to other target types, such as aircraft. The goal is to make the entire process, from input sourcing through 2D projection, as efficient as possible in order to dramatically reduce the life cycle costs of these automatic target recognition applications.
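As a simple illustration of how limited dimension information can anchor dimensionally accurate outputs, the sketch below (hypothetical helper names, assuming a profile-view silhouette in which bow and stern pixel coordinates have been identified) derives a meters-per-pixel scale from a known overall length and uses it to locate other features on the vessel:

```python
def scale_from_overall_length(bow_px, stern_px, overall_length_m):
    """Derive a meters-per-pixel scale for a profile-view silhouette
    from the vessel's known overall length (illustrative sketch).

    bow_px, stern_px: horizontal pixel coordinates of the bow and stern tips
    overall_length_m: known overall length of the vessel in meters
    """
    length_px = abs(bow_px - stern_px)
    if length_px == 0:
        raise ValueError("bow and stern coordinates coincide")
    return overall_length_m / length_px

def feature_position_m(feature_px, stern_px, m_per_px):
    """Locate another feature (e.g., a mast) in meters from the stern."""
    return abs(feature_px - stern_px) * m_per_px
```

In practice this scale recovery would follow any segmentation step, so that pixel measurements are taken on a clean silhouette rather than raw imagery.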

PHASE I: Research, evaluate, and develop the overall system approach and architecture. Identify acceptable input requirements to produce the 3D representations and 2D projections. Consideration should be given to options for sourcing suitable inputs. The Phase I effort will include prototype plans to be developed under Phase II.

PHASE II: Develop the complete processing chain, including the steps required to prepare inputs. Automate the processing as much as is feasible. Work with the Navy to produce a limited template database and conduct a comprehensive evaluation of its use in existing maritime automatic target recognition applications.

PHASE III DUAL USE APPLICATIONS: Complete the automated processing capability and integrate as a means to train a target recognition application or to generate a feature database for a template-matching algorithm.

The 3D deep learning ability to synthesize views of complex scenes using only a limited number of input views can be utilized for a very wide range of computer vision applications.

REFERENCES:

1. Mildenhall, B.; Srinivasan, P. P.; Tancik, M.; Barron, J. T.; Ramamoorthi, R. and Ng, R. "NeRF: Representing scenes as neural radiance fields for view synthesis (Version 2)." arXiv, 2020. https://arxiv.org/pdf/2003.08934.pdf

2. Eslami, S. A.; Jimenez Rezende, D.; Besse, F.; Viola, F.; Morcos, A. S.; Garnelo, M.; Ruderman, A.; Rusu, A. A.; Danihelka, I.; Gregor, K.; Reichert, D. P.; Buesing, L.; Weber, T.; Vinyals, O.; Rosenbaum, D.; Rabinowitz, N.; King, H.; Hillier, C.; … and Hassabis, D. "Neural scene representation and rendering." Science, 360(6394), 2018, pp. 1204-1210. https://www.science.org/doi/10.1126/science.aar6170

KEYWORDS: 3D Learning; Neural Radiance Fields (NeRF); Automatic Target Recognition; Machine Learning; Template Matching; Maritime Situational Awareness

TPOC 1: Thomas Kreppel
(301) 481-619
Email: [email protected]

TPOC 2: David Bizup
(301) 757-734
Email: [email protected]


** TOPIC NOTICE **

The Navy Topic above is an "unofficial" copy from the Navy Topics in the DoD 25.A STTR BAA. Please see the official DoD Topic website at www.dodsbirsttr.mil/submissions/solicitation-documents/active-solicitations for any updates.

The DoD issued its Navy 25.A STTR Topics pre-release on December 4, 2024, which opens to receive proposals on January 8, 2025, and closes February 5, 2025 (12:00pm ET).

Direct Contact with Topic Authors: During the pre-release period (December 4, 2024, through January 7, 2025) proposing firms have an opportunity to directly contact the Technical Point of Contact (TPOC) to ask technical questions about the specific BAA topic. Once DoD begins accepting proposals on January 8, 2025, no further direct contact between proposers and topic authors is allowed unless the Topic Author is responding to a question submitted during the Pre-release period.

DoD On-line Q&A System: After the pre-release period, until January 22, 2025, at 12:00 PM ET, proposers may submit written questions through the DoD On-line Topic Q&A at https://www.dodsbirsttr.mil/submissions/login/ by logging in and following instructions. In the Topic Q&A system, the questioner and respondent remain anonymous, but all questions and answers are posted for general viewing.

DoD Topics Search Tool: Visit the DoD Topic Search Tool at www.dodsbirsttr.mil/topics-app/ to find topics by keyword across all DoD Components participating in this BAA.

Help: If you have general questions about the DoD SBIR program, please contact the DoD SBIR Help Desk via email at [email protected]
