Robust Maritime Target Recognition

Navy SBIR 23.2 - Topic N232-092
NAVAIR - Naval Air Systems Command
Pre-release 4/19/23   Opens to accept proposals 5/17/23   Closes 6/14/23 12:00pm ET

N232-092 TITLE: Robust Maritime Target Recognition

OUSD (R&E) CRITICAL TECHNOLOGY AREA(S): Trusted AI and Autonomy

The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR Parts 120-130, which controls the export and import of defense-related material and services, including export of sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s) in accordance with the Announcement. Offerors are advised foreign nationals proposed to perform on this topic may be restricted due to the technical data under US Export Control Laws.

OBJECTIVE: Develop a robust, fully functional application from airborne electro-optics/infrared (EO/IR) imagery capable of automatically classifying combatant from non-combatant ships. The application should also be capable of target identification at a reduced range and passively compute range to target and Angle Off Bow (AOB) directly from the imagery.
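The passive range and AOB computation called for above can be illustrated with simple pinhole-camera geometry. The sketch below is illustrative only, not part of the topic: it assumes true vessel dimensions are known (e.g., from a template database), and all function names and parameters are hypothetical.

```python
import math

def estimate_range_m(focal_length_px: float, true_height_m: float,
                     apparent_height_px: float) -> float:
    """Pinhole-camera range estimate from a dimension largely unaffected
    by aspect angle (e.g., mast height): R = f * H / h_px."""
    return focal_length_px * true_height_m / apparent_height_px

def estimate_aob_deg(apparent_length_m: float, true_length_m: float) -> float:
    """AOB from hull foreshortening: apparent length ~ L * sin(AOB),
    so a broadside view (AOB = 90 deg) shows the full length."""
    ratio = max(-1.0, min(1.0, apparent_length_m / true_length_m))
    return math.degrees(math.asin(ratio))
```

Under these assumptions, a 30 m mast subtending 50 pixels with a 5,000-pixel focal length implies a range of 3,000 m, and a 150 m hull appearing 75 m long implies an AOB of 30° (up to the bow/stern ambiguity, since sin 30° = sin 150°).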

DESCRIPTION: In recent years there has been a widespread embrace of a variety of deep learning techniques for automatic target recognition of ships using airborne EO/IR or radar systems. Generally, these approaches have failed to deliver robust and affordable solutions. Ship recognition requires a large number of examples to train the classifiers, but obtaining suitable training data is very time consuming, expensive, and in many instances impossible. These systems tend to work impressively when applied to the exact conditions on which they were trained. When faced with other conditions, even those only slightly different from the training data, they can react in unexpected ways. The introduction of techniques such as generative adversarial networks begins to address this deficiency, but not sufficiently in practice. A much more robust approach is a hybrid, knowledge-driven one that combines an expert system utilizing template-based screeners with deep learning applied in a limited manner to those elements of the classification stream where it can contribute effectively and robustly [Ref 1]. Template-based expert system classifiers have previously been developed successfully for inverse synthetic aperture radar images [Ref 2].
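As an illustration of the template-based screening stage described above, a first-pass screener might score an observed silhouette against rendered template silhouettes and pass only the top candidates to later stages (e.g., a deep learning refinement). This is a minimal sketch under assumed representations, not the topic's method: silhouettes are modeled as sets of occupied pixel coordinates, and all names are hypothetical.

```python
def screen_candidates(observed: set, templates: dict, top_k: int = 5) -> list:
    """Rank template silhouettes by intersection-over-union (IoU) with
    the observed silhouette; return the top_k candidate vessel names."""
    def iou(a: set, b: set) -> float:
        # Overlap fraction between two pixel sets; 0.0 when both empty.
        return len(a & b) / len(a | b) if (a | b) else 0.0
    ranked = sorted(templates,
                    key=lambda name: iou(observed, templates[name]),
                    reverse=True)
    return ranked[:top_k]
```

A multi-layer screening process would apply progressively more expensive matchers to this shrinking candidate list, which is where the computational-resource estimate discussed later becomes important.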

From a classification/identification perspective, the application must provide a high probability of correct classification (> 90% threshold and > 95% objective) and identification (> 95% threshold and > 98% objective) for combatants of the world. For ships correctly classified, estimated range should be within 3% and AOB within 2°. It is estimated that the three-dimensional template database will need to represent 1,000 to 2,000 vessels. Efficient and accurate rendering of the template database is a critical element in making this approach feasible.
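The probability-of-correct-classification metric implied by these requirements can be computed directly from scored test data. A hedged sketch follows; the threshold and objective values are taken from the paragraph above, while the function names and class labels are hypothetical.

```python
def prob_correct(predictions: list, truths: list) -> float:
    """Fraction of test images whose predicted class matches truth."""
    assert predictions and len(predictions) == len(truths)
    return sum(p == t for p, t in zip(predictions, truths)) / len(truths)

def classification_status(pcc: float,
                          threshold: float = 0.90,
                          objective: float = 0.95) -> str:
    """Report where a measured Pcc falls relative to the topic's
    classification threshold (> 90%) and objective (> 95%)."""
    if pcc > objective:
        return "objective"
    if pcc > threshold:
        return "threshold"
    return "below threshold"
```

The same scheme applies to the identification requirement with the 95%/98% values substituted.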

Investigations should consider the performance of the application as a function of pixel counts on target and image quality (i.e., target/background contrast, sensor system modulation transfer function [MTF], and noise). Overall computational resources need to be estimated for a multiple layer screening process. The merging of this expert system with deep learning techniques should be considered and pursued if justified.
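For the pixel-count dimension of that trade study, pixels on target scale inversely with range for a given sensor instantaneous field of view (IFOV). A simple small-angle sketch of that relationship (parameter values are hypothetical, not requirements):

```python
def pixels_on_target(target_extent_m: float, range_m: float,
                     ifov_rad: float) -> float:
    """Approximate pixel count subtended by one target dimension:
    pixels ~ extent / (range * IFOV), valid for small angles."""
    return target_extent_m / (range_m * ifov_rad)
```

For example, a 150 m hull at 10 km range with a 15 microradian IFOV subtends roughly 1,000 pixels; halving the range doubles the count, which is one way to parameterize classifier performance curves.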

Work produced in Phase II may become classified. Note: The prospective contractor(s) must be U.S. owned and operated with no foreign influence as defined by DoD 5220.22-M, National Industrial Security Program Operating Manual, unless acceptable mitigating procedures can and have been implemented and approved by the Defense Counterintelligence and Security Agency (DCSA), formerly the Defense Security Service (DSS). The selected contractor must be able to acquire and maintain a Secret-level facility clearance and personnel security clearances. This will allow contractor personnel to perform on advanced phases of this project as set forth by DCSA and NAVAIR in order to gain access to classified information pertaining to the national defense of the United States and its allies; this will be an inherent requirement. The selected company will be required to safeguard classified material IAW DoD 5220.22-M during the advanced phases of this contract.

PHASE I: Research, evaluate, and develop the overall classifier architecture. Utilizing an open-source data set, develop a prototype classifier to be tested on a representative set of combatant vessels. Assess the merits of a hybrid classification approach. The Phase I effort will include prototype plans to be developed under Phase II.

PHASE II: Develop an implementation of the complete classification approach including automated techniques for template preparation. Implementation should also consider system weight and power (SWAP) since the processor will be integrated into an air vehicle. Using data sets provided by the Navy, conduct a comprehensive evaluation of classification, range, and AOB estimation performance.

Work in Phase II may become classified. Please see note in the Description paragraph.

PHASE III DUAL USE APPLICATIONS: Transition the developed technology to candidate platforms/sensors. Potential transition platforms include the MQ-8C Fire Scout, MQ-4C Triton, MQ-25A Stingray, P-8A Poseidon, and Future Vertical Lift. Potential commercial applications include land-based and airborne port surveillance.

REFERENCES:

  1. Marcus, G. (2020, February 17). The next decade in AI: Four steps toward robust artificial intelligence. arXiv. https://arxiv.org/vc/arxiv/papers/2002/2002.06177v2.pdf
  2. Telephonics. (n.d.). Marine classification aid (MCA). Retrieved March 7, 2022, from https://www.telephonics.com/uploads/standard/46045-TC-Maritime-Classification-Aid-Brochure.pdf

KEYWORDS: electro-optics/infrared; automatic target recognition; vessel classification; maritime surveillance; remote sensing; template matching


** TOPIC NOTICE **

The Navy Topic above is an "unofficial" copy from the Navy Topics in the DoD 23.2 SBIR BAA. Please see the official DoD Topic website at www.defensesbirsttr.mil/SBIR-STTR/Opportunities/#announcements for any updates.

The DoD issued its Navy 23.2 SBIR Topics pre-release on April 19, 2023 which opens to receive proposals on May 17, 2023, and closes June 14, 2023 (12:00pm ET).

Direct Contact with Topic Authors: During the pre-release period (April 19, 2023 through May 16, 2023) proposing firms have an opportunity to directly contact the Technical Point of Contact (TPOC) to ask technical questions about the specific BAA topic. Once DoD begins accepting proposals on May 17, 2023 no further direct contact between proposers and topic authors is allowed unless the Topic Author is responding to a question submitted during the Pre-release period.

SITIS Q&A System: After the pre-release period, until May 31 (at 12:00 PM ET), proposers may submit written questions through SITIS (SBIR/STTR Interactive Topic Information System) at www.dodsbirsttr.mil/topics-app/ by logging in and following instructions. In SITIS, the questioner and respondent remain anonymous, but all questions and answers are posted for general viewing.

Topics Search Engine: Visit the DoD Topic Search Tool at www.dodsbirsttr.mil/topics-app/ to find topics by keyword across all DoD Components participating in this BAA.

Help: If you have general questions about the DoD SBIR program, please contact the DoD SBIR Help Desk via email at [email protected]

Topic Q & A

5/11/23  Q. Would it be acceptable to initially restrict the angle at which detection/classification is expected to occur? As in, initially only look at the side of the ship and when that is determined to meet objectives, extrapolate to other viewing angles.
   A. Restricting to broadside views even in Phase I is too restrictive.
5/11/23  Q. Gimbal control can be critical to achieving high accuracy results. Zooming in particular can help in transitioning from a classification/recognition task to identification. At later stages of the program (Phase II/III) can we expect to have any gimbal/zooming control?
   A. You will provide your requirements to the sensor resource manager (or they might be a fixed rule set). ID is not anywhere as important a requirement as classification.
5/11/23  Q. Classification with high probability thresholds above 90% and 3% to 2% AOB error utilizing EO/IR can be considered state-of-the-art on non-real test scenarios. How strong are these requirements for a real system that is expected to represent 1000-2000 vessels? Would it be considered a failure if it works at less than 90% accuracy such as say 70% or 80%?
   A. For combatants, we believe that these are reasonable thresholds, although we do not expect them to be demonstrated in Phase I.
5/9/23  Q. The Phase I description mentions using an "open-source data set" to test the prototype classifier. Will the dataset be provided? Or is it our responsibility to procure our own test data?
   A. No datasets will be provided by the Government. Obtaining suitable vessel images is the responsibility of the small business conducting the research.
5/9/23  Q. Under the Phase II description, it states that "the processor will be integrated into an air vehicle." For unmanned aircraft, since all full motion video is streamed to Navy ships via GUNSS, is it acceptable to consider integration into GUNSS as an alternative to onboard UAS processing? (It is assumed that for manned aircraft, all processing should be onboard the aircraft.)
   A. No; assume all processing is on board.
5/1/23  Q. How should the application address potential false positives or negatives in classification and identification, and what are the acceptable thresholds for such errors?
   A. Ability to separate among similar combatants at the fine naval class level is critical (i.e., frigate, destroyer, and cruiser classes of ships). You are to seek maximum possible performance, and a "no classification possible" outcome is acceptable if the image quality is insufficient to make a confident classification call. Probability of correct classification metrics are desired. Generally, a human operator will be the final adjudicator.
5/1/23  Q. In terms of pixel counts on target and image quality (e.g., target/background contrast, sensor system modulation transfer function [MTF], and noise), are there any minimum or optimal thresholds that the application should meet?
   A. It is for you to demonstrate the minimum threshold with your approach.
5/1/23  Q. Are there any preferred or recommended deep learning techniques, algorithms, or methods that should be explored or prioritized for this project?
   A. The techniques should not require training to be initially deployed. If training is required it can only be part of a hybrid approach.
5/1/23  Q. Are there any existing databases or resources for vessel information that should be used or considered for the development of the three-dimensional template database?
   A. No.
5/1/23  Q. Are there specific EO/IR sensor systems or platforms that the solution should be compatible with or optimized for?
   A. Must be robust across a range of sensors (e.g., Wescam MX-8 to Raytheon MTS-B).
5/1/23  Q. Is there a hardware and software component to this solution?
   A. Airborne EO/IR systems on group 3-5 UAS.
