DIGITAL ENGINEERING - Gun Weapons Systems Synthetic Unmanned Aerial Systems Imagery Data Set

Navy SBIR 23.1 - Topic N231-037
NAVSEA - Naval Sea Systems Command
Pre-release 1/11/23   Opens to accept proposals 2/08/23   Closes 3/08/23 12:00pm ET

N231-037 TITLE: DIGITAL ENGINEERING - Gun Weapons Systems Synthetic Unmanned Aerial Systems Imagery Data Set

OUSD (R&E) CRITICAL TECHNOLOGY AREA(S): Artificial Intelligence (AI)/Machine Learning (ML); Autonomy

The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR Parts 120-130, which controls the export and import of defense-related material and services, including export of sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s) in accordance with the Announcement. Offerors are advised foreign nationals proposed to perform on this topic may be restricted due to the technical data under US Export Control Laws.

OBJECTIVE: Develop a synthetic imagery dataset of Unmanned Aerial Systems (UAS) using machine learning (ML) for computer vision discriminator applications.

DESCRIPTION: Unmanned Aerial Systems (UAS) pose a threat to the US Navy (USN) surface fleet. Counter-UAS is the successful negation of UAS threats by USN effectors; it requires the ability to detect, identify, discriminate, and engage in a cost-effective manner. Increasing the automation of surface sensors' ability to detect, identify, and discriminate UAS requires large data sets of image and video data. The number and variety of UAS, together with the need for all-aspect coverage, make physical data collections costly in both time and money. The USN therefore seeks automated visual synthetic data generation using ML to produce these large data sets, which will in turn be used to train algorithms. Synthetic data generation is a rapidly growing field, applied to use cases including autonomous vehicle navigation, advanced driver-assistance systems, security systems, and manufacturing automation. While these areas of research and development are advancing, no available solution meets the Government's specific need. One technique that may be applicable is the Deep Convolutional Generative Adversarial Network (DC-GAN), but other synthetic-generation techniques are also viable. The solution should provide data as seen from nose-on, top-down aerial, and broadside views (i.e., plan, profile, and various oblique angles), and should demonstrate the realism of the dataset through analysis and modeling.
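For orientation only: the adversarial principle that DC-GANs build on can be sketched in a few lines. The toy below trains a one-dimensional GAN in NumPy; a real DC-GAN replaces the affine generator and logistic discriminator with deep convolutional networks operating on images. The architecture, learning rate, and data here are illustrative assumptions, not part of the topic.

```python
# Toy adversarial training loop (the principle behind DC-GANs), in NumPy.
# A real DC-GAN swaps these linear maps for deep convolutional networks.
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from N(4, 1), standing in for real imagery.
def real_batch(n):
    return rng.normal(4.0, 1.0, size=(n, 1))

# Generator: z -> x, a single affine map (illustrative stand-in).
G_w, G_b = rng.normal(size=(1, 1)), np.zeros(1)
# Discriminator: x -> probability "real", logistic regression here.
D_w, D_b = rng.normal(size=(1, 1)), np.zeros(1)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def generate(z):
    return z @ G_w + G_b

def discriminate(x):
    return sigmoid(x @ D_w + D_b)

lr, n = 0.05, 64
for step in range(500):
    z = rng.normal(size=(n, 1))
    x_fake, x_real = generate(z), real_batch(n)

    # Discriminator ascent on log D(real) + log(1 - D(fake)).
    for x, label in ((x_real, 1.0), (x_fake, 0.0)):
        p = discriminate(x)
        err = label - p                  # log-likelihood gradient signal
        D_w += lr * (x.T @ err) / n
        D_b += lr * err.mean(axis=0)

    # Generator ascent on log D(G(z)) (the "non-saturating" GAN loss).
    p = discriminate(x_fake)
    err = 1.0 - p
    dx = err @ D_w.T                     # backprop through D into x_fake
    G_w += lr * (z.T @ dx) / n
    G_b += lr * dx.mean(axis=0)

# Sample mean of generated data after training.
print(float(generate(rng.normal(size=(1000, 1))).mean()))
```

The same two-player update, scaled up with convolutional generators and discriminators and trained on rendered UAS imagery, is one possible route to the photo-realistic synthetic frames this topic describes.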

The solution will contain a synthetic dataset of frame-by-frame UAS images (not video sequences) in both the visible and thermal bands, suitable for the training, validation, and testing of ML and artificially intelligent sub-systems for Naval Gunnery Systems. The solution should produce a dataset that conforms to commonly available public standards and contains images and ground-truth object labels organized by a class ontology, such as the Jet Propulsion Laboratory Semantic Web for Earth and Environmental Terminology (SWEET). The dataset should contain at least three types of Group 1 UAS and at least two types of Group 2 UAS; the UAS used to create the datasets may be commercial products. The synthetically generated data shall be photo-realistic in both the visible and thermal imagery at high-definition (HD) resolution.

The labels should be rectangular (bounding-box) labels at minimum, with segmentation labels as the objective. The dataset should also have diverse object and scene composition, with variations in object size, orientation, background, lighting, and atmospheric conditions. The perspective and apparent size of the observed object should also vary, ranging from 2 pixels in the smallest dimension up to the full size of the image frame.
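One example of a "commonly available public standard" that carries both label types is the COCO annotation layout, sketched below. COCO is offered only as an illustration; the topic does not mandate it, and the file name, class name, and pixel values are assumptions.

```python
# One image record in a COCO-style layout: a rectangular (bbox) label as
# the threshold, and a polygon segmentation label as the objective.
# All names and coordinates below are illustrative.
import json

record = {
    "images": [
        {"id": 1, "file_name": "visible/uas_00001.png",
         "width": 1920, "height": 1080}          # an HD frame
    ],
    "categories": [
        {"id": 1, "name": "group1_uas"}          # hypothetical class name
    ],
    "annotations": [
        {
            "id": 1, "image_id": 1, "category_id": 1,
            # Rectangular label: [x, y, width, height] in pixels.
            "bbox": [912.0, 504.0, 96.0, 72.0],
            # Segmentation label: flattened (x, y) polygon vertices
            # enclosing the airframe.
            "segmentation": [[912.0, 540.0, 960.0, 504.0,
                              1008.0, 540.0, 960.0, 576.0]],
            "area": 96.0 * 72.0,
            "iscrowd": 0,
        }
    ],
}
print(json.dumps(record, indent=2)[:60])
```

A record like this keeps the ground truth machine-readable and lets the same annotation file serve detectors trained on boxes and segmenters trained on masks.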

The dataset should be partitioned by band of synthetic imagery (visible and thermal) and should follow image-dataset convention in its splits: a training set for training the ML system; a validation set for initial testing of algorithm performance; and a test set, containing data distinct from the other sets, for model performance verification. Each band of data will contain these three sub-sets.
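The band-then-split layout above can be sketched as follows. The 70/15/15 ratio, file names, and hash-based assignment are illustrative assumptions; the topic specifies only that the three sub-sets be distinct within each band.

```python
# Band-partitioned train/validation/test layout. Ratios and naming are
# illustrative, not requirements from the topic.
import hashlib

BANDS = ("visible", "thermal")
SPLITS = (("train", 0.70), ("validation", 0.85), ("test", 1.00))  # cumulative

def assign_split(image_id: str) -> str:
    """Deterministically map an image to one split, so the sets stay
    disjoint across re-runs (MD5 of the ID, scaled into [0, 1))."""
    h = int(hashlib.md5(image_id.encode()).hexdigest(), 16)
    u = (h % 10_000) / 10_000.0
    for name, upper in SPLITS:
        if u < upper:
            return name
    return "test"

def partition(image_ids, band):
    """Group one band's images into the three conventional sub-sets."""
    out = {name: [] for name, _ in SPLITS}
    for image_id in image_ids:
        out[assign_split(image_id)].append(f"{band}/{image_id}")
    return out

ids = [f"uas_{i:05d}.png" for i in range(1000)]
dataset = {band: partition(ids, band) for band in BANDS}
print({b: {s: len(v) for s, v in dataset[b].items()} for b in dataset})
```

Hashing the image ID (rather than shuffling) keeps an image in the same split every time the dataset is regenerated, which preserves the required distinctness of the test set as the dataset grows.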

PHASE I: Develop a concept for automated synthetic dataset generation. Demonstrate its technical feasibility using analytical models, simulations, and testing. Modeling should demonstrate several produced image datasets in both the visible and thermal bands. The Phase I Option, if exercised, will include the initial design specifications and capabilities description needed to build a prototype solution in Phase II.

PHASE II: Develop and deliver a prototype automated synthetic dataset generation capability as described in the Description and based on the results of Phase I. Demonstrate that the prototype meets the parameters of the Description through initial laboratory testing that confirms the design, functionality, and modeling underlying the theory of automated synthetic generation, in order to evaluate and assess the sufficiency of the synthetic dataset. The prototype dataset will be provided to the Government for testing in a digital format using common file formats.

PHASE III DUAL USE APPLICATIONS: Support the Navy in transitioning the automated synthetic generation technology to Naval Gunnery applications through testing and further development. The prototype will provide the foundation upon which to train, validate, and test UAS detection systems.

The product itself may have limited applications in the commercial sector. However, the tools and processes developed to create this dataset will be valuable for creating additional datasets for commercial applications, including autonomous vehicle navigation, security systems, and manufacturing automation.

REFERENCES:

1. Strickland, Eliza. "Are You Still Using Real Data to Train Your AI?" IEEE Spectrum, February 17, 2022. https://spectrum.ieee.org/synthetic-data-ai

2. Yalcin, Orhan. "Image Generation in 10 Minutes with Generative Adversarial Networks." Towards Data Science, September 17, 2020. https://towardsdatascience.com/image-generation-in-10-minutes-with-generative-adversarial-networks-c2afc56bfa3b

3. Goodfellow, Ian. "NIPS 2016 Tutorial: Generative Adversarial Networks." arXiv:1701.00160v4, April 3, 2017. https://arxiv.org/pdf/1701.00160

4. Samadzadegan, F.; Dadrass Javan, F.; Ashtari Mahini, F.; Gholamshahi, M. "Detection and Recognition of Drones Based on a Deep Convolutional Neural Network Using Visible Imagery." Aerospace 2022, 9, 31. https://www.mdpi.com/2226-4310/9/1/31/pdf

5. Brock, Andrew. "Large Scale GAN Training for High Fidelity Natural Image Synthesis." ICLR, arXiv:1809.11096, September 28, 2018. https://arxiv.org/pdf/1809.11096

 

KEYWORDS: Deep Convolutional Generative Adversarial Networks; Synthetic Data Generation; Synthetic Dataset; Unmanned Aerial Systems Imagery; UAS; Counter-UAS; Artificial Intelligence for visual image processing

TPOC-1: Benjamin Goldman

Phone: (540) 623-5099

Email: [email protected]

 

TPOC-2: Jess Riggle 

Phone: (540) 653-2107

Email: [email protected]


** TOPIC NOTICE **

The Navy Topic above is an "unofficial" copy from the Navy Topics in the DoD 23.1 SBIR BAA. Please see the official DoD Topic website at www.defensesbirsttr.mil/SBIR-STTR/Opportunities/#announcements for any updates.

The DoD issued its Navy 23.1 SBIR Topics pre-release on January 11, 2023, which opens to receive proposals on February 8, 2023, and closes March 8, 2023 (12:00pm ET).

Direct Contact with Topic Authors: During the pre-release period (January 11, 2023 through February 7, 2023), proposing firms have an opportunity to directly contact the Technical Point of Contact (TPOC) to ask technical questions about the specific BAA topic. Once DoD begins accepting proposals on February 8, 2023, no further direct contact between proposers and topic authors is allowed unless the Topic Author is responding to a question submitted during the pre-release period.

SITIS Q&A System: After the pre-release period, and until February 22, 2023 (at 12:00 PM ET), proposers may submit written questions through SITIS (SBIR/STTR Interactive Topic Information System) at www.dodsbirsttr.mil/topics-app/; log in and follow the instructions. In SITIS, the questioner and respondent remain anonymous, but all questions and answers are posted for general viewing.

Topics Search Engine: Visit the DoD Topic Search Tool at www.dodsbirsttr.mil/topics-app/ to find topics by keyword across all DoD Components participating in this BAA.

Help: If you have general questions about the DoD SBIR program, please contact the DoD SBIR Help Desk via email at [email protected]
