Speedy UAV Swarms Detection, Identification, and Tracking using Deep Learning-Based Fusion Methodology for Radar and Infrared Imagers

Navy SBIR 24.1 - Topic N241-017
NAVAIR - Naval Air Systems Command
Pre-release 11/29/23   Opens to accept proposals 1/03/24   Closes 2/21/24 12:00pm ET

N241-017 TITLE: Speedy UAV Swarms Detection, Identification, and Tracking using Deep Learning-Based Fusion Methodology for Radar and Infrared Imagers

OUSD (R&E) CRITICAL TECHNOLOGY AREA(S): Sustainment

The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR Parts 120-130, which controls the export and import of defense-related material and services, including export of sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s) in accordance with the Announcement. Offerors are advised foreign nationals proposed to perform on this topic may be restricted due to the technical data under US Export Control Laws.

OBJECTIVE: Develop and demonstrate an innovative deep learning-based fusion methodology for radar and infrared cameras that can rapidly and effectively detect, identify, and track unmanned aerial vehicle (UAV) swarms with a high probability of detection and a low probability of false alarm, achieving the best overall track accuracy within the available processing and time constraints.

DESCRIPTION: The use of UAVs of various sizes and shapes is growing rapidly across a wide variety of defense, commercial, and other applications. Along with their ease of operation and low cost, the widespread availability of UAVs has posed significant security threats in both defense and civilian arenas. For the DoD to mitigate UAV threats effectively, it must first be able to detect and identify UAVs in the airspace in a timely manner before UAV interdiction strategies can be executed [Refs 1-3]. Straightforward adoption of currently fielded airspace surveillance technologies will not suffice, as UAVs are much smaller in physical size and fly at lower altitudes. Furthermore, recent advances in UAV technology have enabled large swarms of UAVs to fly together in either uncoordinated or coordinated groups. These UAV swarms, with continuously growing swarm sizes, pose an even more serious emerging threat to Navy forces, assets, and installations, and countering them presents additional challenges beyond those of dealing with UAVs one or two at a time. In fact, there are reports suggesting that the Chinese military is strategizing attacks on U.S. aircraft carrier battle groups with swarms of multi-mission UAVs during a conventional naval conflict [Ref 4]. Many target detection and tracking systems rely on passive medium wavelength infrared/long wavelength infrared (MWIR/LWIR) thermal cameras. LWIR cameras in particular offer unique advantages: their performance is not adversely degraded by scattering from water-based aerosols, snow, rain, fog, and clouds in the atmosphere. However, it remains challenging to detect and identify small UAVs using IR imagers alone due to the low contrast in thermal imagery, especially in environments cluttered with background noise. The growing variety of current and emergent UAV configurations, together with the growing threat of UAV swarms, makes a single optimized sensor solution impractical and ineffective for timely response. The UAV swarm threat is both wide in scope and deep in complexity; it therefore warrants a more capable solution tailored to different circumstances. To maintain robust situational awareness and to provide much improved protection of naval assets and forces against current and future UAV swarm threats, it is therefore logical to consider a speedy counter-UAV-swarm system comprising a suite of multiple sensors with different modalities [Ref 4].

It is the objective of this SBIR topic to develop a sensor fusion methodology [Ref 5] for a phased array radar system [Ref 6] and MWIR/LWIR infrared cameras that combines data from different and orthogonal modalities to generate inferences that would not be possible from any single sensor alone. To deal with a large swarm of targets, a phased array radar system can steer a narrow beam quickly to identify and track multiple targets in multiple directions simultaneously without physically moving the system, as opposed to the spinning antenna of a legacy radar system. The goal of fusing data from multiple sensors is to achieve much more accurate and faster UAV detection, identification, and tracking than can be derived from single sensors, while compensating for their individual weaknesses. Fusing multiple sensors in a UAV swarm target acquisition and cueing system requires managing, interpreting, and analyzing a large set of heterogeneous input data. It is also expected that the sensor fusion and detection/tracking algorithms will enable detection, identification, and tracking of a UAV swarm in a matter of a few seconds instead of minutes. Low system latency in detecting, identifying, and tracking swarms of increasing UAV numbers is particularly critical in combat scenarios to allow sufficient time to counter the threats. Recent advances in deep learning provide the ability to manage diverse and complex data. In particular, deep learning techniques enable learning relationships between dissimilar input signals, such as the behavior patterns and relationships of the UAVs within a swarm. Multi-sensor learning is capable of understanding real-world problems in detail, as well as filling in missing or corrupted sensor data. Deep learning-based algorithms combined with multiple sensors have never before been developed for low-latency counter-UAV-swarm target detection and tracking. To exploit the advances in LWIR/MWIR imaging, radar systems, and deep learning, the Navy is seeking an innovative, game-changing approach that applies deep learning to a multi-sensor data fusion and exploitation system for accelerated UAV swarm detection, identification, and tracking.
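For illustration only, the sketch below shows one way such a feature-level fusion network could be structured in PyTorch: an IR image chip and a radar track feature vector are embedded by separate branches, concatenated, and classified. The branch architectures, input sizes (a 32x32 single-channel IR chip, an 8-element radar feature vector), and the four-class output are placeholder assumptions, not topic requirements.

```python
# Minimal feature-level fusion sketch (PyTorch). All dimensions are
# illustrative assumptions: radar tracks are summarized as a fixed-length
# feature vector and the IR detection as a small cropped image chip.
import torch
import torch.nn as nn

class RadarIRFusionNet(nn.Module):
    """Fuses a radar track feature vector with an IR image chip embedding."""
    def __init__(self, radar_dim=8, n_classes=4):
        super().__init__()
        # IR branch: a small CNN over a 1-channel (thermal) 32x32 chip.
        self.ir_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),                        # -> 32 * 8 * 8 = 2048
            nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
        )
        # Radar branch: an MLP over per-track features (e.g., range,
        # range rate, azimuth, elevation, RCS estimate).
        self.radar_branch = nn.Sequential(
            nn.Linear(radar_dim, 32), nn.ReLU(),
        )
        # Fusion head: concatenate embeddings, then classify.
        self.head = nn.Sequential(
            nn.Linear(64 + 32, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, ir_chip, radar_feats):
        fused = torch.cat([self.ir_branch(ir_chip),
                           self.radar_branch(radar_feats)], dim=1)
        return self.head(fused)

# Example forward pass on a batch of 2 hypothetical detections.
model = RadarIRFusionNet()
logits = model(torch.randn(2, 1, 32, 32), torch.randn(2, 8))
print(logits.shape)  # torch.Size([2, 4])
```

This late-fusion layout is only one point in the design trade space; earlier (signal-level) or later (decision-level) fusion stages may be preferable depending on sensor latencies and data rates.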

Fusing tactical data from radar and infrared cameras poses significant challenges due to the diversity and complexity of the data, for example, track count, accuracy, update rates, and uncertainty. The data sources often have different formats, resolutions, and phenomenologies, making it difficult to correlate the data accurately. Additionally, the tactical data can be affected by external factors such as weather, environmental conditions, and UAV swarm movements, which can lead to misinterpretation or misclassification of the data. Fusion algorithms must be able to handle these challenges and effectively fuse the tactical data to provide a reliable, real-time view of the UAV swarm with the following performance:

First, the system must be able to ingest data from radar and infrared cameras in real time or near-real time, while maintaining data quality and consistency.

Second, the system must be able to normalize the data to ensure consistent data models, allowing for accurate correlation and temporal and spatial fusion of the radar and infrared camera sources (a sketch of this step follows these requirements).

Third, the system must be designed to handle the challenges associated with radar and infrared camera data, including different formats, resolutions, and phenomenologies. Due to the differing collection footprints and inconsistent collection overlap of radar and infrared cameras, processing their individual data and focusing on information-level fusion (e.g., knowledge graph fusion) is acceptable.

Fourth, the system must be able to provide decision makers with a clear, accurate, and actionable view of the UAV swarm, improving classification confidence and enabling effective decision-making with: (a) a probability of UAV swarm detection-to-track of more than 90%; (b) a classification accuracy of more than 90% across the set of UAV swarms when trained only on simulated data; (c) a probability of false alarm of less than 10% at UAV swarm detection ranges up to 10 km; (d) common atmospheric obscurants reducing the visible transmission coefficient at the UAV swarm detection distance down to less than 10% relative to that in vacuum; (e) the UAV swarm appearing only a couple of pixels wide in a dim setting; (f) techniques to automatically analyze the data associated with UAV swarm tracks, hypothesize, and make ID classifications over processed spatial resolutions of 2–70 cm/pixel; and (g) a deep learning model and algorithm with a provision for a growing library of current and future-generation UAVs.
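As one hypothetical illustration of the second requirement (temporal and spatial normalization), the sketch below maps radar detections from spherical sensor coordinates into a shared Cartesian frame and interpolates them to IR camera frame times. The co-located-origin geometry, field layout, and sensor rates are assumptions for illustration only.

```python
# Minimal normalization sketch: radar detections in spherical sensor
# coordinates are mapped into a shared East-North-Up frame and linearly
# interpolated to IR camera frame timestamps. Geometry and rates are
# illustrative, not prescribed by the topic.
import numpy as np

def radar_to_enu(rng_m, az_rad, el_rad):
    """Spherical radar measurement -> East-North-Up Cartesian (shared origin)."""
    e = rng_m * np.cos(el_rad) * np.sin(az_rad)
    n = rng_m * np.cos(el_rad) * np.cos(az_rad)
    u = rng_m * np.sin(el_rad)
    return np.stack([e, n, u], axis=-1)

def align_to_camera_times(radar_t, radar_xyz, cam_t):
    """Linearly interpolate radar positions to IR camera frame timestamps."""
    return np.stack([np.interp(cam_t, radar_t, radar_xyz[:, i])
                     for i in range(3)], axis=-1)

# Radar reports at ~10 Hz; IR camera frames at 30 Hz (illustrative rates).
radar_t = np.arange(0.0, 1.0, 0.1)
radar_xyz = radar_to_enu(5000 + 20 * radar_t,          # slowly opening range
                         np.full_like(radar_t, 0.3),   # fixed azimuth (rad)
                         np.full_like(radar_t, 0.05))  # fixed elevation (rad)
cam_t = np.arange(0.0, 0.9, 1 / 30)
aligned = align_to_camera_times(radar_t, radar_xyz, cam_t)
print(aligned.shape)  # (27, 3): one radar-derived position per camera frame
```

A fielded system would also need to propagate measurement uncertainty through this transformation and handle asynchronous dropouts, which are omitted here for brevity.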

PHASE I: Design, document, and demonstrate the feasibility of a robust deep learning-based algorithm for a fusion system of radar and MWIR and LWIR cameras of the developer's choice that meets or exceeds the requirements specified in the Description. Identify the technical risk elements in the detection, identification, and tracking algorithm design for a UAV swarm of over 10 UAVs and provide viable risk mitigation strategies. Demonstrate the feasibility of the approach utilizing a commercial off-the-shelf (COTS) computer, with the algorithm performing at a 5 Hz or higher solution rate.
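A minimal sketch of how the 5 Hz solution-rate check might be instrumented on COTS hardware is shown below; run_fusion_update is a hypothetical stand-in for the developer's end-to-end detection/fusion/track pipeline.

```python
# Minimal timing harness sketch: wrap one end-to-end fusion/track update in a
# wall-clock timer and report the achieved solution rate against the 5 Hz goal.
import time

def run_fusion_update(frame):
    time.sleep(0.05)  # placeholder for detection + fusion + track update work

def measure_solution_rate(n_frames=50):
    start = time.perf_counter()
    for k in range(n_frames):
        run_fusion_update(k)
    hz = n_frames / (time.perf_counter() - start)
    print(f"solution rate: {hz:.1f} Hz "
          f"({'meets' if hz >= 5.0 else 'below'} 5 Hz target)")

measure_solution_rate()
```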

The Phase I effort will include prototype plans to be developed under Phase II.

PHASE II: Develop, optimize, demonstrate, and deliver the fusion algorithms developed in Phase I for the selected system of radar, MWIR, and LWIR sensors for this project. The designs will then be modified as necessary to produce a final prototype. Work with the government team to test the algorithms against data collected from candidate sensors relevant to the Navy. Pertinent information will be provided to the awardee if necessary. Collect relevant training and testing data using contractor-provided UAV swarms of interest with at least 10 UAVs to validate their performance claims. Illustrate how the technology can be successfully expanded for detection, identification, and tracking of a UAV swarm of 20 to 50 UAVs against the aforementioned atmospheric and range conditions. Besides the algorithms, deliver all developed tools and data to the government.

Implement algorithm prototypes in a realistic environment that enables thorough testing of the algorithms. Incorporate applications to support testing, for example, operator displays and decision support systems. Demonstrate and validate the algorithm's effectiveness. Deliver an algorithm description document, engineering code, and test cases. Explore and document other potential methodologies identified in Phase I.

PHASE III DUAL USE APPLICATIONS: Include upgrades to the analysis, M&S, and T&E results. Provide mature prototypes of the radar and infrared fusion system to perform broad area search for UAV swarms in a single image.

Phase III goals are:

(a) super-resolution processing to produce a higher resolution (HR) image from a lower resolution (LR) one with increased image content and without inducing artifacts (a sketch follows this goal list);

(b) training models focused upon consistent UAV swarm features such as spatial characteristics while giving low weight to variable characteristics like color and background;

(c) create and adapt training data to build generalized broad area search target detection, classification, and tracking UAV swarm models that perform well using radar and infrared camera imagery and video;

(d) collection of relevant radar and infrared camera data for training and testing machine learning models;

(e) interfacing with C-UAV offensive device;

(f) adapting processing capabilities for onboard edge device demonstration;

(g) adding additional algorithms that optimize usage of the radar and infrared camera fusion system to operate without user input. In addition, pursue civilian applications and additional commercialization opportunities, for example, enhanced surveillance for homeland/border security, identification of camouflaged/hidden targets, and nighttime facial recognition.
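As one hypothetical illustration of goal (a), the sketch below shows an ESPCN-style sub-pixel network that upsamples a low-resolution single-channel (e.g., IR) image by an integer factor. The architecture and scale factor are illustrative assumptions; avoiding artifacts in practice depends on training data and loss design, which are not shown.

```python
# Minimal super-resolution sketch: a sub-pixel (PixelShuffle) network that
# maps a low-resolution 1-channel image to a 2x higher-resolution output.
import torch
import torch.nn as nn

class SubPixelSR(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            # Produce scale^2 channels, then rearrange them into spatial detail.
            nn.Conv2d(32, scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, lr_image):
        return self.body(lr_image)

lr = torch.randn(1, 1, 64, 64)   # hypothetical low-resolution IR frame
hr = SubPixelSR(scale=2)(lr)
print(hr.shape)                  # torch.Size([1, 1, 128, 128])
```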

Regarding commercialization, a potential commercial venue/application could be the commercial maritime market, where improved situational awareness can increase operational safety in the oceans. In addition, this work could be applied to track anomaly detection in other domains, including air traffic management and Space Domain Awareness. Space Domain Awareness is becoming increasingly relevant as proliferated constellations of satellites continue to grow and it becomes more important to track debris and space objects. Commercial companies like Exo-analytics and Kratos provide space object tracking as a service and could provide an avenue for commercialization.

An additional commercialization opportunity is tracking software with increased performance for valuable application in the automotive industry, as many manufacturers are pursuing more automation and aim to fuse the information from a variety of data sources, including onboard cameras and radars.

REFERENCES:

  1. Pringle, C. (2019). US Marines to Test Drone-Killing Laser Weapons. Defense News. https://www.defensenews.com/industry/techwatch/2019/06/19/us-marines-to-test-drone-killing-laser-weapon/
  2. Williams, R. (2015). Tokyo Police are Using Drones with Nets to Catch Other Drones. The Telegraph. https://www.telegraph.co.uk/technology/2016/01/21/tokyo-police-are-using-drones-with-nets-to-catch-other-drones/
  3. Liptak, A. (2017). A US Ally Shot Down a $200 Drone with a $3 Million Patriot Missile. The Verge. https://www.theverge.com/2017/3/16/14944256/patriot-missile-shot-down-consumer-drone-us-military
  4. Koebler, J. (2013, March 14). Report: Chinese drone ‘swarms’ designed to attack American aircraft carriers. U.S. News & World Report. Retrieved March 18, 2021, from https://www.usnews.com/news/articles/2013/03/14/report-chinese-drone-swarms-designed-to-attack-american-aircraft-carriers
  5. Chavez-Garcia, R. O., & Aycard, O. (2015). Multiple sensor fusion and classification for moving object detection and tracking. IEEE Transactions on Intelligent Transportation Systems, 17(2), 525-534. https://doi.org/10.1109/TITS.2015.2479925
  6. van Keuk, G., & Blackman, S. S. (1993). On phased-array radar tracking and parameter control. IEEE Transactions on Aerospace and Electronic Systems, 29(1), 186-194. https://doi.org/10.1109/7.249124

KEYWORDS: Multi-Sensor Fusion; radar; camera; algorithm; deep learning; track accuracy


** TOPIC NOTICE **

The Navy Topic above is an "unofficial" copy from the Navy Topics in the DoD 24.1 SBIR BAA. Please see the official DoD Topic website at www.defensesbirsttr.mil/SBIR-STTR/Opportunities/#announcements for any updates.

The DoD issued its Navy 24.1 SBIR Topics pre-release on November 28, 2023, which opened to receive proposals on January 3, 2024, and now closes February 21, 2024 (12:00pm ET).

Direct Contact with Topic Authors: During the pre-release period (November 28, 2023 through January 2, 2024), proposing firms have an opportunity to directly contact the Technical Point of Contact (TPOC) to ask technical questions about the specific BAA topic. Once DoD begins accepting proposals on January 3, 2024, no further direct contact between proposers and topic authors is allowed unless the Topic Author is responding to a question submitted during the Pre-release period.

SITIS Q&A System: After the pre-release period, until January 24, 2024, at 12:00 PM ET, proposers may submit written questions through SITIS (SBIR/STTR Interactive Topic Information System) at www.dodsbirsttr.mil/topics-app/ by logging in and following instructions. In SITIS, the questioner and respondent remain anonymous but all questions and answers are posted for general viewing.

Topics Search Engine: Visit the DoD Topic Search Tool at www.dodsbirsttr.mil/topics-app/ to find topics by keyword across all DoD Components participating in this BAA.

Help: If you have general questions about the DoD SBIR program, please contact the DoD SBIR Help Desk via email at [email protected]

Topic Q & A

1/17/24  Q. Are the radar and IR camera situated on a ship or on a drone? Also, are they co-located on the same vehicle, or are they distributed across multiple vehicles?
   A. Within your design trade space you may consider placing the radar and IR camera in any location(s) as long as you are compliant with SBIR N241-017 requirements: "First, the system must be able to ingest data from radar and infrared cameras in real time or near-real time, while maintaining data quality and consistency." and "Fourth, the system must be able to provide decision makers with a clear, accurate, and actionable view of the UAV swarm, improving classification confidence..."
1/15/24  Q. 1. There are a number of references to "classification" in the SBIR topic. For example, there are mentions of:
a classification accuracy of more than 90% across the set of UAV swarms when only trained on simulated data;
Fourth, the system must be able to provide decision makers with a clear, accurate, and actionable view of the UAV swarm, improving classification confidence,
Can you please clarify what classification problem you want this topic to address? Are we to classify the UAVs? If so, how many classes, and based on what?

2. The BAA mentions
(d) common atmospheric obscurants reducing the visible transmission coefficient at UAV swarm detection distance down to less than 10% relative to that in vacuum; (e) the UAV swarm appearing a couple of pixels wide in a dim setting;
Are you intending for the proposers to consider a visible light camera as a sensor?
   A. (1) Your classification methodology should consider the following:
The “Complete” Swarm:

1. The Complete Swarm (Centroid/Extent): In many use cases, it may not be necessary to classify all the entities in a swarm but instead to view the swarm as a coherent entity. For such use cases, it may be adequate to classify the centroid of the swarm and to estimate its perimeter or boundary (a sketch follows this answer).
2. Extended Target Classification View: In the classical target classification problem, it is assumed that at most a single measurement is received from any sensor for each "point" target at each time step (called the Mutual Exclusion criterion). Extended objects may give rise to more than one detection per opportunity, where the scattering centers may vary from scan to scan; this is a result of high resolution sensing. At the other extreme, group targets (i.e., several closely spaced targets moving in a coordinated fashion) often will not cause as many detections as there are individual targets in the group, due to limited sensor resolution capabilities. In both cases, classification and data association processes under the Mutual Exclusion criterion are no longer applicable, and the target classification is treated as "extended".
3. The Large-numbers View: Where the number of targets in a swarm is large and there is a need to classify all individual targets in a swarm.
4. The Emergent Behavior view: Depending on the swarm control methods but also dependent on swarm dynamics during a use case, especially for defense use cases, swarms may exhibit some type of emergent behavior, which in fact could be intentional. Classification methods would then be needed to detect emergence and to deal with the changing kinematics.

The Swarm Comprised of Groups:
1. The Grouped-Targets View: It is likely that the UASs of a swarm are employed to carry out a multiplicity of goals. In such cases, the swarm will break up into a set of groups, each having a sub-goal to achieve, and the classification requirement then shifts to classifying such interrelated groups.
2. The Closely-Spaced-Object (CSO) View: In a variety of multi-object classification problems, the objects can be close to each other, raising two resolution concerns: measurement-level resolution and track-level resolution. The tight inter-sUAS spacing of swarm operations can very likely raise these challenges for classification algorithms.
3. The Interacting Objects View: Swarm classification involving leader follower type behavior.

The Individual Elements of a Swarm:
1. Abrupt, Irregular Motion: The light weight of sUAVs and their varied propulsion systems, along with susceptibility to wind, result in abrupt, irregular motions in their flight behaviors, limiting the applicability of many traditional classification methods that depend on inertial effects.
2. The Signal Complexity View: Due to their small size, signal-affecting materials, and other signal-affecting factors, surveillance sensor data for sUAVs will have low signal-to-noise ratio (SNR) levels. Confusion of sUAVs with natural clutter and birds is another complicating factor. This generates an interest in classifying "dim", low-SNR objects.
(2) SBIR N241-017 Phase I Tasks: Design, document, and demonstrate the feasibility of a robust deep learning-based algorithm for a fusion system of radar and MWIR and LWIR cameras of the developer's choice that meets or exceeds the requirements specified in the Description.
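For question (1), a minimal sketch of the "Complete Swarm (Centroid/Extent)" view described above is given below, assuming fused detections are already expressed as positions in a shared Cartesian frame. The synthetic detection positions and the 2-sigma extent convention are illustrative assumptions, not a prescribed method.

```python
# Minimal centroid/extent sketch: treat fused detections as one entity by
# estimating the swarm centroid and a covariance-based bounding radius.
import numpy as np

def swarm_centroid_extent(points):
    """points: (N, 3) fused detection positions in a shared Cartesian frame."""
    centroid = points.mean(axis=0)
    # 2-sigma radius from the largest principal axis of the scatter.
    cov = np.cov(points.T)
    radius = 2.0 * np.sqrt(np.max(np.linalg.eigvalsh(cov)))
    return centroid, radius

# Synthetic swarm of 15 detections near a nominal position (meters).
rng = np.random.default_rng(0)
detections = rng.normal([5000.0, 2000.0, 300.0], [40.0, 25.0, 10.0], (15, 3))
c, r = swarm_centroid_extent(detections)
print(f"centroid {c.round(1)}, ~extent radius {r:.1f} m")
```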
1/12/24  Q. In the BAA you mention the use of phased array radars
To deal with a large swarm of targets, a phased array radar system can steer a narrow beam quickly to identify and target multiple targets in multiple directions simultaneously without having to physically move the system, as opposed to a spinning antenna from a legacy radar system.

Do you expect a solution provided under this topic to deal with the beamforming aspects of phased array radar as well as the fusion of radar and infrared? In other words, are beamforming algorithms part of the solution to be developed under this SBIR topic?
   A. Within your design trade space you may consider decision-level and feature-level data in your sensor fusion algorithms which enables near-real-time, very low latency Automatic Target Acquisition (ATA)/Automatic Target Recognition (ATR).
