Intelligent Capture of Digital Imaging for Systems Engineering, Modeling, and Training

Navy SBIR 23.1 - Topic N231-056
NAVSEA - Naval Sea Systems Command
Pre-release 1/11/23   Opens to accept proposals 2/08/23   Closes 3/08/23 12:00pm ET

N231-056 TITLE: DIGITAL ENGINEERING - Intelligent Capture of Digital Imaging for Systems Engineering, Modeling, and Training

OUSD (R&E) CRITICAL TECHNOLOGY AREA(S): Artificial Intelligence (AI)/Machine Learning (ML); General Warfighting Requirements (GWR)

The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR Parts 120-130, which controls the export and import of defense-related material and services, including export of sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s) in accordance with the Announcement. Offerors are advised foreign nationals proposed to perform on this topic may be restricted due to the technical data under US Export Control Laws.

OBJECTIVE: Develop a high-capacity digital video imagery recording system that provides intelligent selection and efficient organization and storage.

DESCRIPTION: Electro-optic and infrared (EO/IR) imaging sensors (cameras) are widely used for situational awareness, surveillance, and targeting. However, no matter the application, the amount of raw data generated is enormous. This is especially true for large format, high resolution, and extremely high frame rate video sensors. The ability of human operators to monitor, comprehend, and respond to such video streams is therefore taxed, especially over extended periods and during stressful engagements. Therefore, these systems incorporate image processing features that aid the human operator. For some systems, this may be as simple as displaying alerts that guide the operator's attention to specific regions of interest (ROIs) based on predefined thresholds or characteristics. For increasingly sophisticated systems, the operator will be aided by a suite of machine-learning (ML) enabled image processing algorithms that accurately recognize ROIs, identify targets of interest, and generate alerts that take into consideration conditions that are completely new and unknown to the particular operator. To do this efficiently and accurately, these algorithms must be trained on the most extensive and relevant set of digital video image data possible. By necessity, this data must be continuously updated to reflect new conditions and evolving threats. While commercial image data recorders are readily available, "smart" recorders, where every piece of recorded data is necessary, valuable, and readily useful, do not exist, especially with the capacity required for operational use.

A co-requisite need arises from the systems engineering process as applied to the development, test, evaluation, validation, and sustainment of EO/IR imaging sensor systems. Requirements for system performance naturally define parameters such as resolution, dynamic range, and spectral bandwidth. But these specifications, though essential, don't fully capture the ability of a system to help a human operator discern a target in a complex and rapidly changing maritime seascape subject to the caprices of wave and weather. Effective system development and validation, especially for those systems incorporating sophisticated image processing and decision aids, depends on evaluating performance against a wide range of imaging conditions. However, this cannot be done exclusively at outdoor test ranges (which are expensive). A great deal of system development, particularly the design of the image processing subsystem, must be done using captured imagery. In addition, a library of representative video images provides a digital "standard" against which system performance can be defined and evaluated throughout the systems engineering process, starting from requirements definition and system modeling. This is especially true as acquisition programs move to a full model-based systems engineering (MBSE) approach. If properly executed, digital models, based on properly updated and validated data, can be used throughout a system's life, from research and development through operation and sustainment. Finally, a library of imaging data is essential for training personnel and training ML algorithms.

In both cases, the key is a readily accessible library of captured image data, broad in extent, and organized and stored for ready retrieval in a way conducive to the particular task at hand. The Navy, therefore, needs an intelligent recording system (recorder) that captures, sorts, compresses, and stores video image data. The "intelligent" aspect of the solution means that the recorder should operate autonomously, without operator input needed to initiate recording (although the provision for operator/external-directed capture should be included with sufficient buffering to account for operator response time) and without manual curation of the stored video. For simple systems, where ambient conditions can be controlled and the overall scene does not change appreciably, this can be accomplished through simple motion detection. However, for a warship at sea, the scene is constantly evolving such that economical use of storage capacity demands a conditional recording strategy that is highly selective and dynamic.
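As an illustration of the buffered, operator-directed capture described above, the following minimal Python sketch shows a fixed-capacity loop buffer that retains the most recent frames, so a capture triggered after the fact still includes the video preceding the trigger (covering operator response time). The class name, integer "frames," and 5-second window are illustrative assumptions, not topic requirements.

```python
from collections import deque

class LoopBuffer:
    """Fixed-capacity ring buffer of recent frames.

    Retains enough history that an operator-directed capture can include
    video from *before* the trigger arrived. Hypothetical sketch only.
    """

    def __init__(self, capacity_frames):
        self.frames = deque(maxlen=capacity_frames)

    def push(self, frame):
        # Oldest frame is silently evicted once capacity is reached.
        self.frames.append(frame)

    def capture(self):
        # Snapshot the buffered history, e.g., on an operator trigger.
        return list(self.frames)

# 30 fps with a 5-second pre-trigger window -> 150 buffered frames
buf = LoopBuffer(capacity_frames=30 * 5)
for i in range(400):          # simulate ~13 s of incoming frames
    buf.push(i)
clip = buf.capture()
print(len(clip), clip[0], clip[-1])   # 150 250 399
```

A real implementation would hold frames plus synchronized audio and metadata rather than integers, but the eviction discipline is the same.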

In deciding what video to capture and how it is sorted and compressed for storage, the recorder should utilize metadata embedded within the video file. This includes not just the metadata provided organically from the imaging sensor (camera) but also metadata generated by image processing subsystems. In particular, the metadata will be augmented by ML-enabled subsystems that, for example, identify ROIs within the broader image frames. The recorder may also incorporate ML-based algorithms in the decision process if those algorithms facilitate accurate identification of video of interest. Video of interest can include both typical events and atypical events and may, in the limiting case, be captured as a still image photo (SIP). Beyond this, the solution is expected to develop the methodology for identifying which video samples warrant recording. It should also be noted that the metadata associated with each video sample is not fixed, and as image processing subsystems increase in sophistication and system hardware and software is updated, the available metadata may well expand. Therefore, acceptable solutions must be extensible. Likewise, the solution should be agnostic to sensor format, frame rate, resolution, etc., and allow recording of non-compressed Class 0 motion imagery and compressed inputs. The video will be compliant with (and therefore the solution must be compliant with) MIL-STD-2500C National Imagery Transmission Format Standard, Motion Imagery Standards Profile (MISP), Motion Imagery Standards Board (MISB) Standard (ST) 1606, MISB ST 1608, MISB ST 1801, MISB ST 0902, and MISB ST 1402.
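The metadata-driven selection logic described above might be sketched as follows. The field names (`operator_trigger`, `rois`, `anomaly`) and the confidence threshold are hypothetical placeholders for whatever schema the offeror's image processing subsystems actually emit; the point is that the decision consumes metadata, not pixels.

```python
def should_record(meta, roi_score_min=0.6):
    """Decide whether a video sample warrants recording, using only the
    metadata accompanying it (sensor-organic plus ML-augmented fields).
    Field names and threshold are illustrative, not a mandated schema."""
    # Operator/external-directed capture always wins.
    if meta.get("operator_trigger"):
        return True
    # ML-augmented metadata: record if any ROI is confident enough.
    rois = meta.get("rois", [])
    if any(r.get("confidence", 0.0) >= roi_score_min for r in rois):
        return True
    # Atypical events flagged upstream (e.g., sensor degradation).
    return bool(meta.get("anomaly"))

print(should_record({"rois": [{"confidence": 0.9}]}))   # True
print(should_record({"rois": [{"confidence": 0.2}]}))   # False
print(should_record({"anomaly": True}))                 # True
```

Because the available metadata is expected to expand over time, an extensible solution would treat such rules as data-driven configuration rather than hard-coded logic.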

The recording system should include a loop recording component, an intelligent image processor, and a non-volatile long-term storage component with removable media. The loop recording component will receive synchronized imagery, audio, and augmented metadata from the EO/IR sensor (and the sensor image processing subsystems) in native resolution and framerate for a minimum of two hours (with a goal of 24 hours). The solution will select, capture, and sort video or SIP samples for compression, organization, and storage. The loop recorder and storage component, though distinct and separable, shall be designed for performance as a complete system. The intelligent image processor may form a distinct component or be distributed between the loop recorder and storage components as necessary.
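To give a sense of scale for the loop recording component, a rough capacity calculation for uncompressed video held at native resolution and frame rate (the 1080p/30 fps/3 bytes-per-pixel figures are chosen purely for illustration; actual sensor formats will differ):

```python
def loop_capacity_bytes(width, height, bytes_per_pixel, fps, hours):
    """Uncompressed storage needed to hold `hours` of video in the loop
    buffer. All parameters are illustrative; real sensors vary widely."""
    return width * height * bytes_per_pixel * fps * hours * 3600

two_hr = loop_capacity_bytes(1920, 1080, 3, 30, 2)    # minimum requirement
day    = loop_capacity_bytes(1920, 1080, 3, 30, 24)   # stated goal
print(f"{two_hr / 1e12:.2f} TB")   # 1.34 TB
print(f"{day / 1e12:.2f} TB")      # 16.12 TB
```

Even at this modest format, the 24-hour goal implies on the order of 16 TB per channel, which is why intelligent selection and compression downstream of the loop recorder are central to the topic.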

Examples of events that would trigger automatic recording include suboptimal operation of an artificial intelligence/machine learning (AI/ML) subsystem, sensor degradation or failure, incidents at sea, detection of targets of interest or threats, and engagements. However, the solution will also include a capability for the operator to direct recording manually. The non-volatile storage component will automatically store and curate video and SIP samples with the corresponding metadata. In addition to automatic content curation, the storage component shall use the available storage efficiently to maximize the recording capacity with minimal operator input. Solutions may utilize adaptive compression, foveated imaging, bandwidth prioritization techniques, or other methods to control the quality of video compression so that semantically meaningful elements of the scene are encoded with the highest fidelity, while background elements are allocated fewer bits in the transmitted representation. The amount of compression applied to ROIs must be variable, ranging from no compression at all to full compression as applied to the entire image. The amount of compression applied to ROIs will be determined by presets, cues from the image processing subsystem, or dynamically determined by the recorder based on the metadata available. The efficiency of the storage component will be based on the size of the stored content compared to the original (uncompressed) video and SIP content selected for recording. The impact of image compression on ROIs should be minimal as determined by analysis with an image quality evaluator. The image quality evaluator shall be proposed as part of the solution.
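The ROI-variable compression described above can be illustrated with a simple per-tile quality map, in which tiles overlapping an ROI are assigned near-lossless quality and background tiles a lower quality. The tile size, the 0-100 quality scale, and the (x, y, w, h) ROI format are assumptions made for the sketch.

```python
def build_quality_map(width, height, block, rois, bg_q=25, roi_q=95):
    """Assign a compression-quality value to each block x block tile:
    tiles overlapping an ROI get roi_q (near-lossless), all others get
    bg_q. Tile size, quality scale, and ROI format are illustrative."""
    cols, rows = width // block, height // block
    qmap = [[bg_q] * cols for _ in range(rows)]
    for (x, y, w, h) in rois:
        for r in range(y // block, min(rows, (y + h - 1) // block + 1)):
            for c in range(x // block, min(cols, (x + w - 1) // block + 1)):
                qmap[r][c] = roi_q
    return qmap

# A 160x96 frame in 16-pixel tiles, with one 32x32 ROI at (40, 20)
qmap = build_quality_map(160, 96, 16, rois=[(40, 20, 32, 32)])
print(sum(row.count(95) for row in qmap))  # 9 tiles kept at high quality
```

Presets, image-processing cues, or recorder-side logic would then map metadata onto the `bg_q`/`roi_q` values, and the proposed image quality evaluator would score the result against the uncompressed ROI content.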

The sensor subsystems that will provide the augmented metadata are currently in development and may include autonomous detection and tracking, aided target recognition, image fusion, turbulence mitigation, and super-resolution. These sensor subsystems are not considered part of the intelligent recorder system. For maximum utility, the intelligent recorder shall function in two distinct modes. The first mode is the operation of the loop recording component in real-time in conjunction with the intelligent image processor and long-term storage component. In this mode, the loop recorder acts as an interface and buffer (if buffering is needed). Alternately, the loop recorder shall operate separately and collect imagery for later ingestion by the intelligent image processor either by intermittent connection to the loop recorder or by physical transfer of removable storage media. In this way, one intelligent recorder system can service multiple imaging sensors (with multiple loop recorders). By extension, the intelligent recorder system should be compatible with previously recorded imagery data from other sources. Examples include image data from historical collections and imagery from systems that already include their own recording system. In this latter case, it is understood that the performance of the intelligent recorder system may not be optimal.
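The two operating modes might be organized as in the following sketch, where the same selection/compression/storage pipeline serves both a real-time frame stream and batch ingestion of previously recorded samples. All class, method, and field names are illustrative.

```python
class IntelligentRecorder:
    """Sketch of the two distinct modes: real-time operation (the loop
    recorder feeds the image processor and storage directly) and offline
    ingestion (imagery collected earlier, on removable media, or from
    another source is processed as a batch). Names are hypothetical."""

    def __init__(self, select, compress, store):
        self.select, self.compress, self.store = select, compress, store

    def run_realtime(self, frame_stream):
        # Mode 1: loop recorder acts as interface/buffer; each selected
        # sample is compressed and stored as it arrives.
        for frame, meta in frame_stream:
            if self.select(meta):
                self.store(self.compress(frame, meta))

    def ingest(self, recorded_samples):
        # Mode 2: batch ingestion from a detached loop recorder,
        # removable media, or a legacy/historical collection.
        self.run_realtime(iter(recorded_samples))

stored = []
rec = IntelligentRecorder(
    select=lambda m: m.get("roi", False),   # stand-in selection rule
    compress=lambda f, m: f,                # stand-in (identity) codec
    store=stored.append,
)
rec.ingest([("f1", {"roi": True}), ("f2", {}), ("f3", {"roi": True})])
print(stored)   # ['f1', 'f3']
```

Factoring the pipeline this way is one route to servicing multiple loop recorders, and to accepting historical imagery whose metadata may be sparser than that from current sensor subsystems.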

Prototypes developed under this effort are not intended for deployment and need not be hardened according to environmental shipboard standards. Size, weight, and power (SWaP) of the prototype are not constrained; however, the prototype is intended for benchtop use. The solution should be fundamentally scalable, but the prototype should include and demonstrate the capability to handle video from four input channels simultaneously. The Government cannot guarantee that recorded data or image processing algorithms can be made available, so the proposed approach should anticipate the need to capture imagery from representative sensors (visible and infrared) and process the data through simple algorithms that emulate the sensor image processing subsystems through the generation of additional metadata. In this way, the solution and the imaging data used to demonstrate it should remain unclassified. However, the solution should include no constraint or feature that precludes its use on classified systems. For example, the loop recorder and storage components should allow for future encryption of the storage media. Proprietary video or SIP file formats shall not be used.
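The four-channel requirement amounts to fanning in frames from four concurrent sources into a single processing path. A minimal threading sketch (channel count is from the topic; frame contents and queue-based fan-in are placeholder assumptions):

```python
import threading
import queue

def channel_reader(ch_id, frames, out_q):
    # Hypothetical per-channel reader: tags each frame with its channel.
    for f in frames:
        out_q.put((ch_id, f))

out_q = queue.Queue()
threads = [
    threading.Thread(target=channel_reader, args=(ch, range(3), out_q))
    for ch in range(4)   # four simultaneous input channels, per the topic
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(out_q.qsize())   # 12 frames collected across 4 channels
```

A deployable system would replace the toy readers with sensor interfaces, but the fan-in structure scales naturally beyond four channels, consistent with the scalability expectation.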

Two prototype intelligent recording systems shall be delivered under the effort along with peripheral hardware and software necessary to offload the captured data, access it, examine it, and prepare it for permanent transfer and storage on a Government-owned network. User interface software shall also be delivered that enables efficient management of the system. This includes manual (operator-directed) recording and SIP capture; search (by time, location, source, keywords, and other metadata elements); playback; and any editing, compression, and management tools needed to control settings (directly or remotely), file formats, and organization of the image library. At the conclusion of the effort, prototypes and peripherals will be delivered to Naval Surface Warfare Center (NSWC), Crane Division, Crane, Indiana.

Work produced in Phase II may become classified. Note: The prospective contractor(s) must be U.S. Owned and Operated with no Foreign Influence as defined by DoD 5220.22-M, National Industrial Security Program Operating Manual, unless acceptable mitigating procedures can and have been implemented and approved by the Defense Counterintelligence and Security Agency (DCSA), formerly the Defense Security Service (DSS). The selected contractor must be able to acquire and maintain a secret-level facility clearance and Personnel Security Clearances in order to perform on advanced phases of this contract, as set forth by DSS and NAVSEA, and to gain access to classified information pertaining to the national defense of the United States and its allies; this will be an inherent requirement. The selected company will be required to safeguard classified material IAW DoD 5220.22-M during the advanced phases of this contract.

All DoD Information Systems (IS) and Platform Information Technology (PIT) systems will be categorized in accordance with Committee on National Security Systems Instruction (CNSSI) 1253, implemented using a corresponding set of security controls from National Institute of Standards and Technology (NIST) Special Publication (SP) 800-53, and evaluated using assessment procedures from NIST SP 800-53A and DoD-specific (KS) (Information Assurance Technical Authority (IATA) Standards and Tools).

The Contractor shall support the Assessment and Authorization (A&A) of the system. The Contractor shall support the government's efforts to obtain an Authorization to Operate (ATO) in accordance with DoDI 8500.01 Cybersecurity; DoDI 8510.01 Risk Management Framework (RMF) for DoD Information Technology (IT); NIST SP 800-53; NAVSEA 9400.2-M (October 2016); and business rules set by the NAVSEA Echelon II and the Functional Authorizing Official (FAO). The Contractor shall design the tool to meet the proposed RMF Security Controls necessary to obtain A&A. The Contractor shall provide technical support and design material for RMF assessment and authorization in accordance with NAVSEA Instruction 9400.2-M by delivering objective quality evidence (OQE) and documentation to support assessment and authorization package development.

Contractor Information Systems Security Requirements. The Contractor shall implement the security requirements set forth in the clause entitled DFARS 252.204-7012, "Safeguarding Covered Defense Information and Cyber Incident Reporting," and National Institute of Standards and Technology (NIST) Special Publication 800-171.

PHASE I: Develop a concept for an intelligent recorder system that meets the objectives stated in the Description. Define the video event identification methodology that triggers recording and demonstrate the feasibility of the concept in meeting the Navy's need. Analyze the effect on image quality and recorder performance from the techniques proposed for image compression and storage. Feasibility shall be demonstrated by a combination of analysis, modeling, and simulation as stated in the Description. The Phase I Option, if exercised, will include the initial design specifications and capabilities description to build a prototype solution in Phase II.

PHASE II: Develop and demonstrate a prototype intelligent recorder system based on the results of Phase I. Demonstration of the intelligent recorder system shall be accomplished through production and test of a prototype in a representative (but protected) environment (for example, from a pier) or by use of raw collected imagery data and laboratory demonstration. At the conclusion of Phase II, two final prototypes along with the peripheral equipment and software necessary for their operation shall be delivered to NSWC Crane along with complete test data, updated specifications, interface documents, capabilities description, and sample image files (video and SIP) recorded by the prototypes.

It is probable that the work under this effort will be classified under Phase II (see Description section for details).

PHASE III DUAL USE APPLICATIONS: Support the Navy in transitioning the technology for Government use. Develop deployable designs with specific interface and storage requirements. Harden the designs for shipboard use. Establish hardware and software configuration baselines, produce production-level documentation, and transition the intelligent recorder into production. Assist the Government in the integration of the intelligent recorder with deployed imaging sensor systems.

The technology resulting from this effort is anticipated to have broad military application. In addition, there are numerous law enforcement and security applications. Scientific applications include the recording of natural events such as wildlife behavior, weather, and astronomical observations.

REFERENCES:

1. Jain, Akshat, et al. "Smart surveillance monitoring system." 2017 International Conference on Data Management, Analytics and Innovation (ICDMAI). IEEE, 2017. https://ieeexplore.ieee.org/abstract/document/8073523/

2. Shao, Zhenfeng, et al. "Smart monitoring cameras driven intelligent processing to big surveillance video data." IEEE Transactions on Big Data 4.1 (2017): 105-116. https://ieeexplore.ieee.org/abstract/document/7949067/

3. Sengar, Sandeep Singh, and Susanta Mukhopadhyay. "Motion segmentation-based surveillance video compression using adaptive particle swarm optimization." Neural Computing and Applications (2019): 1-15. https://link.springer.com/article/10.1007/s00521-019-04635-6

4. Bagdanov, Andrew D., et al. "Adaptive video compression for video surveillance applications." 2011 IEEE International Symposium on Multimedia. IEEE, 2011. https://ieeexplore.ieee.org/abstract/document/6123345/

 

KEYWORDS: Intelligent Recorder; Machine Learning; Image Processing; Imaging Sensor; Video; Metadata

TPOC-1: Marcin Malec

Phone: (812) 854-8327

Email: [email protected]

 

TPOC-2: Amy Zumbrun 

Phone: (812) 854-3041

Email: [email protected]


** TOPIC NOTICE **

The Navy Topic above is an "unofficial" copy from the Navy Topics in the DoD 23.1 SBIR BAA. Please see the official DoD Topic website at www.defensesbirsttr.mil/SBIR-STTR/Opportunities/#announcements for any updates.

The DoD issued its Navy 23.1 SBIR Topics pre-release on January 11, 2023 which opens to receive proposals on February 8, 2023, and closes March 8, 2023 (12:00pm ET).

Direct Contact with Topic Authors: During the pre-release period (January 11, 2023 through February 7, 2023), proposing firms have an opportunity to directly contact the Technical Point of Contact (TPOC) to ask technical questions about the specific BAA topic. Once DoD begins accepting proposals on February 8, 2023, no further direct contact between proposers and topic authors is allowed unless the Topic Author is responding to a question submitted during the pre-release period.

SITIS Q&A System: After the pre-release period, and until February 22, 2023, (at 12:00 PM ET), proposers may submit written questions through SITIS (SBIR/STTR Interactive Topic Information System) at www.dodsbirsttr.mil/topics-app/, login and follow instructions. In SITIS, the questioner and respondent remain anonymous but all questions and answers are posted for general viewing.

Topics Search Engine: Visit the DoD Topic Search Tool at www.dodsbirsttr.mil/topics-app/ to find topics by keyword across all DoD Components participating in this BAA.

Help: If you have general questions about the DoD SBIR program, please contact the DoD SBIR Help Desk via email at [email protected]
