Real-time Computational Enhancement of Video Streams

Navy STTR 24.A - Topic N24A-T005
NAVAIR - Naval Air Systems Command
Pre-release 11/29/23   Opens to accept proposals 1/03/24   Now Closes 2/21/24 12:00pm ET

N24A-T005 TITLE: Real-time Computational Enhancement of Video Streams

OUSD (R&E) CRITICAL TECHNOLOGY AREA(S): Sustainment; Trusted AI and Autonomy

OBJECTIVE: Develop and implement efficient computational algorithms for fast, low-latency deblurring, denoising, and super-resolution of video streams under dynamic conditions.

DESCRIPTION: Intelligence, surveillance, and reconnaissance (ISR) and automatic target acquisition (ATA) systems are continually challenged to image at longer distances. Under long-range conditions, atmospheric disturbances, platform motion, and object motion often introduce blur and other artifacts into the imagery of video streams from ISR missions, limiting effectiveness. With advances in computational hardware and algorithms, computational methods can be applied to video streams to passively enhance imagery in real time, removing blur and artifacts and in some cases providing super-resolution. These approaches can potentially be applied to any video stream, whether live or recorded. Blind deconvolution techniques are a promising approach [Refs 1-5]. They can automatically estimate compensation parameters such as the point spread function (PSF), which can then be applied to the imagery for enhancement, for example an increase in signal-to-noise ratio (SNR). Implementation on efficient, scalable single- or multi-GPU or other processors can ensure real-time operation with minimal latency.
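To make the blind-deconvolution idea above concrete, the sketch below (a minimal illustration, not a method prescribed by this topic) assumes an isotropic Gaussian PSF and simply grid-searches a few candidate PSF widths, deconvolving each candidate with Richardson-Lucy from scikit-image and keeping the result with the highest sharpness score; the function names (gaussian_psf, sharpness, blind_deblur) and the candidate widths are illustrative assumptions.

import numpy as np
from scipy.ndimage import laplace
from skimage import restoration

def gaussian_psf(sigma, size=15):
    # Isotropic Gaussian PSF, normalized to unit sum (assumed blur model).
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def sharpness(img):
    # Variance of the Laplacian: larger values indicate sharper edges.
    return laplace(img).var()

def blind_deblur(frame, sigmas=(0.8, 1.2, 1.6, 2.0, 2.4)):
    # frame: 2-D float array scaled to [0, 1].
    # Try each candidate PSF width, deconvolve, and keep the sharpest result.
    best_score, best_frame, best_sigma = sharpness(frame), frame, None
    for sigma in sigmas:
        restored = restoration.richardson_lucy(frame, gaussian_psf(sigma), num_iter=20)
        score = sharpness(restored)
        if score > best_score:
            best_score, best_frame, best_sigma = score, restored, sigma
    return best_frame, best_sigma

A per-frame grid search of this kind is far too slow for the 30-60 Hz rates discussed below; practical approaches estimate the PSF jointly with the image (e.g., multiframe blind deconvolution [Refs 1-5]) and run on GPU-class hardware.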

The Navy requires a real-time computational algorithm and implementation for passive enhancement of video streams under dynamic conditions (e.g., defective pixels, non-uniform backgrounds, clipped objects, dropped frames, abrupt scene changes, significant haze, strong glint, and saturated pixels). Sustained computation rates for imagery of 1 megapixel or more per frame should be 30 Hz (threshold) and 60 Hz (target), with latency of 200 ms (threshold) and 50 ms (target). Power consumption should be less than 150 W (threshold) and 50 W (target) in a compact, reliable compute module. This low size, weight, and power (SWaP) enables integration on mobile platforms and other SWaP-constrained vehicles. The computational algorithm and implementation must be automatic, providing low-latency simultaneous deblurring (100% reduction in PSF width in some cases), denoising (50% increase in SNR in some cases), and in some cases super-resolution, contrast enhancement, and glint suppression of video. The algorithms should show that in some cases deblurring reduces PSF width by 100%, and denoising increases SNR by 100%. The algorithms and hardware should ideally be future-proof and scalable to a range of mission scenarios. Trade-offs in image quality, processing speed, and hardware SWaP should be documented. Minimally, the system should include:
(a) a software framework for applying real-time video enhancement while receiving and sending real-time video via network-based message-passing protocols and recorded formats;
(b) reading and writing selected image and video file formats;
(c) support for the simultaneous processing of multiple image streams;
(d) a multi-stream co-aligning (registration) technique for estimating arbitrarily large angles of rotation and geometric scaling factors (i.e., zoom-in or zoom-out), as sketched after this list;
(e) a Graphical User Interface (GUI) that displays compensated imagery in real time and enables operators of all experience levels to easily use and configure the system;
(f) techniques for mitigating the adverse effects of defects in the raw imagery and of objects extending beyond the field of view;
(g) techniques for automating the selection of near-optimal compensation parameters;
(h) online Multiframe Blind Deconvolution (MFBD), contrast and feature enhancement, and super-resolution of imagery;
(i) restoration and real-time display of turbulence-degraded imagery from live sensor feeds; and
(j) tuning of parameters and configuration for sustained, autonomous operation.
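For requirement (d), one conventional way to estimate arbitrarily large rotation angles and zoom factors between streams is log-polar (Fourier-Mellin-style) registration. The sketch below is a simplified illustration under stated assumptions, not the required technique: it uses scikit-image's warp_polar and phase_cross_correlation, assumes the rotation and zoom are centered on the image center, and the function name estimate_rotation_and_scale is hypothetical.

import numpy as np
from skimage.registration import phase_cross_correlation
from skimage.transform import warp_polar

def estimate_rotation_and_scale(reference, moving, radius=None):
    # reference, moving: 2-D grayscale frames from two streams.
    # Returns (rotation in degrees, relative scale factor).
    if radius is None:
        radius = min(reference.shape[:2]) // 2
    ref_lp = warp_polar(reference, radius=radius, scaling="log")
    mov_lp = warp_polar(moving, radius=radius, scaling="log")

    # In log-polar coordinates, rotation becomes a row shift and
    # isotropic scaling becomes a column shift.
    shift, _, _ = phase_cross_correlation(ref_lp, mov_lp, upsample_factor=20)
    row_shift, col_shift = shift[:2]

    angle_deg = (360.0 / ref_lp.shape[0]) * row_shift
    klog = ref_lp.shape[1] / np.log(radius)   # log-radius sampling factor
    scale = np.exp(col_shift / klog)          # 1.0 means no zoom
    # Sign/direction conventions of the recovered angle and scale depend on
    # phase_cross_correlation and should be verified on known test transforms.
    return angle_deg, scale

Handling translation together with rotation and scale typically requires applying the log-polar step to Fourier magnitude spectra (the full Fourier-Mellin approach) rather than to the frames directly.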

PHASE I: Develop a real-time video enhancement approach for imaging with tactical optical ISR systems. Perform a feasibility analysis of hardware and software implementations of the system, including a study of the types of video streams that can be enhanced, under what conditions, and for what types of blur, noise, and other artifacts. Develop an initial design specification for a prototype system to be fabricated and tested in Phase II. The Phase I effort will include prototype plans to be developed under Phase II.

PHASE II: Design, fabricate, and test the prototype system. Demonstrate performance and SWaP that meet the above specifications using Government-furnished and/or synthetically generated datasets in long-distance air, land, and/or sea imaging applications, either as part of a real-time image enhancement system or as an advanced pre-processing filter for image intelligence analyses.
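Because Phase II allows synthetically generated datasets, a minimal sketch of how degraded test frames and an SNR figure of merit might be produced is given below; the Gaussian blur-plus-noise degradation model, the parameter values, and the function names (degrade_frame, snr_db) are illustrative assumptions rather than Government-specified definitions.

import numpy as np
from scipy.ndimage import gaussian_filter

def degrade_frame(clean, blur_sigma=2.0, noise_sigma=0.02, rng=None):
    # clean: 2-D float array scaled to [0, 1].
    # Apply a Gaussian blur standing in for an optical/atmospheric PSF,
    # then add Gaussian sensor noise.
    rng = np.random.default_rng() if rng is None else rng
    blurred = gaussian_filter(clean, sigma=blur_sigma)
    noisy = blurred + rng.normal(0.0, noise_sigma, size=clean.shape)
    return np.clip(noisy, 0.0, 1.0)

def snr_db(reference, test):
    # SNR of `test` against a noise-free reference, in decibels.
    noise = test - reference
    return 10.0 * np.log10(np.sum(reference**2) / np.sum(noise**2))

Comparing snr_db(clean, degraded) with snr_db(clean, enhanced) is one simple way to document the SNR-improvement trade-offs requested above.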

PHASE III DUAL USE APPLICATIONS: Finalize the software with appropriate SWaP-C and form factor based on human factors testing. Determine the best integration path as a capability upgrade to existing or future systems, including the software and interfaces required to meet software interoperability protocols for integration into candidate systems identified by the Navy.

Military Application: Surveillance, Technical Intelligence, Zoom Imaging Systems.

Commercial Application: Security and police surveillance attempting to identify threats, and medical imaging procedures.

Transition the STTR-developed products to both DoD and commercial markets, targeting applications where more compact, lighter-weight hardware provides an order-of-magnitude improvement over current technology.

REFERENCES:

  1. Levin, A., Weiss, Y., Durand, F., & Freeman, W. T. (2009, June). Understanding and evaluating blind deconvolution algorithms. In 2009 IEEE Conference on Computer Vision and Pattern Recognition (pp. 1964-1971). IEEE. https://ieeexplore.ieee.org/document/5206815
  2. Wang, R., & Tao, D. (2014). Recent progress in image deblurring. arXiv preprint arXiv:1409.6838. https://arxiv.org/abs/1409.6838
  3. Nasrollahi, K., & Moeslund, T. B. (2014). Super-resolution: A comprehensive survey. Machine Vision and Applications, 25(6), 1423-1468. https://www.proquest.com/docview/2262639045/30B577F3FB10454APQ/1?accountid=28165
  4. Archer, G. E., Bos, J. P., & Roggemann, M. C. (2014). Comparison of bispectrum, multiframe blind deconvolution and hybrid bispectrum-multiframe blind deconvolution image reconstruction techniques for anisoplanatic, long horizontal-path imaging. Optical Engineering, 53(4), 043109. https://doi.org/10.1117/1.OE.53.4.043109
  5. Koh, J., Lee, J., & Yoon, S. (2021). Single-image deblurring with neural networks: A comparative survey. Computer Vision and Image Understanding, 203, 103134. https://doi.org/10.1016/j.cviu.2020.103134

KEYWORDS: Video; Imagery; Processing; Super-Resolution; Deblurring; Denoising; Turbulence


** TOPIC NOTICE **

The Navy Topic above is an "unofficial" copy from the Navy Topics in the DoD 24.A STTR BAA. Please see the official DoD Topic website at www.defensesbirsttr.mil/SBIR-STTR/Opportunities/#announcements for any updates.

The DoD issued its Navy 24.A STTR Topics pre-release on November 28, 2023, which opened to receive proposals on January 3, 2024, and now closes February 21, 2024 (12:00 PM ET).

Direct Contact with Topic Authors: During the pre-release period (November 28, 2023 through January 2, 2024), proposing firms have an opportunity to directly contact the Technical Point of Contact (TPOC) to ask technical questions about the specific BAA topic. Once DoD begins accepting proposals on January 3, 2024, no further direct contact between proposers and topic authors is allowed unless the Topic Author is responding to a question submitted during the pre-release period.

SITIS Q&A System: After the pre-release period, until January 24, 2024, at 12:00 PM ET, proposers may submit written questions through SITIS (SBIR/STTR Interactive Topic Information System) at www.dodsbirsttr.mil/topics-app/ by logging in and following the instructions. In SITIS, the questioner and respondent remain anonymous, but all questions and answers are posted for general viewing.

Topics Search Engine: Visit the DoD Topic Search Tool at www.dodsbirsttr.mil/topics-app/ to find topics by keyword across all DoD Components participating in this BAA.

Help: If you have general questions about the DoD SBIR program, please contact the DoD SBIR Help Desk via email at [email protected]

Topic Q & A

1/17/24  Q. Can you please clarify what performance target is intended by "100% reduction in PSF width"?
Typically, "100% reduction" would indicate a PSF width of zero, whereas a "100% increase" often indicates doubling.
Could "100% reduction" be meant to indicate "1/2 of the original PSF width"? Thank you.
   A. The STTR BAA topic N24A-T005 description states, "The computational algorithm and implementation must be automatic, providing low-latency simultaneous deblurring (100% reduction in PSF width in some cases), denoising (50% increase in SNR in some cases), and in some cases super-resolution, contrast enhancement, and glint suppression of video. The algorithms should show that in some cases deblurring reduces PSF width by 100%, and denoising increases SNR by 100%." With that said, the answer to your question is yes: "100% reduction" does mean "1/2 of the original PSF width". See https://zeiss-campus.magnet.fsu.edu/articles/basics/psf.html for a more detailed explanation of where the full width at half-maximum (FWHM) falls on an intensity distribution graph. The FWHM is the distance between the points where the intensity is half of the maximum intensity. The 100% reduction is measured from that FWHM, and what remains is the upper half of the point spread function on the intensity distribution graph.
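As a worked illustration of the FWHM convention in the answer above (assuming a Gaussian PSF purely for the arithmetic; the function name gaussian_fwhm is hypothetical):

import numpy as np

def gaussian_fwhm(sigma):
    # FWHM of a Gaussian PSF: the distance between the two points where
    # the intensity falls to half of its peak value.
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma

before = gaussian_fwhm(2.0)   # blurred PSF, FWHM ~ 4.71
after = gaussian_fwhm(1.0)    # deblurred PSF, FWHM ~ 2.35
print(after / before)         # 0.5: the "100% reduction" leaves half the original width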
1/9/24  Q. Would you provide a few image examples or some short video sections for us to understand what kind of image datasets we need to handle?
   A. No. The Government will NOT provide image examples or short video sections for your use. You may propose and use your own video in Phase I to:
1). demonstrate your algorithm(s) on different types of video streams and under different video-degradation conditions; and
2). demonstrate that your algorithm(s), operating on your hardware and software operating system, can enhance video exhibiting various types of blur, noise, and other artifacts.
