Attention-Based Vision for Autonomous Vehicles
Navy SBIR FY2014.1
Sol No.: Navy SBIR FY2014.1
Topic No.: N141-076
Topic Title: Attention-Based Vision for Autonomous Vehicles
Proposal No.: N141-076-0801
Firm: Novateur Research Solutions LLC, 20452 Scioto Terrace, Ashburn, Virginia 20147
Contact: David Tolliver
Phone: (412) 983-3558
Web Site: http://www.novateurresearch.com/
Abstract:
This SBIR Phase I project will demonstrate the feasibility and effectiveness of novel biologically inspired visual attention models for on-board exploitation of sensor data streams to enable autonomous missions in complex unknown environments. The key innovation in this effort is a principled and biologically plausible computational model for visual attention that integrates both bottom-up (unsupervised) and top-down (context- and task-driven) saliency to guide attention. The proposed model enables onboard UGV perception systems to perform context-based and task-driven identification and temporally consistent labeling of objects of interest in real time. The attention model will enable UGV systems to i) efficiently process incoming sensory data, ii) identify salient features in data streams, iii) automatically learn task relevance from observations, iv) adapt to new scenarios, and v) use spatio-temporal context and task relevance to improve recognition performance in complex environments. The Phase I effort will include: development of the proposed attention framework, solution of UGV problems using the models, quantitative and qualitative evaluation of the proposed technologies, and demonstration of proof of concept using real-world data from multiple use cases. The project will benefit from UCSD's expertise in visual attention and salience and Novateur Research Solutions' experience in sensor exploitation and onboard processing.
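To make the core idea concrete, the sketch below illustrates one simple way to fuse an unsupervised bottom-up saliency map with a task-driven top-down prior to select an attention fixation in a single frame. It is not the proposal's actual model: the center-surround operator, the Gaussian task prior, the blending weight, and all function names are illustrative assumptions.

# Minimal sketch (assumed, not the proposed model): blend bottom-up
# center-surround saliency with a top-down, task-driven spatial prior.
import numpy as np
from scipy.ndimage import gaussian_filter

def bottom_up_saliency(frame):
    """Unsupervised center-surround saliency on a grayscale frame in [0, 1]."""
    center = gaussian_filter(frame, sigma=2)
    surround = gaussian_filter(frame, sigma=16)
    sal = np.abs(center - surround)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)

def top_down_prior(shape, target_xy, spread=40.0):
    """Task-driven prior: a Gaussian bump around an expected target location
    (e.g., predicted from spatio-temporal context); purely illustrative."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cx, cy = target_xy
    prior = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * spread ** 2))
    return prior / prior.max()

def attention_map(frame, target_xy, w_bottom_up=0.5):
    """Blend bottom-up and top-down maps; higher values attract attention."""
    bu = bottom_up_saliency(frame)
    td = top_down_prior(frame.shape, target_xy)
    return w_bottom_up * bu + (1.0 - w_bottom_up) * td

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.random((240, 320))        # stand-in for a sensor frame
    frame[100:120, 200:230] += 1.0        # a bright "object of interest"
    att = attention_map(np.clip(frame, 0, 1), target_xy=(210, 110))
    fix_y, fix_x = np.unravel_index(np.argmax(att), att.shape)
    print(f"next fixation at (x={fix_x}, y={fix_y})")

In an onboard system, the fixed blending weight would be replaced by learned task relevance, and the prior would be updated over time for temporally consistent labeling; the fusion step itself is the part this sketch is meant to convey.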
Benefits:
Automated UxV platforms have proven to be critical assets for intelligence, surveillance, and reconnaissance in remote or hostile environments. However, the utility and value of these assets depend significantly on their ability to perform well in unknown scenarios. This requires a system to automatically identify mission-relevant features from sensor streams and exploit those features to adapt to its current environment. Furthermore, the system must do so while operating within limited size, weight, and power budgets. Hence, methods that enable onboard, real-time exploitation of sensor streams are critical to truly intelligent autonomous systems that can operate in complex unknown environments. The proposed technologies provide a computational visual attention model that enables such systems and has wide-ranging applications, including:
Object detection and identification
Visual search
Autonomous UGV navigation and map building
Terrain classification
Obstacle detection and avoidance
Navigation in GPS-denied environments
Automated exploitation of mobile sensors
Automated intelligence, surveillance, and reconnaissance
The proposed technologies advance the state of the art in the DoD S&T emphasis area of Autonomy: the visual attention framework will result in more capable and robust autonomous systems for military and civilian applications.