Machine Learning Detection of Source Code Vulnerability

Navy SBIR 20.3 - Topic N203-151

Naval Information Warfare Systems Command (NAVWAR) - Mr. Shadi Azoum shadi.azoum@navy.mil

Opens: September 23, 2020 - Closes: October 22, 2020 (12:00 pm ET)

 

 

N203-151        TITLE: Machine Learning Detection of Source Code Vulnerability

 

RT&L FOCUS AREA(S): Artificial Intelligence/ Machine Learning, General Warfighting Requirements

TECHNOLOGY AREA(S): Information Systems

 

OBJECTIVE: Develop and demonstrate a software capability that utilizes machine-learning techniques to scan source code for its dependencies, trains cataloging algorithms on code dependencies and the detection of known vulnerabilities, and scales to support polyglot architectures.

 

DESCRIPTION: Nearly every software library in the world depends on some other library, and identifying security vulnerabilities across the entire corpus of these dependencies is an extremely challenging endeavor. As part of a Development, Security, and Operations (DevSecOps) process, this identification is typically accomplished using the following methods:

(a) Static code analyzers. These can be useful but are technically challenging to implement in large and complex legacy environments. They typically require setting up a build environment for each version in order to build call and control-flow graphs, and they are language-specific and thus do not work well when multiple versions of software use different dependency versions.

(b) Dynamic code review. This is extremely costly to implement, as it requires a complete setup of an isolated environment, including all applications and databases a project interacts with.

(c) Decompilation followed by static code analysis. This is again dependent on software version and is specific to the way machine code is generated.
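To make the polyglot dependency problem concrete, the toy sketch below extracts declared dependencies from two ecosystems into a single catalog. The file contents and the two supported formats are illustrative assumptions only; a real scanner would walk an entire repository and handle many more manifest formats.

```python
import json
import re

# Toy manifests standing in for a polyglot project tree (illustrative only).
REQUIREMENTS_TXT = """\
requests==2.19.1
urllib3>=1.24
"""

PACKAGE_JSON = json.dumps({
    "name": "demo-app",
    "dependencies": {"lodash": "4.17.11", "express": "4.16.0"},
})

def parse_requirements(text):
    """Extract (name, version-spec) pairs from a pip-style requirements file."""
    deps = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        m = re.match(r"([A-Za-z0-9_.-]+)\s*(.*)", line)
        if m:
            deps.append((m.group(1).lower(), m.group(2)))
    return deps

def parse_package_json(text):
    """Extract (name, version-spec) pairs from an npm package.json."""
    data = json.loads(text)
    return list(data.get("dependencies", {}).items())

# A single catalog spanning both ecosystems -- the "polyglot" view a
# cataloging algorithm would be trained over.
catalog = {
    "pypi": parse_requirements(REQUIREMENTS_TXT),
    "npm": parse_package_json(PACKAGE_JSON),
}
for ecosystem, deps in sorted(catalog.items()):
    for name, spec in deps:
        print(f"{ecosystem}:{name} {spec}")
```

Note that this manifest-level view is exactly what the static and dynamic methods above struggle to reconstruct from built or deployed artifacts.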

 

The above methods by themselves generate statistically significant numbers of false positives and false negatives. False positives arise from the erroneous detection of vulnerabilities and require a human in the loop to separate signal from noise. False negatives arise from the prevalence of undetected, altered dependent software (e.g., code copied from external libraries, pasted, and modified).
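One way to chase that copy/paste/change false-negative source is near-duplicate detection over token shingles. The minimal sketch below (the snippets, tokenization, and shingle size are illustrative assumptions, not a fielded technique from this topic) scores an identifier-renamed copy of a library fragment against the original and against unrelated code:

```python
def shingles(code, k=5):
    """Return the set of k-token shingles of a whitespace-tokenized snippet."""
    tokens = code.split()
    return {tuple(tokens[i:i + k]) for i in range(max(0, len(tokens) - k + 1))}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Library original vs. a copy/paste/change variant (identifiers renamed),
# vs. genuinely unrelated code.
LIB = "if ( n > buf_len ) { return ERROR ; } memcpy ( dst , src , n ) ;"
APP = "if ( count > buf_len ) { return ERROR ; } memcpy ( out , src , count ) ;"
UNRELATED = "for ( i = 0 ; i < n ; i ++ ) { sum += a [ i ] ; }"

score_derived = jaccard(shingles(LIB), shingles(APP))
score_unrelated = jaccard(shingles(LIB), shingles(UNRELATED))
# The renamed copy scores well above unrelated code, so a scanner can rank
# it for review against the upstream library's known vulnerabilities even
# though an exact-match dependency check would miss it entirely.
print(score_derived, score_unrelated)
```

A production tool would operate on normalized token streams or abstract syntax trees rather than raw whitespace tokens, but the ranking principle is the same.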

 

Promising tools from commercial vendors, such as Synopsys Black Duck Hub, IBM AppScan, and Facebook's Infer, provide text-mining services that compare project source trees against vulnerability databases. However, these tools are costly to use and require packaging one's code for upload to a third-party service.

 

Work produced in Phase II may become classified. Note: The prospective contractor(s) must be U.S. owned and operated with no foreign influence as defined by DoD 5220.22-M, National Industrial Security Program Operating Manual, unless acceptable mitigating procedures can and have been implemented and approved by the Defense Counterintelligence and Security Agency (DCSA). The selected contractor and/or subcontractor must be able to acquire and maintain a Secret-level facility clearance and Personnel Security Clearances in order to perform on advanced phases of this project, as set forth by DCSA and NAVWAR, and to gain access to classified information pertaining to the national defense of the United States and its allies; this will be an inherent requirement. The selected company will be required to safeguard classified material IAW DoD 5220.22-M during the advanced phases of this contract.

 

PHASE I: Develop a concept and design for a software utility that:

 

 

The feasibility study must show that the software utility can easily integrate into existing Continuous Integration/Continuous Deployment (CI/CD) DevSecOps tools. Metrics for accuracy, scalability, and speed must also be provided. Develop integration plans for Phase II.

 

NOTE: Detailed knowledge of Navy data sources may not be necessary during Phase I if the performer can show the above. It is recommended to use publicly available open-source software repositories (for example, the Linux kernel or the Chromium project) and to leverage, for example, the National Vulnerability Database or the Common Vulnerabilities and Exposures database.
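As a sketch of that recommendation, the toy scanner below matches cataloged dependency pins against locally cached CVE-style records. The record layout and integer-only version handling are simplified assumptions, not the NVD JSON schema; CVE-2018-18074 is a real advisory affecting the requests library before 2.20.0, used here as a familiar example.

```python
def vparse(v):
    """Parse a dotted version string into a comparable tuple of ints."""
    return tuple(int(p) for p in v.split("."))

# Toy vulnerability records: (cve_id, package, fixed_in).
VULN_DB = [
    ("CVE-2018-18074", "requests", "2.20.0"),
]

# Dependencies cataloged from the project tree (pinned versions assumed).
CATALOG = [("requests", "2.19.1"), ("urllib3", "1.24.2")]

def scan(catalog, vuln_db):
    """Report (package, version, cve_id) for every pin below its fix version."""
    hits = []
    for pkg, ver in catalog:
        for cve, vuln_pkg, fixed_in in vuln_db:
            if pkg == vuln_pkg and vparse(ver) < vparse(fixed_in):
                hits.append((pkg, ver, cve))
    return hits

print(scan(CATALOG, VULN_DB))
# → [('requests', '2.19.1', 'CVE-2018-18074')]
```

Evaluating such a matcher against a public repository and the full NVD feed would directly yield the accuracy, scalability, and speed metrics the feasibility study calls for.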

 

PHASE II: Develop, demonstrate, validate, and mature the Phase I-developed concepts into prototype software. Work with the Government to establish metrics and acceptance testing for the bullets listed in Phase I.

 

 

It is probable that the work under this effort will be classified under Phase II (see Description for details).

 

PHASE III DUAL USE APPLICATIONS: Integrate the service into an existing Navy CI/CD DevSecOps process:

 

 

Any commercial organization, private or public (e.g., transportation, medical device development, and/or the FDA), that performs software verification and validation should be able to leverage the service.

 

REFERENCES:

1.       Kratkiewicz, K. “Evaluating Static Analysis Tools for Detecting Buffer Overflows in C Code.” Harvard University, Cambridge, MA, 2005. https://apps.dtic.mil/dtic/tr/fulltext/u2/a511392.pdf    

2.       Meng et al. “Assisting in Auditing of Buffer Overflow Vulnerabilities via Machine Learning.” Mathematical Problems in Engineering, 2017. http://downloads.hindawi.com/journals/mpe/2017/5452396.pdf

3.       Jaspan et al. “Advantages and Disadvantages of a Monolithic Repository: A Case Study at Google.” Proceedings of the 40th International Conference on Software Engineering: Software Engineering in Practice, 2018, pp. 225-234. https://dl.acm.org/doi/pdf/10.1145/3183519.3183550

4.       Lopes et al. “DéjàVu: A Map of Code Duplicates on GitHub.” Proceedings of the ACM on Programming Languages, 1(OOPSLA), 2017, pp. 1-28. http://dl.acm.org/doi/pdf/10.1145/3133908

5.       Russell et al. “Automated Vulnerability Detection in Source Code Using Deep Representation Learning.” 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 757-762. http://arxiv.org/pdf/1807.04320.pdf

6.       Website of the National Institute of Standards and Technology, Information Technology Laboratory, Software and Systems Division. “Source Code Security Analyzers.” https://samate.nist.gov/index.php/Source_Code_Security_Analyzers.html

 

KEYWORDS: DevSecOps; Continuous Integration; Continuous Deployment; Software; Vulnerabilities; Legacy Code; Software Scanning; Vulnerability Databases; Development, Security and Operations

 

 

** TOPIC NOTICE **

The Navy Topic above is an "unofficial" copy from the overall DoD 20.3 SBIR BAA. Please see the official DoD DSIP Topic website at rt.cto.mil/rtl-small-business-resources/sbir-sttr/ for any updates. The DoD issued its 20.3 SBIR BAA on August 25, 2020, which opens to receive proposals on September 25, 2020, and closes October 22, 2020 at 12:00 noon ET.

Direct Contact with Topic Authors: During the pre-release period (August 25 to September 22, 2020) proposing firms have an opportunity to directly contact the Technical Point of Contact (TPOC) to ask technical questions about the specific BAA topic.

Questions should be limited to specific information related to improving the understanding of a particular topic’s requirements. Proposing firms may not ask for advice or guidance on solution approach and you may not submit additional material to the topic author. If information provided during an exchange with the topic author is deemed necessary for proposal preparation, that information will be made available to all parties through SITIS (SBIR/STTR Interactive Topic Information System). After the pre-release period, questions must be asked through the SITIS on-line system as described on the DoD's DSIP website at www.dodsbirsttr.mil/submissions/login.

Topics Search Engine: Visit the DoD Topic Search Tool at www.dodsbirsttr.mil/topics-app/ to find topics by keyword across all DoD Components participating in this BAA.

Help: If you have general questions about DoD SBIR program, please contact the DoD SBIR Help Desk at 703-214-1333 or via email at DoDSBIRSupport@reisystems.com
