A Survey on Automated Software Vulnerability Detection Using Machine Learning and Deep Learning
- URL: http://arxiv.org/abs/2306.11673v1
- Date: Tue, 20 Jun 2023 16:51:59 GMT
- Title: A Survey on Automated Software Vulnerability Detection Using Machine Learning and Deep Learning
- Authors: Nima Shiri Harzevili, Alvine Boaye Belle, Junjie Wang, Song Wang, Zhen Ming (Jack) Jiang, Nachiappan Nagappan
- Abstract summary: Machine Learning (ML) and Deep Learning (DL) based models for detecting vulnerabilities in source code have been presented in recent years.
It may be difficult to discover gaps in existing research and potential for future improvement without a comprehensive survey.
This work addresses that gap by presenting a systematic survey to characterize various features of ML/DL-based source code level software vulnerability detection approaches.
- Score: 19.163031235081565
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Software vulnerability detection is critical in software security because it
identifies potential bugs in software systems, enabling immediate remediation
and mitigation measures to be implemented before they can be exploited.
Automatic vulnerability identification is important because it can evaluate
large codebases more efficiently than manual code auditing. Many Machine
Learning (ML) and Deep Learning (DL) based models for detecting vulnerabilities
in source code have been presented in recent years. However, a survey that
summarises, classifies, and analyses the application of ML/DL models for
vulnerability detection is missing. It may be difficult to discover gaps in
existing research and potential for future improvement without a comprehensive
survey. This could result in essential areas of research being overlooked or
under-represented, leading to a skewed understanding of the state of the art in
vulnerability detection. This work addresses that gap by presenting a systematic
survey to characterize various features of ML/DL-based source code level
software vulnerability detection approaches via five primary research questions
(RQs). Specifically, our RQ1 examines the trend of publications that leverage
ML/DL for vulnerability detection, including the evolution of research and the
distribution of publication venues. RQ2 describes vulnerability datasets used
by existing ML/DL-based models, including their sources, types, and
representations, as well as analyses of the embedding techniques used by these
approaches. RQ3 explores the model architectures and design assumptions of
ML/DL-based vulnerability detection approaches. RQ4 summarises the type and
frequency of vulnerabilities that are covered by existing studies. Lastly, RQ5
presents a list of current challenges to be researched and an outline of a
potential research roadmap that highlights crucial opportunities for future
work.
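As a concrete illustration of the pipeline the RQs characterize (RQ2: dataset and representation, RQ3: model architecture), below is a minimal sketch in Python. The snippets, labels, and the TF-IDF-plus-logistic-regression setup are toy assumptions chosen for brevity; the surveyed approaches use curated vulnerability datasets, richer code representations (e.g., ASTs and code graphs), and deep architectures.

```python
# Minimal sketch of an ML-based vulnerability detection pipeline of the kind
# the survey characterizes. All snippets and labels are toy placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# RQ2: a (toy) labeled dataset of source-code snippets.
# 1 = vulnerable, 0 = not vulnerable.
snippets = [
    "strcpy(buf, user_input);",                    # unbounded copy
    "strncpy(buf, user_input, sizeof(buf) - 1);",
    "system(user_input);",                         # command injection
    "execvp(argv[0], argv);",
]
labels = [1, 0, 1, 0]

# RQ2 (representation/embedding) + RQ3 (model): embed tokens, then classify.
model = make_pipeline(
    TfidfVectorizer(token_pattern=r"[A-Za-z_]\w*|\S"),
    LogisticRegression(),
)
model.fit(snippets, labels)

# Toy prediction: with four training samples this is illustrative only.
print(model.predict(["strcpy(dst, argv[1]);"]))
```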
Related papers
- Navigating the Risks: A Survey of Security, Privacy, and Ethics Threats in LLM-Based Agents [67.07177243654485]
This survey collects and analyzes the different threats faced by large language model (LLM)-based agents.
We identify six key features of LLM-based agents, based on which we summarize the current research progress.
We select four representative agents as case studies to analyze the risks they may face in practical use.
arXiv Detail & Related papers (2024-11-14T15:40:04Z)
- Outside the Comfort Zone: Analysing LLM Capabilities in Software Vulnerability Detection [9.652886240532741]
This paper thoroughly analyses large language models' capabilities in detecting vulnerabilities within source code.
We evaluate the performance of six open-source models that are specifically trained for vulnerability detection against six general-purpose LLMs.
arXiv Detail & Related papers (2024-08-29T10:00:57Z)
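As a hedged illustration of the prompt-based setup such evaluations typically exercise, the sketch below builds a yes/no audit prompt and parses the answer. `query_model`, the prompt wording, and the parsing rule are assumptions, not the paper's protocol.

```python
# Hypothetical sketch of prompt-based vulnerability detection, the kind of
# setup compared against fine-tuned detectors. `query_model` is a placeholder
# for whatever LLM backend is used.
def build_prompt(code: str) -> str:
    return (
        "You are a security auditor. Answer YES or NO only.\n"
        f"Does the following code contain a security vulnerability?\n\n{code}\n"
    )

def detect(code: str, query_model) -> bool:
    """Return True if the model's answer starts with YES."""
    answer = query_model(build_prompt(code))
    return answer.strip().upper().startswith("YES")

# Usage with any callable LLM wrapper, e.g.:
#   detect("strcpy(buf, user_input);", query_model=my_llm)
```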
- Exploring Automatic Cryptographic API Misuse Detection in the Era of LLMs [60.32717556756674]
This paper introduces a systematic evaluation framework to assess Large Language Models in detecting cryptographic misuses.
Our in-depth analysis of 11,940 LLM-generated reports highlights that the inherent instabilities in LLMs can lead to over half of the reports being false positives.
The optimized approach achieves a remarkable detection rate of nearly 90%, surpassing traditional methods and uncovering previously unknown misuses in established benchmarks.
arXiv Detail & Related papers (2024-07-23T15:31:26Z)
- Towards Effectively Detecting and Explaining Vulnerabilities Using Large Language Models [17.96542494363619]
Large language models (LLMs) have demonstrated remarkable capabilities in comprehending complex contexts.
In this paper, we conduct a study to investigate the capabilities of LLMs in both detecting and explaining vulnerabilities.
With specialized fine-tuning for vulnerability explanation, LLMVulExp not only detects the types of vulnerabilities in the code but also analyzes the code context to generate their cause, location, and repair suggestions.
arXiv Detail & Related papers (2024-06-14T04:01:25Z)
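To make the detect-and-explain output of the preceding entry concrete, here is a hypothetical report schema with the four fields it names (type, cause, location, repair suggestion). The field names and example values are illustrative assumptions, not LLMVulExp's actual format.

```python
# Hypothetical output schema for a detect-and-explain setup like the one
# described above. Field names are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class VulnerabilityReport:
    vuln_type: str         # e.g., a CWE category
    cause: str             # why the code is vulnerable
    location: str          # file/function/line the model points to
    repair_suggestion: str

report = VulnerabilityReport(
    vuln_type="CWE-120 (buffer overflow)",
    cause="strcpy copies user input without a length bound",
    location="copy_input(), line 3",
    repair_suggestion="use strncpy/snprintf bounded by sizeof(buf)",
)
print(report.vuln_type)
```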
- VulDetectBench: Evaluating the Deep Capability of Vulnerability Detection with Large Language Models [12.465060623389151]
This study introduces a new benchmark, VulDetectBench, to assess the vulnerability detection capabilities of Large Language Models (LLMs).
The benchmark comprehensively evaluates LLMs' ability to identify, classify, and locate vulnerabilities through five tasks of increasing difficulty.
Our benchmark effectively evaluates the capabilities of various LLMs at different levels in the specific task of vulnerability detection, providing a foundation for future research and improvements in this critical area of code security.
arXiv Detail & Related papers (2024-06-11T13:42:57Z)
- Security Vulnerability Detection with Multitask Self-Instructed Fine-Tuning of Large Language Models [8.167614500821223]
We introduce MSIVD, multitask self-instructed fine-tuning for vulnerability detection, inspired by chain-of-thought prompting and LLM self-instruction.
Our experiments demonstrate that MSIVD achieves superior performance, outperforming the highest LLM-based vulnerability detector baseline (LineVul) with an F1 score of 0.92 on the BigVul dataset and 0.48 on the PreciseBugs dataset.
arXiv Detail & Related papers (2024-06-09T19:18:05Z)
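For reference, the F1 scores quoted in the preceding entry are the harmonic mean of precision and recall; a minimal computation from raw counts follows. The counts in the example are made up to reproduce a 0.92 score, not taken from the paper.

```python
# F1 = 2 * precision * recall / (precision + recall), from raw counts of
# true positives (tp), false positives (fp), and false negatives (fn).
def f1_score(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative counts: 92 true positives, 8 false positives, 8 false negatives.
print(round(f1_score(tp=92, fp=8, fn=8), 2))  # 0.92
```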
- Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to adversarial perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system with respect to such perturbations.
arXiv Detail & Related papers (2024-02-07T21:58:40Z)
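The preceding entry defines its Adversarial Rate via formal verification; as a loose, sampling-based reading of such a metric, the fraction of sampled perturbations that flip a policy's action, it can be sketched as follows. The policy, perturbation, and sample count are all toy placeholders, not the paper's definition.

```python
# Hedged sketch: an empirical "adversarial rate" as the fraction of sampled
# perturbations that change the policy's action. Illustrative assumption only;
# the paper computes its metric with formal verification, not sampling.
import random

def adversarial_rate(policy, state, perturb, n_samples: int = 1000) -> float:
    """Fraction of sampled perturbations that change the policy's action."""
    base_action = policy(state)
    flips = sum(
        1 for _ in range(n_samples)
        if policy(perturb(state)) != base_action
    )
    return flips / n_samples

# Toy demonstration: a threshold "policy" on a scalar state near its boundary.
policy = lambda s: int(s > 0.5)
rate = adversarial_rate(policy, 0.505, lambda s: s + random.gauss(0, 0.01))
print(rate)  # roughly 0.3 here: many small perturbations flip the action
```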
- How Far Have We Gone in Vulnerability Detection Using Large Language Models [15.09461331135668]
We introduce a comprehensive vulnerability benchmark VulBench.
This benchmark aggregates high-quality data from a wide range of CTF challenges and real-world applications.
We find that several LLMs outperform traditional deep learning approaches in vulnerability detection.
arXiv Detail & Related papers (2023-11-21T08:20:39Z)
- CodeLMSec Benchmark: Systematically Evaluating and Finding Security Vulnerabilities in Black-Box Code Language Models [58.27254444280376]
Large language models (LLMs) for automatic code generation have achieved breakthroughs in several programming tasks.
Training data for these models is usually collected from the Internet (e.g., from open-source repositories) and is likely to contain faults and security vulnerabilities.
This unsanitized training data can cause the language models to learn these vulnerabilities and propagate them during the code generation procedure.
arXiv Detail & Related papers (2023-02-08T11:54:07Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- Anomalous Example Detection in Deep Learning: A Survey [98.2295889723002]
This survey tries to provide a structured and comprehensive overview of the research on anomaly detection for Deep Learning applications.
We provide a taxonomy for existing techniques based on their underlying assumptions and adopted approaches.
We highlight the unsolved research challenges while applying anomaly detection techniques in DL systems and present some high-impact future research directions.
arXiv Detail & Related papers (2020-03-16T02:47:23Z)