A Systematic Literature Review on Automated Exploit and Security Test Generation
- URL: http://arxiv.org/abs/2502.04953v1
- Date: Fri, 07 Feb 2025 14:19:56 GMT
- Title: A Systematic Literature Review on Automated Exploit and Security Test Generation
- Authors: Quang-Cuong Bui, Emanuele Iannone, Maria Camporese, Torge Hinrichs, Catherine Tony, László Tóth, Fabio Palomba, Péter Hegedűs, Fabio Massacci, Riccardo Scandariato
- Abstract summary: We identify a list of exploit generation techniques from the literature and group them into four categories: automated exploit generation, security testing, fuzzing, and other techniques.
Most of the techniques focus on memory-based vulnerabilities in C/C++ programs and web-based injection vulnerabilities in PHP and Java applications.
- Score: 14.528323751981794
- License:
- Abstract: The exploit, or Proof of Concept, of a vulnerability plays an important role in developing superior vulnerability repair techniques, as it can be used as an oracle to verify the correctness of the patches generated by the tools. However, vulnerability exploits are often unavailable and require time and expert knowledge to craft. Obtaining them from exploit generation techniques is another potential solution. The goal of this survey is to aid researchers and practitioners in understanding the existing techniques for exploit generation through the analysis of their characteristics and their usability in practice. We identify a list of exploit generation techniques from the literature and group them into four categories: automated exploit generation, security testing, fuzzing, and other techniques. Most of the techniques focus on memory-based vulnerabilities in C/C++ programs and web-based injection vulnerabilities in PHP and Java applications. We found only a few studies that publicly provided usable tools associated with their techniques.
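The abstract's central observation is that an exploit or PoC can serve as an oracle for judging candidate patches. The minimal sketch below (my illustration, not taken from the survey or any surveyed tool) shows the basic shape of such an oracle; the PoC command, the patched binary, and the regression-test command are hypothetical placeholders.
```python
# Minimal sketch of an exploit/PoC used as a patch-verification oracle:
# a candidate patch is plausible only if the PoC no longer succeeds while
# the existing regression tests still pass. All commands are hypothetical.
import subprocess


def run(cmd, timeout=60):
    """Run a command and report whether it exited with status 0."""
    try:
        return subprocess.run(cmd, capture_output=True, timeout=timeout).returncode == 0
    except (subprocess.TimeoutExpired, FileNotFoundError):
        return False


def patch_is_plausible(poc_cmd, regression_cmd):
    """Oracle: the PoC must fail against the patched build (exploit
    neutralized) and the functional test suite must still succeed."""
    exploit_still_works = run(poc_cmd)        # PoC exits 0 only when it succeeds
    functionality_kept = run(regression_cmd)  # pre-existing test suite
    return (not exploit_still_works) and functionality_kept


if __name__ == "__main__":
    # Hypothetical commands for a patched build under test.
    print(patch_is_plausible(["./poc_exploit", "--target", "./patched_app"],
                             ["make", "test"]))
```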
Related papers
- CRepair: CVAE-based Automatic Vulnerability Repair Technology [1.147605955490786]
Software vulnerabilities pose significant threats to the integrity, security, and reliability of modern software and its application data.
To address the challenges of vulnerability repair, researchers have proposed various solutions, with learning-based automatic vulnerability repair techniques gaining widespread attention.
This paper proposes CRepair, a CVAE-based automatic vulnerability repair technology aimed at fixing security vulnerabilities in system code.
arXiv Detail & Related papers (2024-11-08T12:55:04Z)
- Verification of Machine Unlearning is Fragile [48.71651033308842]
We introduce two novel adversarial unlearning processes capable of circumventing both types of verification strategies.
This study highlights the vulnerabilities and limitations in machine unlearning verification, paving the way for further research into the safety of machine unlearning.
arXiv Detail & Related papers (2024-08-01T21:37:10Z)
- Your Instructions Are Not Always Helpful: Assessing the Efficacy of Instruction Fine-tuning for Software Vulnerability Detection [9.763041664345105]
Software poses potential cybersecurity risks due to inherent vulnerabilities.
Deep learning has shown promise as an effective tool for vulnerability detection, owing to its ability to perform well without extensive feature engineering.
Recent research highlights deep learning's efficacy in diverse tasks.
This paper investigates the capability of models, specifically a recent language model, to generalize beyond the programming languages used in their training data.
arXiv Detail & Related papers (2024-01-15T04:45:27Z)
- Causative Insights into Open Source Software Security using Large Language Code Embeddings and Semantic Vulnerability Graph [3.623199159688412]
Open Source Software (OSS) vulnerabilities can cause unauthorized access, data breaches, network disruptions, and privacy violations.
Recent deep-learning techniques have shown great promise in identifying and localizing vulnerabilities in source code.
Our study shows a 24% improvement in code repair capabilities compared to previous methods.
arXiv Detail & Related papers (2024-01-13T10:33:22Z)
- Understanding Hackers' Work: An Empirical Study of Offensive Security Practitioners [0.0]
Offensive security tests are a common way to proactively discover potential vulnerabilities.
The chronic shortage of available white-hat hackers prevents sufficient security test coverage of software.
Research into automation tries to alleviate this problem by improving the efficiency of security testing.
arXiv Detail & Related papers (2023-08-14T10:35:26Z)
- Towards Automated Classification of Attackers' TTPs by combining NLP with ML Techniques [77.34726150561087]
We evaluate and compare different Natural Language Processing (NLP) and machine learning techniques used for security information extraction in research.
Based on our investigations we propose a data processing pipeline that automatically classifies unstructured text according to attackers' tactics and techniques.
arXiv Detail & Related papers (2022-07-18T09:59:21Z)
- Attack Techniques and Threat Identification for Vulnerabilities [1.1689657956099035]
For security teams, prioritization and focus become critical, so that they spend their limited time on the highest-risk vulnerabilities.
In this work, we use machine learning and natural language processing techniques, as well as several publicly available data sets.
We first map the vulnerabilities to a standard set of common weaknesses, and then common weaknesses to the attack techniques.
This approach yields a Mean Reciprocal Rank (MRR) of 0.95, comparable to the scores reported for state-of-the-art systems; a short sketch of the MRR metric follows this entry.
arXiv Detail & Related papers (2022-06-22T15:27:49Z)
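As a brief illustration of the Mean Reciprocal Rank metric cited in the entry above: the formula is standard, but the example technique rankings below are made up for illustration and are not taken from the paper.
```python
# Mean Reciprocal Rank: MRR = (1/|Q|) * sum over queries of 1/rank of the
# first correct answer. The example data below is purely illustrative.
def mean_reciprocal_rank(ranked_predictions, ground_truth):
    """ranked_predictions: one ranked candidate list per query (best first);
    ground_truth: the correct label for each query."""
    total = 0.0
    for candidates, truth in zip(ranked_predictions, ground_truth):
        if truth in candidates:
            total += 1.0 / (candidates.index(truth) + 1)  # ranks are 1-based
    return total / len(ground_truth)


if __name__ == "__main__":
    # Hypothetical ATT&CK technique rankings for three vulnerabilities.
    preds = [["T1190", "T1059"], ["T1059", "T1190"], ["T1505", "T1190"]]
    truth = ["T1190", "T1190", "T1190"]
    print(mean_reciprocal_rank(preds, truth))  # (1 + 1/2 + 1/2) / 3 ≈ 0.667
```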
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
- Autosploit: A Fully Automated Framework for Evaluating the Exploitability of Security Vulnerabilities [47.748732208602355]
Autosploit is an automated framework for evaluating the exploitability of vulnerabilities.
It automatically tests the exploits on different configurations of the environment.
It is able to identify the system properties that affect the ability to exploit a vulnerability in both noiseless and noisy environments; a generic sketch of this configuration-sweeping idea appears after this entry.
arXiv Detail & Related papers (2020-06-30T18:49:18Z)
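The Autosploit entry above describes sweeping an exploit across environment configurations and relating success to system properties. The generic sketch below conveys that idea under assumed property names; it is my illustration, not Autosploit's implementation, and run_exploit is a hypothetical stub.
```python
# Generic illustration of sweeping a PoC over environment configurations and
# counting which properties co-occur with a successful exploit.
# Property names, values, and run_exploit are hypothetical.
from itertools import product
from collections import Counter


def run_exploit(config):
    # Placeholder: a real harness would provision an environment (container,
    # VM, ...) with these properties and launch the PoC against it.
    return config["aslr"] == "off" and config["service_version"] == "2.4.49"


def sweep(options):
    """Enumerate the full cartesian product of configuration options."""
    keys = list(options)
    results = []
    for values in product(*options.values()):
        config = dict(zip(keys, values))
        results.append((config, run_exploit(config)))
    return results


def properties_linked_to_success(results):
    """Count how often each (property, value) pair appears in successful runs."""
    counts = Counter()
    for config, succeeded in results:
        if succeeded:
            counts.update(config.items())
    return counts.most_common()


if __name__ == "__main__":
    options = {"aslr": ["on", "off"],
               "service_version": ["2.4.48", "2.4.49"],
               "os": ["debian-11", "ubuntu-22.04"]}
    for prop, count in properties_linked_to_success(sweep(options)):
        print(prop, count)
```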
This list is automatically generated from the titles and abstracts of the papers on this site.