Autosploit: A Fully Automated Framework for Evaluating the
Exploitability of Security Vulnerabilities
- URL: http://arxiv.org/abs/2007.00059v1
- Date: Tue, 30 Jun 2020 18:49:18 GMT
- Title: Autosploit: A Fully Automated Framework for Evaluating the
Exploitability of Security Vulnerabilities
- Authors: Noam Moscovich, Ron Bitton, Yakov Mallah, Masaki Inokuchi, Tomohiko
Yagyu, Meir Kalech, Yuval Elovici, Asaf Shabtai
- Abstract summary: Autosploit is an automated framework for evaluating the exploitability of vulnerabilities.
It automatically tests the exploits on different configurations of the environment.
It is able to identify the system properties that affect the ability to exploit a vulnerability in both noiseless and noisy environments.
- Score: 47.748732208602355
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The existence of a security vulnerability in a system does not necessarily
mean that it can be exploited. In this research, we introduce Autosploit -- an
automated framework for evaluating the exploitability of vulnerabilities. Given
a vulnerable environment and relevant exploits, Autosploit will automatically
test the exploits on different configurations of the environment in order to
identify the specific properties necessary for successful exploitation of the
existing vulnerabilities. Since testing all possible system configurations is
infeasible, we introduce an efficient approach for testing and searching
through all possible configurations of the environment. The efficient testing
process implemented by Autosploit is based on two algorithms: generalized
binary splitting and Barinel, which are used for noiseless and noisy
environments respectively. We implemented the proposed framework and evaluated
it using real vulnerabilities. The results show that Autosploit is able to
automatically identify the system properties that affect the ability to exploit
a vulnerability in both noiseless and noisy environments. These important
results can be utilized for more accurate and effective risk assessment.
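To make the search strategy concrete, below is a minimal, illustrative sketch of the noiseless case only: candidate system properties are treated as group-testing items, and a hypothetical `run_exploit` oracle stands in for an actual exploit attempt against an environment configured with a given subset of properties. The property names, the bound `d`, and the oracle interface are assumptions made for this example; it is not the authors' implementation, and the Barinel-based handling of noisy environments is not shown.

```python
import math

def generalized_binary_splitting(items, d, test):
    """Noiseless group-testing search (Hwang-style generalized binary splitting).

    items: candidate system properties.
    d:     assumed upper bound on the number of influential properties.
    test:  oracle; returns True if the given subset contains at least one
           influential property (e.g. the exploit outcome changes when these
           properties are altered).
    """
    remaining = list(items)
    found = []
    while remaining and len(found) < d:
        n = len(remaining)
        k = d - len(found)
        if n <= 2 * k:                      # small residue: test items one by one
            for item in remaining:
                if test([item]):
                    found.append(item)
            return found
        # n > 2k, so (n - k) / k > 1 and the group size 2**alpha is at least 1
        alpha = int(math.log2((n - k) / k))
        group = remaining[: 2 ** alpha]
        if not test(group):
            # the whole group is irrelevant: discard it with a single test
            remaining = remaining[len(group):]
            continue
        # binary-search the positive group for one influential property;
        # halves that test negative are proven irrelevant and dropped,
        # untested halves go back into the pool with unknown status
        pool, deferred = group, []
        while len(pool) > 1:
            half = pool[: len(pool) // 2]
            if test(half):
                deferred.extend(pool[len(half):])
                pool = half
            else:
                pool = pool[len(half):]
        found.append(pool[0])
        remaining = deferred + remaining[len(group):]
    return found


if __name__ == "__main__":
    # Toy demonstration with a simulated oracle; in Autosploit the oracle
    # would be an actual exploit attempt against a reconfigured environment.
    properties = [f"prop_{i}" for i in range(32)]
    influential = {"prop_3", "prop_17"}          # hidden ground truth for the demo
    attempts = {"count": 0}

    def run_exploit(subset):
        attempts["count"] += 1
        return any(p in influential for p in subset)

    print(generalized_binary_splitting(properties, d=2, test=run_exploit))
    print(f"{attempts['count']} oracle calls instead of {len(properties)}")
```

The payoff of the group-testing formulation is that a single negative test clears an entire group of properties at once, so the number of environment reconfigurations and exploit attempts grows on the order of d·log n rather than with the total number of candidate properties.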
Related papers
- AutoPT: How Far Are We from the End2End Automated Web Penetration Testing? [54.65079443902714]
We introduce AutoPT, an automated penetration testing agent based on the principle of PSM driven by LLMs.
Our results show that AutoPT outperforms the baseline framework ReAct on the GPT-4o mini model.
arXiv Detail & Related papers (2024-11-02T13:24:30Z)
- RealVul: Can We Detect Vulnerabilities in Web Applications with LLM? [4.467475584754677]
We present RealVul, the first LLM-based framework designed for PHP vulnerability detection.
We can isolate potential vulnerability triggers while streamlining the code and eliminating unnecessary semantic information.
We also address the issue of insufficient PHP vulnerability samples by improving data synthesis methods.
arXiv Detail & Related papers (2024-10-10T03:16:34Z)
- SETC: A Vulnerability Telemetry Collection Framework [0.0]
This paper introduces the Security Exploit Telemetry Collection (SETC) framework.
SETC generates reproducible vulnerability exploit data at scale for robust defensive security research.
This research enables scalable exploit data generation to drive innovations in threat modeling, detection methods, analysis techniques, and strategies.
arXiv Detail & Related papers (2024-06-10T00:13:35Z)
- VulEval: Towards Repository-Level Evaluation of Software Vulnerability Detection [14.312197590230994]
A repository-level evaluation system named VulEval aims to evaluate the detection performance for inter- and intra-procedural vulnerabilities simultaneously.
VulEval consists of a large-scale dataset with a total of 4,196 CVE entries, 232,239 functions, and 4,699 corresponding repository-level source code samples in the C/C++ programming languages.
arXiv Detail & Related papers (2024-04-24T02:16:11Z)
- On the Effectiveness of Function-Level Vulnerability Detectors for Inter-Procedural Vulnerabilities [28.57872406228216]
We propose a tool dubbed VulTrigger for identifying vulnerability-triggering statements across functions.
Experimental results show that VulTrigger can effectively identify vulnerability-triggering statements and inter-procedural vulnerabilities.
Our findings include: (i) inter-procedural vulnerabilities are prevalent with an average of 2.8 inter-procedural layers; and (ii) function-level vulnerability detectors are much less effective in detecting to-be-patched functions of inter-procedural vulnerabilities.
arXiv Detail & Related papers (2024-01-18T07:32:11Z)
- ASSERT: Automated Safety Scenario Red Teaming for Evaluating the Robustness of Large Language Models [65.79770974145983]
ASSERT, Automated Safety Scenario Red Teaming, consists of three methods -- semantically aligned augmentation, target bootstrapping, and adversarial knowledge injection.
We partition our prompts into four safety domains for a fine-grained analysis of how the domain affects model performance.
We find statistically significant performance differences of up to 11% in absolute classification accuracy among semantically related scenarios and error rates of up to 19% absolute error in zero-shot adversarial settings.
arXiv Detail & Related papers (2023-10-14T17:10:28Z)
- When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
arXiv Detail & Related papers (2023-06-09T14:33:26Z)
- Adaptive Failure Search Using Critical States from Domain Experts [9.93890332477992]
Failure search may be done by logging substantial vehicle miles in either simulation or real-world testing.
Adaptive Stress Testing (AST) is one such method, which poses the problem of failure search as a Markov decision process.
We show that the incorporation of critical states into the AST framework generates failure scenarios with increased safety violations.
arXiv Detail & Related papers (2023-04-01T18:14:41Z)
- Sample-Efficient Safety Assurances using Conformal Prediction [57.92013073974406]
Early warning systems can provide alerts when an unsafe situation is imminent.
To reliably improve safety, these warning systems should have a provable false negative rate.
We present a framework that combines a statistical inference technique known as conformal prediction with a simulator of robot/environment dynamics.
arXiv Detail & Related papers (2021-09-28T23:00:30Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.