Hyperfuzzing: black-box security hypertesting with a grey-box fuzzer
- URL: http://arxiv.org/abs/2308.09081v1
- Date: Thu, 17 Aug 2023 16:15:02 GMT
- Title: Hyperfuzzing: black-box security hypertesting with a grey-box fuzzer
- Authors: Daniel Blackwell, Ingolf Becker, David Clark
- Abstract summary: LeakFuzzer advances the state of the art by using a noninterference security property together with a security flow policy as an oracle.
LeakFuzzer can detect the same set of errors that a normal fuzzer can detect, with the addition of being able to detect violations of secure information flow policies.
- Score: 2.2000560351723504
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Information leakage is a class of error that can lead to severe consequences.
However, unlike other errors, it is rarely explicitly considered during the
software testing process. LeakFuzzer advances the state of the art by using a
noninterference security property together with a security flow policy as an
oracle. Because the tool extends the state-of-the-art fuzzer AFL++, LeakFuzzer
inherits the advantages of AFL++ such as scalability, automated input
generation, high coverage and low developer intervention.
The tool can detect the same set of errors that a normal fuzzer can detect,
while additionally detecting violations of secure information flow policies.
We evaluated LeakFuzzer on a diverse set of 10 C and C++ benchmarks
containing known information leaks, ranging in size from just 80 to over 900k
lines of code. Seven of these are taken from real-world CVEs including
Heartbleed and a recent error in PostgreSQL. Given 20 runs of 24 hours each,
LeakFuzzer finds 100% of the leaks in the systems under test (SUTs), whereas
existing techniques such as the CBMC model checker and AFL++ augmented with
various sanitizers find at best 40%.
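The noninterference oracle at the heart of this hypertesting approach can be
illustrated with a small, self-contained C sketch. This is an illustration
under simplifying assumptions, not LeakFuzzer's actual code: run the system
under test twice with the same public (low) input but two different secret
(high) inputs; if the publicly observable output differs between the runs,
secret data has influenced a public sink and the flow policy is violated. The
`leaky_sut` function below is a hypothetical stand-in for a real SUT.

```c
/* Minimal sketch of a noninterference (two-run) oracle.
 * Illustrative only; not LeakFuzzer's implementation. */
#include <stdio.h>
#include <string.h>

#define OUT_MAX 256

/* Hypothetical system under test: it deliberately leaks the first byte
 * of the secret into its publicly observable output. */
static void leaky_sut(const char *public_in, const char *secret,
                      char *public_out, size_t out_max)
{
    snprintf(public_out, out_max, "hello %s [%c]", public_in, secret[0]);
}

/* Two-run check: same public input, different secrets. Under
 * noninterference the public outputs must be identical, so any
 * difference signals an information leak. Returns 1 on a violation. */
static int violates_noninterference(const char *public_in,
                                    const char *secret_a,
                                    const char *secret_b)
{
    char out_a[OUT_MAX], out_b[OUT_MAX];
    leaky_sut(public_in, secret_a, out_a, sizeof out_a);
    leaky_sut(public_in, secret_b, out_b, sizeof out_b);
    return strcmp(out_a, out_b) != 0;
}

int main(void)
{
    if (violates_noninterference("world", "SECRET-A", "SECRET-B"))
        puts("leak detected: public output depends on the secret input");
    return 0;
}
```

In a fuzzing setting, a grey-box fuzzer such as AFL++ would generate the public
inputs and secret pairs, and a two-run comparison of this kind would serve as
the crash-style oracle that flags flow-policy violations.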
Related papers
- Fixing Security Vulnerabilities with AI in OSS-Fuzz [9.730566646484304]
OSS-Fuzz is the most significant and widely used infrastructure for continuous validation of open source systems.
We customise the well-known AutoCodeRover agent for fixing security vulnerabilities.
Our experience with OSS-Fuzz vulnerability data shows that LLM agent autonomy is useful for successful security patching.
arXiv Detail & Related papers (2024-11-03T16:20:32Z) - Pipe-Cleaner: Flexible Fuzzing Using Security Policies [0.07499722271664144]
Pipe-Cleaner is a system for detecting and analyzing C code vulnerabilities.
It is based on flexible developer-designed security policies enforced by a tag-based runtime reference monitor.
We demonstrate the potential of this approach on several heap-related security vulnerabilities.
arXiv Detail & Related papers (2024-10-31T23:35:22Z) - FuzzTheREST: An Intelligent Automated Black-box RESTful API Fuzzer [0.0]
This work introduces a black-box RESTful API fuzzing tool that employs Reinforcement Learning (RL) for vulnerability detection.
The tool found a total of six unique vulnerabilities and achieved 55% code coverage.
arXiv Detail & Related papers (2024-07-19T14:43:35Z) - WildTeaming at Scale: From In-the-Wild Jailbreaks to (Adversarially) Safer Language Models [66.34505141027624]
We introduce WildTeaming, an automatic LLM safety red-teaming framework that mines in-the-wild user-chatbot interactions to discover 5.7K unique clusters of novel jailbreak tactics.
WildTeaming reveals previously unidentified vulnerabilities of frontier LLMs, resulting in up to 4.6x more diverse and successful adversarial attacks.
arXiv Detail & Related papers (2024-06-26T17:31:22Z) - FV8: A Forced Execution JavaScript Engine for Detecting Evasive Techniques [53.288368877654705]
FV8 is a modified V8 JavaScript engine designed to identify evasion techniques in JavaScript code.
It selectively enforces code execution on APIs that conditionally inject dynamic code.
It identifies 1,443 npm packages and 164 (82%) extensions containing at least one type of evasion.
arXiv Detail & Related papers (2024-05-21T19:54:19Z) - Beyond Random Inputs: A Novel ML-Based Hardware Fuzzing [16.22481369547266]
Hardware fuzzing is an effective approach to exploring and detecting security vulnerabilities in large-scale designs like modern processors.
We propose a novel ML-based hardware fuzzer, ChatFuzz, to address this challenge.
ChatFuzz achieves a condition coverage rate of 75% in just 52 minutes, compared to a state-of-the-art fuzzer.
arXiv Detail & Related papers (2024-04-10T09:28:54Z) - JailbreakBench: An Open Robustness Benchmark for Jailbreaking Large Language Models [123.66104233291065]
Jailbreak attacks cause large language models (LLMs) to generate harmful, unethical, or otherwise objectionable content.
However, evaluating these attacks presents a number of challenges that the current collection of benchmarks and evaluation techniques does not adequately address.
JailbreakBench is an open-sourced benchmark designed to address these challenges.
arXiv Detail & Related papers (2024-03-28T02:44:02Z) - Zero-Shot Detection of Machine-Generated Codes [83.0342513054389]
This work proposes a training-free approach for the detection of LLM-generated code.
We find that existing training-based or zero-shot text detectors are ineffective at detecting code.
Our method exhibits robustness against revision attacks and generalizes well to Java code.
arXiv Detail & Related papers (2023-10-08T10:08:21Z) - EDEFuzz: A Web API Fuzzer for Excessive Data Exposures [3.5061201620029885]
Excessive Data Exposure (EDE) was the third most significant API vulnerability of 2019.
There are few automated tools -- either in research or industry -- to effectively find and remediate such issues.
We build the first fuzzing tool -- that we call EDEFuzz -- to systematically detect EDEs.
arXiv Detail & Related papers (2023-01-23T04:05:08Z) - Fault-Aware Neural Code Rankers [64.41888054066861]
We propose fault-aware neural code rankers that can predict the correctness of a sampled program without executing it.
Our fault-aware rankers can significantly increase the pass@1 accuracy of various code generation models.
arXiv Detail & Related papers (2022-06-04T22:01:05Z) - VELVET: a noVel Ensemble Learning approach to automatically locate
VulnErable sTatements [62.93814803258067]
This paper presents VELVET, a novel ensemble learning approach to locate vulnerable statements in source code.
Our model combines graph-based and sequence-based neural networks to successfully capture the local and global context of a program graph.
VELVET achieves 99.6% and 43.6% top-1 accuracy over synthetic data and real-world data, respectively.
arXiv Detail & Related papers (2021-12-20T22:45:27Z)