PrescientFuzz: A more effective exploration approach for grey-box fuzzing
- URL: http://arxiv.org/abs/2404.18887v2
- Date: Fri, 24 Jan 2025 14:54:18 GMT
- Title: PrescientFuzz: A more effective exploration approach for grey-box fuzzing
- Authors: Daniel Blackwell, David Clark
- Abstract summary: We produce an augmented version of LibAFL's `fuzzbench' fuzzer, called PrescientFuzz, that makes use of semantic information from the target program's control flow graph (CFG).
We develop an input corpus scheduler that prioritises the selection of inputs for mutation based on the proximity of their execution path to uncovered edges.
- Score: 0.45053464397400894
- License:
- Abstract: Since the advent of AFL, the use of mutational, feedback-directed, grey-box fuzzers has become critical in the automated detection of security vulnerabilities. A great deal of research currently goes into their optimisation, including improving the rate at which they achieve branch coverage early in a campaign. We produce an augmented version of LibAFL's `fuzzbench' fuzzer, called PrescientFuzz, that makes use of semantic information from the target program's control flow graph (CFG). We develop an input corpus scheduler that prioritises the selection of inputs for mutation based on the proximity of their execution path to uncovered edges. Simple as this idea is, PrescientFuzz leads all fuzzers using the Google FuzzBench at the time of writing -- in both average code coverage and average ranking, across the benchmark SUTs. Whilst the existence of uncovered edges in the CFG does not guarantee their feasibility, the improvement in coverage over the state-of-the-art fuzzers suggests that this is not an issue in practice.
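The corpus-scheduling idea is simple enough to sketch. Below is a minimal, hypothetical Rust illustration (Rust chosen to match LibAFL's implementation language), not the paper's actual code: the `Cfg` and `input_score` names, the BFS distance metric, and the toy `main` are all assumptions. It scores each corpus input by how close any node on its execution path sits to an uncovered CFG edge, so that inputs executing near unexplored code are selected for mutation first.

```rust
use std::collections::{HashMap, HashSet, VecDeque};

type Node = u32;

/// Control flow graph as an adjacency list of successor nodes.
struct Cfg {
    succs: HashMap<Node, Vec<Node>>,
}

impl Cfg {
    /// BFS distance from `start` to the nearest node with an uncovered
    /// outgoing edge, or `None` if no uncovered edge is reachable.
    fn distance_to_uncovered(
        &self,
        start: Node,
        covered: &HashSet<(Node, Node)>,
    ) -> Option<usize> {
        let mut seen = HashSet::from([start]);
        let mut queue = VecDeque::from([(start, 0usize)]);
        while let Some((node, dist)) = queue.pop_front() {
            for &next in self.succs.get(&node).into_iter().flatten() {
                if !covered.contains(&(node, next)) {
                    return Some(dist); // an uncovered edge leaves `node`
                }
                if seen.insert(next) {
                    queue.push_back((next, dist + 1));
                }
            }
        }
        None
    }
}

/// Score an input by the closest uncovered edge to any node on its
/// execution path; the scheduler mutates low-scoring inputs first.
fn input_score(cfg: &Cfg, path: &[Node], covered: &HashSet<(Node, Node)>) -> Option<usize> {
    path.iter()
        .filter_map(|&n| cfg.distance_to_uncovered(n, covered))
        .min()
}

fn main() {
    // Toy CFG: 0 -> 1 -> 2, plus an uncovered edge 1 -> 3.
    let cfg = Cfg {
        succs: HashMap::from([(0, vec![1]), (1, vec![2, 3]), (2, vec![]), (3, vec![])]),
    };
    let covered = HashSet::from([(0, 1), (1, 2)]);
    // The path [0, 1, 2] passes through node 1, which has the
    // uncovered edge 1 -> 3, so its score is 0 (highest priority).
    println!("score: {:?}", input_score(&cfg, &[0, 1, 2], &covered));
}
```

In a real fuzzer, the `covered` set would be derived from the instrumentation's edge-coverage map and the scores refreshed as coverage grows; PrescientFuzz integrates this logic into LibAFL's corpus scheduler rather than a standalone function like the above.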
Related papers
- ISC4DGF: Enhancing Directed Grey-box Fuzzing with LLM-Driven Initial Seed Corpus Generation [32.6118621456906]
Directed grey-box fuzzing (DGF) has become essential, focusing on specific vulnerabilities.
ISC4DGF generates optimized initial seed corpus for DGF using Large Language Models (LLMs)
ISC4DGF achieved a 35.63x speedup and required 616.10x fewer target reaches.
arXiv Detail & Related papers (2024-09-22T06:27:28Z)
- G-Fuzz: A Directed Fuzzing Framework for gVisor [48.85077340822625]
G-Fuzz is a directed fuzzing framework for gVisor.
G-Fuzz has been deployed in industry and has detected multiple serious vulnerabilities.
arXiv Detail & Related papers (2024-09-20T01:00:22Z)
- FuzzCoder: Byte-level Fuzzing Test via Large Language Model [46.18191648883695]
We propose to adopt fine-tuned large language models (FuzzCoder) to learn patterns in the input files from successful attacks.
FuzzCoder can predict mutation locations and strategies in input files to trigger abnormal behaviors of the program.
arXiv Detail & Related papers (2024-09-03T14:40:31Z)
- FOX: Coverage-guided Fuzzing as Online Stochastic Control [13.3158115776899]
Fuzzing is an effective technique for discovering software vulnerabilities by generating random test inputs and executing them against the target program.
This paper addresses the limitations of existing coverage-guided fuzzers, focusing on the scheduler and mutator components.
We present FOX, a proof-of-concept implementation of our control-theoretic approach, and compare it to industry-standard fuzzers.
arXiv Detail & Related papers (2024-06-06T21:21:05Z)
- Generator-Based Fuzzers with Type-Based Targeted Mutation [1.4507298892594764]
In previous work, coverage-guided fuzzers used a mix of static analysis, taint analysis, and constraint-solving approaches to address this problem.
In this paper, we introduce a type-based mutation, along with constant string lookup, for Java GBF.
Results compared to a baseline GBF tool show an almost 20% average improvement in application coverage, and larger improvements when third-party code is included.
arXiv Detail & Related papers (2024-06-04T07:20:13Z)
- CovRL: Fuzzing JavaScript Engines with Coverage-Guided Reinforcement Learning for LLM-based Mutation [2.5864634852960444]
This paper presents a novel technique called CovRL (Coverage-guided Reinforcement Learning) that combines Large Language Models (LLMs) with reinforcement learning from coverage feedback.
CovRL-Fuzz identifies 48 real-world security-related bugs in the latest JavaScript engines, including 39 previously unknown vulnerabilities and 11 CVEs.
arXiv Detail & Related papers (2024-02-19T15:30:40Z)
- Rethinking PGD Attack: Is Sign Function Necessary? [131.6894310945647]
We present a theoretical analysis of how such sign-based update algorithm influences step-wise attack performance.
We propose a new raw gradient descent (RGD) algorithm that eliminates the use of sign.
The effectiveness of the proposed RGD algorithm has been demonstrated extensively in experiments.
arXiv Detail & Related papers (2023-12-03T02:26:58Z)
- Fuzzing with Quantitative and Adaptive Hot-Bytes Identification [6.442499249981947]
American fuzzy lop, a leading fuzzing tool, has demonstrated its powerful bug finding ability through a vast number of reported CVEs.
We propose a fuzzing approach designed based on the following principles.
Our evaluation results on 10 real-world programs and the LAVA-M dataset show that our approach achieves sustained increases in branch coverage and discovers more bugs than other fuzzers.
arXiv Detail & Related papers (2023-07-05T13:41:35Z)
- Spotting adversarial samples for speaker verification by neural vocoders [102.1486475058963]
We adopt neural vocoders to spot adversarial samples for automatic speaker verification (ASV).
We find that the difference between the ASV scores for the original and re-synthesized audio is a good indicator for discriminating between genuine and adversarial samples.
Our code will be made open-source so that future work can make comparisons.
arXiv Detail & Related papers (2021-07-01T08:58:16Z)
- Detection of Adversarial Supports in Few-shot Classifiers Using Feature Preserving Autoencoders and Self-Similarity [89.26308254637702]
We propose a detection strategy to highlight adversarial support sets.
We use feature-preserving autoencoder filtering together with the self-similarity of a support set to perform this detection.
To the best of our knowledge, our method is attack-agnostic and the first to explore detection for few-shot classifiers.
arXiv Detail & Related papers (2020-12-09T14:13:41Z)
- Gaussian MRF Covariance Modeling for Efficient Black-Box Adversarial Attacks [86.88061841975482]
We study the problem of generating adversarial examples in a black-box setting, where we only have access to a zeroth order oracle.
We use this setting to find fast one-step adversarial attacks, akin to a black-box version of the Fast Gradient Sign Method (FGSM).
We show that the method uses fewer queries and achieves higher attack success rates than the current state of the art.
arXiv Detail & Related papers (2020-10-08T18:36:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.