Hyperfuzzing: black-box security hypertesting with a grey-box fuzzer
- URL: http://arxiv.org/abs/2308.09081v1
- Date: Thu, 17 Aug 2023 16:15:02 GMT
- Title: Hyperfuzzing: black-box security hypertesting with a grey-box fuzzer
- Authors: Daniel Blackwell, Ingolf Becker, David Clark
- Abstract summary: LeakFuzzer advances the state of the art by using a noninterference security property together with a security flow policy as an oracle.
LeakFuzzer can detect the same set of errors that a normal fuzzer can detect, with the addition of being able to detect violations of secure information flow policies.
- Score: 2.2000560351723504
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Information leakage is a class of error that can lead to severe consequences.
However, unlike other errors, it is rarely explicitly considered during the
software testing process. LeakFuzzer advances the state of the art by using a
noninterference security property together with a security flow policy as an
oracle. As the tool extends the state-of-the-art fuzzer AFL++, LeakFuzzer
inherits its advantages, such as scalability, automated input
generation, high coverage, and low developer intervention.
The tool can detect the same set of errors that a normal fuzzer can detect,
with the addition of being able to detect violations of secure information flow
policies.
We evaluated LeakFuzzer on a diverse set of 10 C and C++ benchmarks
containing known information leaks, ranging in size from just 80 to over 900k
lines of code. Seven of these are taken from real-world CVEs including
Heartbleed and a recent error in PostgreSQL. Given 20 24-hour runs, LeakFuzzer
can find 100% of the leaks in the SUTs, whereas existing techniques such as
the CBMC model checker and AFL++ augmented with different sanitizers can
find only 40% at best.
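As a rough illustration of the noninterference oracle described in the abstract, the sketch below runs a toy target twice with the same public (low) input but two different secret (high) inputs, and flags any difference in the publicly observable output as a flow-policy violation. The `process_request` function and its planted leak are hypothetical stand-ins for a system under test; this is not code from LeakFuzzer or AFL++.

```c
/* Minimal sketch of a two-run noninterference check, assuming a hypothetical
 * target `process_request` whose public output must not depend on the secret. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define OUT_LEN 64

/* Toy SUT: the flow policy marks `secret` as HIGH and `out` as LOW, so a
 * correct implementation never lets `secret` influence `out`.  The planted
 * bug copies a secret byte when the public request ends with '?'. */
static void process_request(const uint8_t *pub, size_t pub_len,
                            const uint8_t *secret, size_t sec_len,
                            uint8_t out[OUT_LEN])
{
    memset(out, 0, OUT_LEN);
    size_t n = pub_len < OUT_LEN ? pub_len : OUT_LEN;
    memcpy(out, pub, n);                    /* legitimate: public echoes public */
    if (pub_len > 0 && pub[pub_len - 1] == '?' && sec_len > 0)
        out[0] = secret[0];                 /* planted leak: HIGH flows to LOW */
}

/* Noninterference oracle: two runs that agree on all LOW inputs must agree
 * on all LOW outputs; any difference violates the flow policy. */
static int leaks(const uint8_t *pub, size_t pub_len,
                 const uint8_t *sec_a, const uint8_t *sec_b, size_t sec_len)
{
    uint8_t out_a[OUT_LEN], out_b[OUT_LEN];
    process_request(pub, pub_len, sec_a, sec_len, out_a);
    process_request(pub, pub_len, sec_b, sec_len, out_b);
    return memcmp(out_a, out_b, OUT_LEN) != 0;
}

int main(void)
{
    /* In a real harness the fuzzer would supply these buffers; fixed values
     * keep the sketch runnable on its own. */
    const uint8_t pub[]   = "GET /status?";
    const uint8_t sec_a[] = "hunter2";
    const uint8_t sec_b[] = "s3cr3t!";

    if (leaks(pub, sizeof pub - 1, sec_a, sec_b, sizeof sec_a - 1)) {
        fprintf(stderr, "noninterference violated: secret influences public output\n");
        abort();   /* surfaced to the fuzzer like an ordinary crash */
    }
    return 0;
}
```

Reporting the violation with `abort()` is one common way to surface a semantic oracle to a grey-box fuzzer, which otherwise only notices crashes and sanitizer aborts.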
Related papers
- Your Fix Is My Exploit: Enabling Comprehensive DL Library API Fuzzing with Large Language Models [49.214291813478695]
Deep learning (DL) libraries, widely used in AI applications, often contain vulnerabilities like buffer overflows and use-after-free errors.
Traditional fuzzing struggles with the complexity and API diversity of DL libraries.
We propose DFUZZ, an LLM-driven fuzzing approach for DL libraries.
arXiv Detail & Related papers (2025-01-08T07:07:22Z)
- Layer-Level Self-Exposure and Patch: Affirmative Token Mitigation for Jailbreak Attack Defense [55.77152277982117]
We introduce Layer-AdvPatcher, a methodology designed to defend against jailbreak attacks.
We use an unlearning strategy to patch specific layers within large language models through self-augmented datasets.
Our framework reduces the harmfulness and attack success rate of jailbreak attacks.
arXiv Detail & Related papers (2025-01-05T19:06:03Z)
- CKGFuzzer: LLM-Based Fuzz Driver Generation Enhanced By Code Knowledge Graph [29.490817477791357]
We propose an automated fuzz testing method driven by a code knowledge graph and powered by an intelligent agent system.
The code knowledge graph is constructed through interprocedural program analysis, where each node in the graph represents a code entity (a schematic node layout is sketched after this list).
CKGFuzzer achieved an average improvement of 8.73% in code coverage compared to state-of-the-art techniques.
arXiv Detail & Related papers (2024-11-18T12:41:16Z)
- Fixing Security Vulnerabilities with AI in OSS-Fuzz [9.730566646484304]
OSS-Fuzz is the most significant and widely used infrastructure for continuous validation of open source systems.
We customise the well-known AutoCodeRover agent for fixing security vulnerabilities.
Our experience with OSS-Fuzz vulnerability data shows that LLM agent autonomy is useful for successful security patching.
arXiv Detail & Related papers (2024-11-03T16:20:32Z)
- Pipe-Cleaner: Flexible Fuzzing Using Security Policies [0.07499722271664144]
Pipe-Cleaner is a system for detecting and analyzing C code vulnerabilities.
It is based on flexible developer-designed security policies enforced by a tag-based runtime reference monitor.
We demonstrate the potential of this approach on several heap-related security vulnerabilities.
arXiv Detail & Related papers (2024-10-31T23:35:22Z)
- FuzzTheREST: An Intelligent Automated Black-box RESTful API Fuzzer [0.0]
This work introduces a black-box RESTful API fuzzy testing tool that employs Reinforcement Learning (RL) for vulnerability detection.
The tool found a total of six unique vulnerabilities and achieved 55% code coverage.
arXiv Detail & Related papers (2024-07-19T14:43:35Z)
- WildTeaming at Scale: From In-the-Wild Jailbreaks to (Adversarially) Safer Language Models [66.34505141027624]
We introduce WildTeaming, an automatic LLM safety red-teaming framework that mines in-the-wild user-chatbot interactions to discover 5.7K unique clusters of novel jailbreak tactics.
WildTeaming reveals previously unidentified vulnerabilities of frontier LLMs, resulting in up to 4.6x more diverse and successful adversarial attacks.
arXiv Detail & Related papers (2024-06-26T17:31:22Z)
- FV8: A Forced Execution JavaScript Engine for Detecting Evasive Techniques [53.288368877654705]
FV8 is a modified V8 JavaScript engine designed to identify evasion techniques in JavaScript code.
It selectively enforces code execution on APIs that conditionally inject dynamic code.
It identifies 1,443 npm packages and 164 (82%) extensions containing at least one type of evasion.
arXiv Detail & Related papers (2024-05-21T19:54:19Z)
- Beyond Random Inputs: A Novel ML-Based Hardware Fuzzing [16.22481369547266]
Hardware fuzzing is an effective approach to exploring and detecting security vulnerabilities in large-scale designs like modern processors.
We propose a novel ML-based hardware fuzzer, ChatFuzz, to address this challenge.
ChatFuzz achieves a condition coverage rate of 75% in just 52 minutes, compared to a state-of-the-art fuzzer.
arXiv Detail & Related papers (2024-04-10T09:28:54Z)
- Zero-Shot Detection of Machine-Generated Codes [83.0342513054389]
This work proposes a training-free approach for the detection of LLMs-generated codes.
We find that existing training-based or zero-shot text detectors are ineffective in detecting code.
Our method exhibits robustness against revision attacks and generalizes well to Java codes.
arXiv Detail & Related papers (2023-10-08T10:08:21Z)
- Fault-Aware Neural Code Rankers [64.41888054066861]
We propose fault-aware neural code rankers that can predict the correctness of a sampled program without executing it.
Our fault-aware rankers can significantly increase the pass@1 accuracy of various code generation models.
arXiv Detail & Related papers (2022-06-04T22:01:05Z)
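As a loose sketch of the code-knowledge-graph idea summarised in the CKGFuzzer entry above, the declaration below shows one possible node layout in which each node stands for a code entity and edges record interprocedural call relations. The type and field names are illustrative assumptions, not CKGFuzzer's actual schema.

```c
#include <stddef.h>
#include <stdio.h>

/* Illustrative only: one possible shape for a code knowledge graph node,
 * where each node represents a code entity recovered by interprocedural
 * analysis (names below are assumptions, not CKGFuzzer's schema). */
typedef enum { ENTITY_FUNCTION, ENTITY_TYPE, ENTITY_API } entity_kind;

typedef struct ckg_node {
    entity_kind kind;           /* what kind of code entity this node is  */
    const char *name;           /* e.g. an API function's identifier      */
    const char *source_file;    /* where the entity is defined            */
    struct ckg_node **callees;  /* interprocedural call edges             */
    size_t n_callees;           /* number of outgoing call edges          */
} ckg_node;

int main(void)
{
    /* Two toy entities: an API entry point that calls an internal helper. */
    ckg_node helper = { ENTITY_FUNCTION, "parse_header", "parse.c", NULL, 0 };
    ckg_node *edges[] = { &helper };
    ckg_node api = { ENTITY_API, "lib_process", "api.c", edges, 1 };

    printf("%s -> %s\n", api.name, api.callees[0]->name);
    return 0;
}
```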
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.