PBFuzz: Agentic Directed Fuzzing for PoV Generation
- URL: http://arxiv.org/abs/2512.04611v1
- Date: Thu, 04 Dec 2025 09:34:22 GMT
- Title: PBFuzz: Agentic Directed Fuzzing for PoV Generation
- Authors: Haochen Zeng, Andrew Bao, Jiajun Cheng, Chengyu Song,
- Abstract summary: We develop an agentic directed fuzzing framework called PBFuzz. PBFuzz tackles four challenges in PoV generation: autonomous code reasoning for semantic constraint extraction, custom program-analysis tools for targeted inference, persistent memory to avoid hypothesis drift, and property-based testing. Experiments show PBFuzz triggered 57 vulnerabilities, surpassing all baselines, and uniquely triggered 17 vulnerabilities not exposed by existing fuzzers.
- Score: 6.90561548463863
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Proof-of-Vulnerability (PoV) input generation is a critical task in software security and supports downstream applications such as path generation and validation. Generating a PoV input requires solving two sets of constraints: (1) reachability constraints for reaching vulnerable code locations, and (2) triggering constraints for activating the target vulnerability. Existing approaches, including directed greybox fuzzing and LLM-assisted fuzzing, struggle to efficiently satisfy these constraints. This work presents an agentic method that mimics human experts. Human analysts iteratively study code to extract semantic reachability and triggering constraints, form hypotheses about PoV triggering strategies, encode them as test inputs, and refine their understanding using debugging feedback. We automate this process with an agentic directed fuzzing framework called PBFuzz. PBFuzz tackles four challenges in agentic PoV generation: autonomous code reasoning for semantic constraint extraction, custom program-analysis tools for targeted inference, persistent memory to avoid hypothesis drift, and property-based testing for efficient constraint solving while preserving input structure. Experiments on the Magma benchmark show strong results. PBFuzz triggered 57 vulnerabilities, surpassing all baselines, and uniquely triggered 17 vulnerabilities not exposed by existing fuzzers. PBFuzz achieved this within a 30-minute budget per target, while conventional approaches use 24 hours. Median time-to-exposure was 339 seconds for PBFuzz versus 8680 seconds for AFL++ with CmpLog, giving a 25.6x efficiency improvement with an API cost of 1.83 USD per vulnerability.
Related papers
- Execution-State-Aware LLM Reasoning for Automated Proof-of-Vulnerability Generation [36.950993500170014]
We present DrillAgent, an agentic framework that reformulates PoV generation as an iterative hypothesis-verification-refinement process. We evaluate DrillAgent on SEC-bench, a large-scale benchmark of real-world C/C++ vulnerabilities.
arXiv Detail & Related papers (2026-02-14T03:17:27Z)
- MulVul: Retrieval-augmented Multi-Agent Code Vulnerability Detection via Cross-Model Prompt Evolution [28.062506040151153]
Large Language Models (LLMs) struggle to automate real-world vulnerability detection due to two key limitations. The heterogeneity of vulnerability patterns undermines the effectiveness of a single unified model, and manual prompt engineering for massive weakness categories is unscalable. We propose MulVul, a retrieval-augmented multi-agent framework for precise and broad-coverage vulnerability detection.
arXiv Detail & Related papers (2026-01-26T12:43:10Z)
- Sponge Tool Attack: Stealthy Denial-of-Efficiency against Tool-Augmented Agentic Reasoning [58.432996881401415]
Recent work augments large language models (LLMs) with external tools to enable agentic reasoning. We propose Sponge Tool Attack (STA), which disrupts agentic reasoning solely by rewriting the input prompt. STA generates benign-looking prompt rewrites from the original one with high semantic fidelity.
arXiv Detail & Related papers (2026-01-24T19:36:51Z)
- Training-Free Loosely Speculative Decoding: Accepting Semantically Correct Drafts Beyond Exact Match [21.810129153556044]
Training-Free Loosely Speculative Decoding (FLy) is a novel method that loosens the rigid verification criterion. We show that FLy preserves more than 99% of the target model's accuracy while achieving an average 2.81x speedup.
arXiv Detail & Related papers (2025-11-28T08:23:30Z)
- Backdoor Collapse: Eliminating Unknown Threats via Known Backdoor Aggregation in Language Models [75.29749026964154]
Our method reduces the average Attack Success Rate to 4.41% across multiple benchmarks. Clean accuracy and utility are preserved within 0.5% of the original model. The defense generalizes across different types of backdoors, confirming its robustness in practical deployment scenarios.
arXiv Detail & Related papers (2025-10-11T15:47:35Z)
- DiffuGuard: How Intrinsic Safety is Lost and Found in Diffusion Large Language Models [50.21378052667732]
We conduct an in-depth analysis of dLLM vulnerabilities to jailbreak attacks across two distinct dimensions: intra-step and inter-step dynamics. We propose DiffuGuard, a training-free defense framework that addresses vulnerabilities through a dual-stage approach.
arXiv Detail & Related papers (2025-09-29T05:17:10Z)
- VulAgent: Hypothesis-Validation based Multi-Agent Vulnerability Detection [55.957275374847484]
VulAgent is a multi-agent vulnerability detection framework based on hypothesis validation. It implements a semantics-sensitive, multi-view detection pipeline, each view aligned to a specific analysis perspective. On average, VulAgent improves overall accuracy by 6.6%, increases the correct identification rate of vulnerable-fixed code pairs by up to 450%, and reduces the false positive rate by about 36%.
arXiv Detail & Related papers (2025-09-15T02:25:38Z)
- Weakly Supervised Vulnerability Localization via Multiple Instance Learning [46.980136742826836]
We propose a novel approach called WAVES for WeAkly supervised Vulnerability localization via multiplE inStance learning. WAVES has the capability to determine whether a function is vulnerable (i.e., vulnerability detection) and pinpoint the vulnerable statements. Our approach achieves comparable performance in vulnerability detection and state-of-the-art performance in statement-level vulnerability localization.
arXiv Detail & Related papers (2025-09-14T15:11:39Z)
- FaultLine: Automated Proof-of-Vulnerability Generation Using LLM Agents [17.658431034176065]
FaultLine is an agent workflow that automatically generates proof-of-vulnerability (PoV) test cases. It does not use language-specific static or dynamic analysis components, which enables it to be used across programming languages. On a dataset of 100 known vulnerabilities in Java, C, and C++ projects, FaultLine is able to generate PoV tests for 16 projects, compared to just 9 for CodeAct 2.1.
arXiv Detail & Related papers (2025-07-21T04:55:34Z)
- LLAMA: Multi-Feedback Smart Contract Fuzzing Framework with LLM-Guided Seed Generation [56.84049855266145]
We propose a Multi-feedback Smart Contract Fuzzing framework (LLAMA) that integrates evolutionary mutation strategies and hybrid testing techniques. LLAMA achieves 91% instruction coverage and 90% branch coverage, while detecting 132 out of 148 known vulnerabilities. These results highlight LLAMA's effectiveness, adaptability, and practicality in real-world smart contract security testing scenarios.
arXiv Detail & Related papers (2025-07-16T09:46:58Z)
- DRIFT: Dynamic Rule-Based Defense with Injection Isolation for Securing LLM Agents [52.92354372596197]
Large Language Models (LLMs) are increasingly central to agentic systems due to their strong reasoning and planning capabilities. This interaction also introduces the risk of prompt injection attacks, where malicious inputs from external sources can mislead the agent's behavior. We propose a Dynamic Rule-based Isolation Framework for Trustworthy agentic systems, which enforces both control- and data-level constraints.
arXiv Detail & Related papers (2025-06-13T05:01:09Z)
- Directed Greybox Fuzzing via Large Language Model [5.667013605202579]
HGFuzzer is an automatic framework that transforms path constraint problems into targeted code generation tasks. We evaluate HGFuzzer on 20 real-world vulnerabilities, successfully triggering 17, including 11 within the first minute. HGFuzzer discovered 9 previously unknown vulnerabilities, all of which were assigned CVE IDs.
arXiv Detail & Related papers (2025-05-06T11:04:07Z)
- AI-Based Vulnerability Analysis of NFT Smart Contracts [6.378351117969227]
This study proposes an AI-driven approach to detect vulnerabilities in NFT smart contracts. We collected 16,527 public smart contract codes, classifying them into five vulnerability categories: Risky Mutable Proxy, ERC-721 Reentrancy, Unlimited Minting, Missing Requirements, and Public Burn. A random forest model was implemented to improve robustness through random data/feature sampling and multitree integration.
arXiv Detail & Related papers (2025-04-18T08:55:31Z)
- MIBP-Cert: Certified Training against Data Perturbations with Mixed-Integer Bilinear Programs [50.41998220099097]
Data errors, corruptions, and poisoning attacks during training pose a major threat to the reliability of modern AI systems. We introduce MIBP-Cert, a novel certification method based on mixed-integer bilinear programming (MIBP). By computing the set of parameters reachable through perturbed or manipulated data, we can predict all possible outcomes and guarantee robustness.
arXiv Detail & Related papers (2024-12-13T14:56:39Z)
- RSFuzz: A Robustness-Guided Swarm Fuzzing Framework Based on Behavioral Constraints [19.659469020494022]
RSFuzz is a robustness-guided swarm fuzzing framework designed to detect logical vulnerabilities in multi-robot systems. We construct two swarm fuzzing schemes, Single Attacker Fuzzing (SA-Fuzzing) and Multiple Attacker Fuzzing (MA-Fuzzing). Results show RSFuzz outperforms the state-of-the-art with an average improvement of 17.75% in effectiveness and a 38.4% increase in efficiency.
arXiv Detail & Related papers (2024-09-07T06:46:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.