ColorGo: Directed Concolic Execution
- URL: http://arxiv.org/abs/2505.21130v1
- Date: Tue, 27 May 2025 12:46:11 GMT
- Title: ColorGo: Directed Concolic Execution
- Authors: Jia Li, Jiacheng Shen, Yuxin Su, Michael R. Lyu
- Abstract summary: Directed fuzzing is a critical technique in cybersecurity, targeting specific sections of a program. Current directed fuzzing methods exhibit a trade-off between efficiency and effectiveness. We present ColorGo, a new directed whitebox fuzzer that concretely executes the instrumented program with constraint-solving capability.
- Score: 40.91007243855959
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Directed fuzzing is a critical technique in cybersecurity, targeting specific sections of a program. This approach is essential in various security-related domains such as crash reproduction, patch testing, and vulnerability detection. Despite its importance, current directed fuzzing methods exhibit a trade-off between efficiency and effectiveness. For instance, directed grey-box fuzzing, while efficient in generating fuzzing inputs, lacks sufficient precision; the low precision wastes time executing code that cannot help reach the target site. Conversely, interpreter- or observer-based directed symbolic execution can produce high-quality inputs while incurring non-negligible runtime overhead. These limitations undermine the feasibility of directed fuzzers in real-world scenarios. To achieve both efficiency and effectiveness, in this paper we integrate compilation-based concolic execution into directed fuzzing and present ColorGo, achieving high scalability while preserving the high precision of symbolic execution. ColorGo is a new directed whitebox fuzzer that concretely executes the instrumented program with constraint-solving capability on generated inputs. It guides exploration by incremental coloration, which combines static reachability analysis and dynamic feasibility analysis. We evaluated ColorGo on diverse real-world programs and demonstrated that it outperforms AFLGo by up to 100x in reaching target sites and reproducing target crashes.
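The abstract's notion of incremental coloration can be illustrated with a minimal sketch. The CFG representation, function names, and the all-or-nothing trace check below are hypothetical simplifications for illustration, not ColorGo's actual implementation: the static phase "colors" every basic block from which the target is reachable (backward BFS on the control-flow graph), and the dynamic phase abandons any execution whose trace leaves the colored region, since such a path can no longer reach the target.

```python
from collections import defaultdict, deque

def color_reachable(cfg, target):
    """Static reachability analysis: backward BFS from the target block.
    A block is 'colored' iff some CFG path from it can reach the target."""
    preds = defaultdict(set)
    for src, dsts in cfg.items():
        for dst in dsts:
            preds[dst].add(src)
    colored, work = {target}, deque([target])
    while work:
        node = work.popleft()
        for p in preds[node]:
            if p not in colored:
                colored.add(p)
                work.append(p)
    return colored

def should_keep_exploring(trace, colored):
    """Dynamic feasibility check: once execution leaves the colored
    region, the target is unreachable, so the input is discarded."""
    return all(block in colored for block in trace)

# Toy CFG: entry branches to a (leads to target) and b (dead end).
cfg = {"entry": ["a", "b"], "a": ["target"], "b": ["exit"],
       "target": [], "exit": []}
colored = color_reachable(cfg, "target")
```

On this toy graph the colored set is {entry, a, target}; a trace through b is pruned immediately, which is the precision gain the abstract attributes to coloration.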
Related papers
- SkillJect: Automating Stealthy Skill-Based Prompt Injection for Coding Agents with Trace-Driven Closed-Loop Refinement [120.52289344734415]
We propose an automated framework for stealthy prompt injection tailored to agent skills. The framework forms a closed loop with three agents: an Attack Agent that synthesizes injection skills under explicit stealth constraints, a Code Agent that executes tasks using the injected skills, and an Evaluate Agent that logs action traces. Our method consistently achieves high attack success rates under realistic settings.
arXiv Detail & Related papers (2026-02-15T16:09:48Z) - Unbiased Gradient Estimation for Event Binning via Functional Backpropagation [64.88399635309918]
We propose a novel framework for unbiased gradient estimation of arbitrary binning functions by synthesizing weak derivatives during backpropagation. We achieve 9.4% lower EPE in self-supervised optical flow and 5.1% lower RMS error in SLAM, demonstrating broad benefits for event-based visual perception.
arXiv Detail & Related papers (2026-02-13T04:05:03Z) - Rust and Go directed fuzzing with LibAFL-DiFuzz [0.0]
We present a novel approach to directed fuzzing tailored specifically for Rust and Go applications. Our implemented fuzzing tools, based on the LibAFL-DiFuzz backend, demonstrate competitive advantages.
arXiv Detail & Related papers (2026-01-30T09:52:50Z) - Enhancing Fuzz Testing Efficiency through Automated Fuzz Target Generation [0.0]
We introduce an approach to improving fuzz target generation through static analysis of library source code. Our findings are demonstrated through the application of this approach to the generation of fuzz targets for C/C++ libraries.
arXiv Detail & Related papers (2026-01-17T09:08:11Z) - FuzzRDUCC: Fuzzing with Reconstructed Def-Use Chain Coverage [6.827408090670258]
Binary-only fuzzing often struggles to achieve thorough code coverage and uncover hidden vulnerabilities. We introduce FuzzRDUCC, a novel fuzzing framework that employs symbolic execution to reconstruct definition-use (def-use) chains directly from binary executables.
arXiv Detail & Related papers (2025-09-05T09:47:34Z) - Locus: Agentic Predicate Synthesis for Directed Fuzzing [20.533963203761115]
Directed fuzzing aims to find program inputs that lead to specified target program states. Existing approaches rely on branch distances or manually specified constraints to guide the search. We present Locus, a novel framework to improve the efficiency of directed fuzzing.
arXiv Detail & Related papers (2025-08-29T01:47:07Z) - Hybrid Approach to Directed Fuzzing [0.0]
We propose a hybrid approach to directed fuzzing with a novel seed scheduling algorithm. We implement our approach in the Sydr-Fuzz tool, using LibAFL-DiFuzz as the directed fuzzer and Sydr as the dynamic symbolic executor.
arXiv Detail & Related papers (2025-07-07T10:29:16Z) - Improving Black-Box Generative Attacks via Generator Semantic Consistency [51.470649503929344]
Generative attacks produce adversarial examples in a single forward pass at test time. We enforce semantic consistency by aligning the early generator's intermediate features to an EMA teacher. Our approach can be seamlessly integrated into existing generative attacks with consistent improvements in black-box transfer.
arXiv Detail & Related papers (2025-06-23T02:35:09Z) - Exposing Go's Hidden Bugs: A Novel Concolic Framework [2.676686591720132]
We present Zorya, a novel methodology to evaluate Go programs comprehensively. By systematically exploring execution paths to uncover vulnerabilities beyond conventional testing, symbolic execution offers distinct advantages. Our solution employs Ghidra's P-Code as an intermediate representation (IR).
arXiv Detail & Related papers (2025-05-26T16:26:20Z) - Directed Greybox Fuzzing via Large Language Model [5.667013605202579]
HGFuzzer is an automatic framework that transforms path constraint problems into targeted code generation tasks. We evaluate HGFuzzer on 20 real-world vulnerabilities, successfully triggering 17, including 11 within the first minute. HGFuzzer discovered 9 previously unknown vulnerabilities, all of which were assigned CVE IDs.
arXiv Detail & Related papers (2025-05-06T11:04:07Z) - Large Language Model assisted Hybrid Fuzzing [8.603235938006632]
We show how to achieve the effect of concolic execution without having to compute and solve symbolic path constraints. A Large Language Model (LLM) is used as a solver to generate the modified input for reaching the desired branches.
arXiv Detail & Related papers (2024-12-20T14:23:25Z) - FuzzDistill: Intelligent Fuzzing Target Selection using Compile-Time Analysis and Machine Learning [0.0]
I present FuzzDistill, an approach that harnesses compile-time data and machine learning to refine fuzzing targets. I demonstrate the efficacy of the approach through experiments conducted on real-world software, showing substantial reductions in testing time.
arXiv Detail & Related papers (2024-12-11T04:55:58Z) - Stanceformer: Target-Aware Transformer for Stance Detection [59.69858080492586]
Stance Detection involves discerning the stance expressed in a text towards a specific subject or target.
Prior works have relied on existing transformer models that lack the capability to prioritize targets effectively.
We introduce Stanceformer, a target-aware transformer model that incorporates enhanced attention towards the targets during both training and inference.
arXiv Detail & Related papers (2024-10-09T17:24:28Z) - PrescientFuzz: A more effective exploration approach for grey-box fuzzing [0.45053464397400894]
We produce an augmented version of LibAFL's 'fuzzbench' fuzzer, called PrescientFuzz, that makes use of semantic information from the target program's control flow graph (CFG). We develop an input corpus scheduler that prioritises the selection of inputs for mutation based on the proximity of their execution path to uncovered edges.
arXiv Detail & Related papers (2024-04-29T17:21:18Z) - Camouflage is all you need: Evaluating and Enhancing Language Model Robustness Against Camouflage Adversarial Attacks [53.87300498478744]
Adversarial attacks represent a substantial challenge in Natural Language Processing (NLP).
This study undertakes a systematic exploration of this challenge in two distinct phases: vulnerability evaluation and resilience enhancement.
Results suggest a trade-off between performance and robustness, with some models maintaining similar performance while gaining robustness.
arXiv Detail & Related papers (2024-02-15T10:58:22Z) - Rethinking PGD Attack: Is Sign Function Necessary? [131.6894310945647]
We present a theoretical analysis of how such sign-based update algorithm influences step-wise attack performance.
We propose a new raw gradient descent (RGD) algorithm that eliminates the use of sign.
The effectiveness of the proposed RGD algorithm has been demonstrated extensively in experiments.
arXiv Detail & Related papers (2023-12-03T02:26:58Z) - Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
arXiv Detail & Related papers (2020-08-17T07:16:57Z) - Spanning Attack: Reinforce Black-box Attacks with Unlabeled Data [96.92837098305898]
Black-box attacks aim to craft adversarial perturbations by querying input-output pairs of machine learning models.
Black-box attacks often suffer from the issue of query inefficiency due to the high dimensionality of the input space.
We propose a novel technique called the spanning attack, which constrains adversarial perturbations in a low-dimensional subspace via spanning an auxiliary unlabeled dataset.
arXiv Detail & Related papers (2020-05-11T05:57:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.