FuzzSlice: Pruning False Positives in Static Analysis Warnings Through
Function-Level Fuzzing
- URL: http://arxiv.org/abs/2402.01923v1
- Date: Fri, 2 Feb 2024 21:49:24 GMT
- Title: FuzzSlice: Pruning False Positives in Static Analysis Warnings Through
Function-Level Fuzzing
- Authors: Aniruddhan Murali, Noble Saji Mathews, Mahmoud Alfadel, Meiyappan
Nagappan and Meng Xu
- Abstract summary: We propose FuzzSlice, a framework that automatically prunes possible false positives among static analysis warnings.
The key insight that we base our work on is that a warning that does not yield a crash when fuzzed at the function level in a given time budget is a possible false positive.
FuzzSlice reduces false positives by 62.26% in the open-source repositories and by 100% in the Juliet dataset.
- Score: 5.748423489074936
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Manual confirmation of static analysis reports is a daunting task. This is
due to both the large number of warnings and the high density of false
positives among them. Fuzzing techniques have been proposed to verify static
analysis warnings. However, a major limitation is that fuzzing the whole
project to reach all static analysis warnings is not feasible: it can take
several days, and machine time grows exponentially for each linear gain in code coverage.
Therefore, we propose FuzzSlice, a novel framework that automatically prunes
possible false positives among static analysis warnings. Unlike prior work that
mostly focuses on confirming true positives among static analysis warnings,
which requires end-to-end fuzzing, FuzzSlice focuses on ruling out potential
false positives, which are the majority in static analysis reports. The key
insight that we base our work on is that a warning that does not yield a crash
when fuzzed at the function level in a given time budget is a possible false
positive. To achieve this, FuzzSlice first aims to generate compilable code
slices at the function level and then fuzzes these code slices instead of the
entire binary. FuzzSlice is also unlikely to misclassify a true bug as a false
positive because the crashing input can be reproduced by a fuzzer at the
function level as well. We evaluate FuzzSlice on the Juliet synthetic dataset
and real-world complex C projects. Our evaluation shows that the ground truth
in the Juliet dataset had 864 false positives which were all detected by
FuzzSlice. For the open-source repositories, we were able to get the developers
from two of these open-source repositories to independently label these
warnings. FuzzSlice automatically identifies 33 out of 53 false positives
confirmed by developers in these two repositories. Thus FuzzSlice reduces false
positives by 62.26% in the open-source repositories and by 100% in the Juliet
dataset.
Related papers
- Pipe-Cleaner: Flexible Fuzzing Using Security Policies [0.07499722271664144]
Pipe-Cleaner is a system for detecting and analyzing C code vulnerabilities.
It is based on flexible developer-designed security policies enforced by a tag-based runtime reference monitor.
We demonstrate the potential of this approach on several heap-related security vulnerabilities.
arXiv Detail & Related papers (2024-10-31T23:35:22Z)
- FuzzCoder: Byte-level Fuzzing Test via Large Language Model [46.18191648883695]
We propose to adopt fine-tuned large language models (FuzzCoder) to learn patterns in the input files from successful attacks.
FuzzCoder can predict mutation locations and strategies in input files to trigger abnormal behaviors of the program.
arXiv Detail & Related papers (2024-09-03T14:40:31Z)
- FoC: Figure out the Cryptographic Functions in Stripped Binaries with LLMs [54.27040631527217]
We propose a novel framework called FoC to Figure out the Cryptographic functions in stripped binaries.
FoC-BinLLM outperforms ChatGPT by 14.61% on the ROUGE-L score.
FoC-Sim outperforms the previous best methods with a 52% higher Recall@1.
arXiv Detail & Related papers (2024-03-27T09:45:33Z)
- FineWAVE: Fine-Grained Warning Verification of Bugs for Automated Static Analysis Tools [18.927121513404924]
Automated Static Analysis Tools (ASATs) have evolved over time to assist in detecting bugs.
Previous research efforts have explored learning-based methods to validate the reported warnings.
We propose FineWAVE, a learning-based approach that verifies bug-sensitive warnings at a fine-grained granularity.
arXiv Detail & Related papers (2024-03-24T06:21:35Z)
- Benchmarking Deep Learning Fuzzers [11.118370064698869]
We run three state-of-the-art DL fuzzers, FreeFuzz, DeepRel, and DocTer, on the benchmark by following their instructions.
We find that these fuzzers are unable to detect many real bugs collected in our benchmark dataset.
Our systematic analysis further identifies four major, broad, and common factors that affect these fuzzers' ability to detect real bugs.
arXiv Detail & Related papers (2023-10-10T18:09:16Z)
- Learning to Reduce False Positives in Analytic Bug Detectors [12.733531603080674]
We propose a Transformer-based learning approach to identify false positive bug warnings.
We demonstrate that our models can improve the precision of static analysis by 17.5%.
arXiv Detail & Related papers (2022-03-08T04:26:26Z)
- VELVET: a noVel Ensemble Learning approach to automatically locate VulnErable sTatements [62.93814803258067]
This paper presents VELVET, a novel ensemble learning approach to locate vulnerable statements in source code.
Our model combines graph-based and sequence-based neural networks to successfully capture the local and global context of a program graph.
VELVET achieves 99.6% and 43.6% top-1 accuracy over synthetic data and real-world data, respectively.
arXiv Detail & Related papers (2021-12-20T22:45:27Z)
- Sample-Efficient Safety Assurances using Conformal Prediction [57.92013073974406]
Early warning systems can provide alerts when an unsafe situation is imminent.
To reliably improve safety, these warning systems should have a provable false negative rate.
We present a framework that combines a statistical inference technique known as conformal prediction with a simulator of robot/environment dynamics.
arXiv Detail & Related papers (2021-09-28T23:00:30Z)
- Learning Stable Classifiers by Transferring Unstable Features [59.06169363181417]
We study transfer learning in the presence of spurious correlations.
We experimentally demonstrate that directly transferring the stable feature extractor learned on the source task may not eliminate these biases for the target task.
We hypothesize that the unstable features in the source task and those in the target task are directly related.
arXiv Detail & Related papers (2021-06-15T02:41:12Z)
- Assessing Validity of Static Analysis Warnings using Ensemble Learning [4.05739885420409]
Static Analysis (SA) tools are used to identify potential weaknesses in code and fix them in advance, while the code is being developed.
These rules-based static analysis tools generally report a lot of false warnings along with the actual ones.
We propose a Machine Learning (ML)-based learning process that uses source codes, historic commit data, and classifier-ensembles to prioritize the True warnings.
arXiv Detail & Related papers (2021-04-21T19:39:20Z)
- D2A: A Dataset Built for AI-Based Vulnerability Detection Methods Using Differential Analysis [55.15995704119158]
We propose D2A, a differential analysis based approach to label issues reported by static analysis tools.
We use D2A to generate a large labeled dataset to train models for vulnerability identification.
arXiv Detail & Related papers (2021-02-16T07:46:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.