On the Effectiveness of Function-Level Vulnerability Detectors for
Inter-Procedural Vulnerabilities
- URL: http://arxiv.org/abs/2401.09767v2
- Date: Sat, 20 Jan 2024 10:36:40 GMT
- Title: On the Effectiveness of Function-Level Vulnerability Detectors for
Inter-Procedural Vulnerabilities
- Authors: Zhen Li, Ning Wang, Deqing Zou, Yating Li, Ruqian Zhang, Shouhuai Xu,
Chao Zhang, Hai Jin
- Abstract summary: We propose a tool dubbed VulTrigger for identifying vulnerability-triggering statements across functions.
Experimental results show that VulTrigger can effectively identify vulnerability-triggering statements and inter-procedural vulnerabilities.
Our findings include: (i) inter-procedural vulnerabilities are prevalent with an average of 2.8 inter-procedural layers; and (ii) function-level vulnerability detectors are much less effective in detecting to-be-patched functions of inter-procedural vulnerabilities.
- Score: 28.57872406228216
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Software vulnerabilities are a major cyber threat and it is important to
detect them. One important approach to detecting vulnerabilities is to use deep
learning while treating a program function as a whole, known as function-level
vulnerability detectors. However, the limitations of this approach are not
well understood. In this paper, we investigate its limitations in detecting one class
of vulnerabilities known as inter-procedural vulnerabilities, where the
to-be-patched statements and the vulnerability-triggering statements belong to
different functions. For this purpose, we create the first Inter-Procedural
Vulnerability Dataset (InterPVD) based on C/C++ open-source software, and we
propose a tool dubbed VulTrigger for identifying vulnerability-triggering
statements across functions. Experimental results show that VulTrigger can
effectively identify vulnerability-triggering statements and inter-procedural
vulnerabilities. Our findings include: (i) inter-procedural vulnerabilities are
prevalent with an average of 2.8 inter-procedural layers; and (ii)
function-level vulnerability detectors are much less effective in detecting
to-be-patched functions of inter-procedural vulnerabilities than detecting
their counterparts of intra-procedural vulnerabilities.
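To make the setting concrete, here is a minimal, hypothetical C sketch of an inter-procedural vulnerability in the paper's sense (the function names, buffer size, and fix location are illustrative assumptions, not drawn from InterPVD): the vulnerability-triggering statement sits in a callee, while the statement a patch would change sits in the caller, i.e., the to-be-patched function.

    /* Hypothetical sketch (not from InterPVD): the triggering statement and
     * the to-be-patched statement live in different functions. */
    #include <string.h>

    #define BUF_SIZE 64

    /* Callee: contains the vulnerability-triggering statement; it looks
     * benign in isolation because it trusts the caller-supplied length. */
    static void copy_payload(char *dst, const char *src, size_t len)
    {
        memcpy(dst, src, len);            /* triggering statement */
    }

    /* Caller: the to-be-patched function; a fix would bound len here,
     * e.g. reject the input when len > BUF_SIZE, before the call. */
    void handle_packet(const char *payload, size_t len)
    {
        char buf[BUF_SIZE];
        copy_payload(buf, payload, len);  /* len may be attacker-controlled */
    }

A function-level detector that sees only copy_payload has no evidence that len can exceed BUF_SIZE, and one that sees only handle_packet never sees the memcpy; this is the blind spot the paper quantifies.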
Related papers
- C2P-CLIP: Injecting Category Common Prompt in CLIP to Enhance Generalization in Deepfake Detection [98.34703790782254]
We introduce Category Common Prompt CLIP, which integrates the category common prompt into the text encoder to inject category-related concepts into the image encoder.
Our method achieves a 12.41% improvement in detection accuracy compared to the original CLIP, without introducing additional parameters during testing.
arXiv Detail & Related papers (2024-08-19T02:14:25Z) - VulEval: Towards Repository-Level Evaluation of Software Vulnerability Detection [14.312197590230994]
A repository-level evaluation system named VulEval aims at evaluating the detection performance on inter- and intra-procedural vulnerabilities simultaneously.
VulEval consists of a large-scale dataset with a total of 4,196 CVE entries, 232,239 functions, and 4,699 corresponding repository-level source code files in the C/C++ programming languages.
arXiv Detail & Related papers (2024-04-24T02:16:11Z) - Enhancing Code Vulnerability Detection via Vulnerability-Preserving Data Augmentation [29.72520866016839]
Source code vulnerability detection aims to identify inherent vulnerabilities to safeguard software systems from potential attacks.
Many prior studies overlook diverse vulnerability characteristics, simplifying the problem into a binary (0-1) classification task.
FGVulDet employs multiple classifiers to discern characteristics of various vulnerability types and combines their outputs to identify the specific type of vulnerability.
FGVulDet is trained on a large-scale dataset from GitHub, encompassing five different types of vulnerabilities.
arXiv Detail & Related papers (2024-04-15T09:10:52Z) - LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning [49.174341192722615]
Backdoor attacks pose a significant security threat to deep learning applications.
Recent papers have introduced attacks using sample-specific invisible triggers crafted through special transformation functions.
We introduce a novel backdoor attack LOTUS to address both evasiveness and resilience.
arXiv Detail & Related papers (2024-03-25T21:01:29Z) - Toward Improved Deep Learning-based Vulnerability Detection [6.212044762686268]
Vulnerabilities in datasets have to be represented in a certain way, e.g., code lines, functions, or program slices within which the vulnerabilities exist.
The detectors learn how base units can be vulnerable and then predict whether other base units are vulnerable.
We have hypothesized that this focus on individual base units harms the ability of the detectors to properly detect those vulnerabilities that span multiple base units.
We present our study and a framework that can be used to help DL-based detectors properly account for multi-base-unit (MBU) vulnerabilities; a sketch after this list illustrates such a case.
arXiv Detail & Related papers (2024-03-05T14:57:28Z) - The Vulnerability Is in the Details: Locating Fine-grained Information of Vulnerable Code Identified by Graph-based Detectors [33.395068754566935]
VULEXPLAINER is a tool for locating vulnerability-critical code lines from coarse-level vulnerable code snippets.
It can flag the vulnerability-triggering code statements with an accuracy of around 90% against eight common C/C++ vulnerabilities.
arXiv Detail & Related papers (2024-01-05T10:15:04Z) - Mate! Are You Really Aware? An Explainability-Guided Testing Framework
for Robustness of Malware Detectors [49.34155921877441]
We propose an explainability-guided and model-agnostic testing framework for robustness of malware detectors.
We then use this framework to test several state-of-the-art malware detectors' abilities to detect manipulated malware.
Our findings shed light on the limitations of current malware detectors, as well as how they can be improved.
arXiv Detail & Related papers (2021-11-19T08:02:38Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z) - Autosploit: A Fully Automated Framework for Evaluating the
Exploitability of Security Vulnerabilities [47.748732208602355]
Autosploit is an automated framework for evaluating the exploitability of vulnerabilities.
It automatically tests the exploits on different configurations of the environment.
It is able to identify the system properties that affect the ability to exploit a vulnerability in both noiseless and noisy environments.
arXiv Detail & Related papers (2020-06-30T18:49:18Z) - $\mu$VulDeePecker: A Deep Learning-Based System for Multiclass
Vulnerability Detection [24.98991662345816]
We propose the first deep learning-based system for multiclass vulnerability detection, dubbed $\mu$VulDeePecker.
The key insight underlying $\mu$VulDeePecker is the concept of code attention, which can capture information that can help pinpoint types of vulnerabilities.
Experiments show that $\mu$VulDeePecker is effective for multiclass vulnerability detection and that accommodating control-dependence can lead to higher detection capabilities.
arXiv Detail & Related papers (2020-01-08T01:47:22Z)
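As referenced in the "Toward Improved Deep Learning-based Vulnerability Detection" entry above, the following is a minimal, hypothetical C sketch of a vulnerability that spans multiple base units (the functions, constants, and wrap-around behavior are illustrative assumptions, not drawn from that paper's dataset): each function, taken as its own base unit, looks plausible, and the flaw only emerges from their combination.

    /* Hypothetical sketch: a vulnerability spanning two base units (two
     * functions); neither unit is clearly vulnerable on its own. */
    #include <stdlib.h>
    #include <string.h>

    /* Base unit 1: the computed size silently wraps for large inputs. */
    static unsigned short record_size(unsigned int n_fields)
    {
        return (unsigned short)(n_fields * 8u);   /* wraps once n_fields * 8 exceeds 65535 */
    }

    /* Base unit 2: allocates with the (possibly wrapped) size but writes the
     * full n_fields * 8 bytes; the overflow only exists across both units. */
    char *build_record(unsigned int n_fields)
    {
        char *buf = malloc(record_size(n_fields));
        if (buf == NULL)
            return NULL;
        memset(buf, 0, (size_t)n_fields * 8u);    /* out of bounds once the size has wrapped */
        return buf;
    }

Labeling either function in isolation as vulnerable or benign loses the cross-unit dependency that produces the heap overflow, which is the kind of MBU case that motivates going beyond single base units.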
This list is automatically generated from the titles and abstracts of the papers on this site.