Coca: Improving and Explaining Graph Neural Network-Based Vulnerability
Detection Systems
- URL: http://arxiv.org/abs/2401.14886v1
- Date: Fri, 26 Jan 2024 14:14:52 GMT
- Title: Coca: Improving and Explaining Graph Neural Network-Based Vulnerability
Detection Systems
- Authors: Sicong Cao, Xiaobing Sun, Xiaoxue Wu, David Lo, Lili Bo, Bin Li, Wei
Liu
- Abstract summary: Graph Neural Network (GNN)-based vulnerability detection systems have achieved remarkable success.
The lack of explainability poses a critical challenge to deploying black-box models in security-related domains.
We propose Coca, a general framework aiming to enhance the robustness of existing GNN-based vulnerability detection models.
- Score: 16.005996517940964
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, Graph Neural Network (GNN)-based vulnerability detection systems
have achieved remarkable success. However, the lack of explainability poses a
critical challenge to deploying black-box models in security-related domains.
For this reason, several approaches have been proposed to explain the decision
logic of the detection model by providing a set of crucial statements that
contribute positively to its predictions. Unfortunately, due to weakly robust
detection models and suboptimal explanation strategies, these approaches risk
revealing spurious correlations and suffer from redundancy.
In this paper, we propose Coca, a general framework aiming to 1) enhance the
robustness of existing GNN-based vulnerability detection models to avoid
spurious explanations; and 2) provide both concise and effective explanations
to reason about the detected vulnerabilities. Coca consists of two core
parts referred to as Trainer and Explainer. The former aims to train a
detection model which is robust to random perturbation based on combinatorial
contrastive learning, while the latter builds an explainer to derive crucial
code statements that are most decisive to the detected vulnerability via
dual-view causal inference as explanations. We apply Coca to three typical
GNN-based vulnerability detectors. Experimental results show that Coca
effectively mitigates the spurious correlation issue and provides more
useful, high-quality explanations.
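The abstract describes the Trainer only at a high level. As a hedged
illustration of the underlying recipe (perturb a code graph into two views,
embed both with the same GNN, and pull the paired embeddings together with a
contrastive loss), here is a minimal sketch; the encoder, the edge-dropout
perturbation, and the NT-Xent loss are generic stand-ins, not Coca's actual
combinatorial contrastive learning components.

```python
# Minimal sketch of contrastive training for a GNN-based detector, assuming a
# dense adjacency matrix per graph. All components are illustrative stand-ins.
import torch
import torch.nn.functional as F

class GNNEncoder(torch.nn.Module):
    """Two-layer GCN-style encoder with mean pooling (hypothetical)."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.w1 = torch.nn.Linear(in_dim, hid_dim)
        self.w2 = torch.nn.Linear(hid_dim, hid_dim)

    def forward(self, x, adj):
        h = F.relu(self.w1(adj @ x))  # aggregate neighbors, then transform
        h = self.w2(adj @ h)
        return h.mean(dim=0)          # graph-level embedding

def perturb(adj, drop_prob=0.1):
    """Random edge dropout: one simple way to create a perturbed view."""
    return adj * (torch.rand_like(adj) > drop_prob).float()

def nt_xent(z1, z2, tau=0.5):
    """SimCLR-style loss: each view's positive is the other view of its graph."""
    b = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)  # [2B, D]
    sim = (z @ z.t()) / tau                      # pairwise similarities
    sim = sim.masked_fill(torch.eye(2 * b, dtype=torch.bool), float("-inf"))
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(b)])
    return F.cross_entropy(sim, targets)

encoder = GNNEncoder(in_dim=32, hid_dim=64)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
# Toy batch: 8 random graphs standing in for code property graphs.
graphs = [(torch.randn(12, 32), (torch.rand(12, 12) > 0.7).float())
          for _ in range(8)]
for _ in range(5):  # a few training steps on two perturbed views per graph
    z1 = torch.stack([encoder(x, perturb(a)) for x, a in graphs])
    z2 = torch.stack([encoder(x, perturb(a)) for x, a in graphs])
    loss = nt_xent(z1, z2)
    opt.zero_grad(); loss.backward(); opt.step()
```

Intuitively, a model whose embeddings are stable under such random
perturbations is less likely to latch onto spurious, perturbation-sensitive
features, which is the robustness property the abstract says the Explainer
then relies on.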
Related papers
- Graph Neural Networks for Vulnerability Detection: A Counterfactual Explanation [41.831831628421675]
Graph Neural Networks (GNNs) have emerged as a prominent code embedding approach for vulnerability detection.
We propose CFExplainer, a novel counterfactual explainer for GNN-based vulnerability detection (the counterfactual idea is sketched after this entry).
arXiv Detail & Related papers (2024-04-24T06:52:53Z)
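The summary above does not spell out CFExplainer's optimization, but the
counterfactual question itself is easy to state: what is the smallest edit to
the input graph that flips the detector's verdict? Below is a greedy sketch of
that idea; `model(x, adj)` returning class logits and a 0/1 adjacency matrix
are assumed interfaces, not the paper's actual method.

```python
# Greedy counterfactual search: repeatedly delete the single edge whose removal
# most reduces confidence in the original prediction, until the verdict flips.
# Illustrative only; `model` and the binary adjacency are assumptions.
import torch

@torch.no_grad()
def counterfactual_edges(model, x, adj, max_edits=10):
    adj = adj.clone()
    original = model(x, adj).argmax().item()
    removed = []
    for _ in range(max_edits):
        best_edge, best_prob = None, float("inf")
        for i, j in adj.nonzero(as_tuple=False).tolist():
            adj[i, j] = 0.0  # tentatively delete edge (i, j)
            prob = torch.softmax(model(x, adj), dim=-1)[original].item()
            adj[i, j] = 1.0  # restore (assumes binary adjacency)
            if prob < best_prob:
                best_edge, best_prob = (i, j), prob
        if best_edge is None:  # no edges left to try
            break
        adj[best_edge] = 0.0
        removed.append(best_edge)
        if model(x, adj).argmax().item() != original:
            return removed  # these deletions flip the detector's verdict
    return removed  # budget exhausted without flipping
```

The returned edge set serves as a compact, human-checkable explanation of the
verdict, though greedy search rescans every edge per round and scales poorly
compared to a differentiable formulation.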
- Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce an approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z)
- Efficient Network Representation for GNN-based Intrusion Detection [2.321323878201932]
Recent decades have seen a growth in the number of cyber-attacks causing severe economic and privacy damage.
We propose a novel network representation, a graph of flows, that aims to provide relevant topological information for the intrusion detection task (one possible construction is sketched after this entry).
We present a Graph Neural Network (GNN)-based framework that exploits the proposed graph structure.
arXiv Detail & Related papers (2023-09-11T16:10:12Z)
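The summary names the representation but not its construction. As one
plausible reading, the sketch below builds a graph whose nodes are individual
flows and whose edges link flows that share a host; the record fields and the
linking rule are assumptions for illustration, not necessarily the paper's
exact design.

```python
# Hypothetical "graph of flows": nodes are flow records, edges connect flows
# sharing an endpoint, giving a GNN topological context for each flow.
import itertools
import networkx as nx

flows = [  # toy flow records; field names are assumptions
    {"id": 0, "src": "10.0.0.1", "dst": "10.0.0.2", "bytes": 1200},
    {"id": 1, "src": "10.0.0.2", "dst": "10.0.0.3", "bytes": 560},
    {"id": 2, "src": "10.0.0.1", "dst": "10.0.0.3", "bytes": 80},
]

g = nx.Graph()
for f in flows:
    g.add_node(f["id"], bytes=f["bytes"])  # per-flow features live on nodes
for a, b in itertools.combinations(flows, 2):
    if {a["src"], a["dst"]} & {b["src"], b["dst"]}:  # shared host => edge
        g.add_edge(a["id"], b["id"])

print(sorted(g.edges()))  # [(0, 1), (0, 2), (1, 2)] for this toy example
```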
- The Devil is in the Conflict: Disentangled Information Graph Neural Networks for Fraud Detection [17.254383007779616]
We argue that the performance degradation is mainly attributable to the inconsistency between topology and attributes.
We propose a simple and effective method that uses the attention mechanism to adaptively fuse the two views (a generic fusion pattern is sketched after this entry).
Our model can significantly outperform state-of-the-art baselines on real-world fraud detection datasets.
arXiv Detail & Related papers (2022-10-22T08:21:49Z)
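The attention-based fusion mentioned above can be made concrete with a
per-node gate over the two view embeddings; the module below is a common
pattern chosen for illustration, not the paper's exact architecture.

```python
# Per-node attention gate fusing a topology view and an attribute view.
# A generic fusion pattern, assumed for illustration.
import torch
import torch.nn.functional as F

class AttentionFusion(torch.nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = torch.nn.Linear(dim, 1)  # one scalar score per view

    def forward(self, h_topo, h_attr):
        views = torch.stack([h_topo, h_attr], dim=1)         # [N, 2, D]
        w = F.softmax(self.score(torch.tanh(views)), dim=1)  # [N, 2, 1]
        return (w * views).sum(dim=1)                        # [N, D]

fuse = AttentionFusion(dim=16)
fused = fuse(torch.randn(5, 16), torch.randn(5, 16))  # fused embeddings [5, 16]
```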
- On Consistency in Graph Neural Network Interpretation [34.25952902469481]
Instance-level GNN explanation aims to discover critical input elements, like nodes or edges, that the target GNN relies upon for making predictions.
Various algorithms have been proposed, but most formalize this task as a search for the minimal subgraph.
We propose a simple yet effective countermeasure by aligning embeddings.
arXiv Detail & Related papers (2022-05-27T02:58:07Z)
- Jointly Attacking Graph Neural Network and its Explanations [50.231829335996814]
Graph Neural Networks (GNNs) have boosted the performance of many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead the GNNs' prediction by modifying graphs.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
arXiv Detail & Related papers (2021-08-07T07:44:33Z)
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights (a simplified version is sketched after this entry).
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
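The joint-perturbation lens can be illustrated with single signed-gradient
steps applied to both the inputs and the weights. This FGSM-style sketch is a
simplification of the paper's formal treatment; `model` is any classifier
module assumed for illustration.

```python
# Evaluate loss under *joint* perturbation of data inputs and model weights.
# Single signed-gradient steps are an illustrative simplification.
import torch
import torch.nn.functional as F

def joint_perturbation_loss(model, x, y, eps_x=0.01, eps_w=0.01):
    # Step 1: perturb the inputs along the loss gradient.
    x = x.clone().requires_grad_(True)
    grad_x, = torch.autograd.grad(F.cross_entropy(model(x), y), x)
    x_adv = (x + eps_x * grad_x.sign()).detach()

    # Step 2: perturb the weights along the loss gradient as well.
    params = list(model.parameters())
    grads = torch.autograd.grad(F.cross_entropy(model(x_adv), y), params)
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.add_(eps_w * g.sign())
        joint = F.cross_entropy(model(x_adv), y)  # loss under both perturbations
        for p, g in zip(params, grads):
            p.sub_(eps_w * g.sign())              # restore original weights
    return joint.item()
```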
- No Need to Know Physics: Resilience of Process-based Model-free Anomaly Detection for Industrial Control Systems [95.54151664013011]
We present a novel framework to generate adversarial spoofing signals that violate physical properties of the system.
We analyze four anomaly detectors published at top security conferences.
arXiv Detail & Related papers (2020-12-07T11:02:44Z)
- Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior backdoor attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)