EDoG: Adversarial Edge Detection For Graph Neural Networks
- URL: http://arxiv.org/abs/2212.13607v1
- Date: Tue, 27 Dec 2022 20:42:36 GMT
- Title: EDoG: Adversarial Edge Detection For Graph Neural Networks
- Authors: Xiaojun Xu, Yue Yu, Hanzhang Wang, Alok Lal, Carl A. Gunter, Bo Li
- Abstract summary: Graph Neural Networks (GNNs) have been widely applied to different tasks such as bioinformatics, drug design, and social networks.
Recent studies have shown that GNNs are vulnerable to adversarial attacks which aim to mislead the node or subgraph classification prediction by adding subtle perturbations.
We propose EDoG, a general adversarial edge detection pipeline based on graph generation that requires no knowledge of the attack strategies.
- Score: 17.969573886307906
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have been widely applied to different tasks such
as bioinformatics, drug design, and social networks. However, recent studies
have shown that GNNs are vulnerable to adversarial attacks which aim to mislead
the node or subgraph classification prediction by adding subtle perturbations.
Detecting these attacks is challenging due to the small magnitude of
perturbation and the discrete nature of graph data. In this paper, we propose EDoG, a general adversarial edge detection pipeline based on graph generation that requires no knowledge of the attack strategies. Specifically, we propose a
novel graph generation approach combined with link prediction to detect
suspicious adversarial edges. To effectively train the graph generative model,
we sample several sub-graphs from the given graph data. We show that, since the number of adversarial edges is usually low in practice, the union bound implies that the sampled sub-graphs contain adversarial edges only with low probability.
In addition, to handle strong attacks that perturb a large number of edges, we propose a set of novel features to perform outlier detection as a preprocessing step for our detection. Extensive experimental results on three
real-world graph datasets including a private transaction rule dataset from a
major company and two types of synthetic graphs with controlled properties show
that EDoG achieves above 0.8 AUC against four state-of-the-art unseen attack strategies without any knowledge of the attack type, and around 0.85 AUC when the attack type is known. EDoG significantly outperforms traditional malicious edge detection baselines. We also show that even an adaptive attack with full knowledge of our detection pipeline has difficulty bypassing it.
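To make the sampling argument concrete: if an attack perturbs k of the graph's m edges and each sampled sub-graph keeps s edges chosen uniformly at random, the union bound gives P(sample contains an adversarial edge) <= k * s / m, which is small when k << m. Below is a minimal sketch of an EDoG-style detection loop, not the authors' implementation: it assumes only networkx, and it substitutes a simple Jaccard-coefficient link predictor and a hand-picked threshold for the paper's graph generative model and novel features.

```python
# edog_sketch.py -- a minimal illustrative sketch, NOT the authors' code.
# Idea (from the abstract): sample sub-graphs that are unlikely to contain
# adversarial edges, run link prediction on them to score every edge, and
# flag edges that the rest of the graph cannot "explain" as suspicious.
import random
import networkx as nx


def sample_edge_subgraph(g: nx.Graph, frac: float, rng: random.Random) -> nx.Graph:
    """Keep a random fraction of edges. If k of the m edges are adversarial,
    the union bound gives P(sample contains one) <= frac * k."""
    edges = list(g.edges())
    kept = rng.sample(edges, max(1, int(frac * len(edges))))
    sub = nx.Graph()
    sub.add_nodes_from(g.nodes())
    sub.add_edges_from(kept)
    return sub


def edge_scores(g: nx.Graph, n_samples: int = 50, frac: float = 0.3,
                seed: int = 0) -> dict:
    """Average link-prediction score of each edge over the sub-graphs from
    which it was held out, i.e. how well the rest of the graph predicts it."""
    rng = random.Random(seed)
    totals = {tuple(sorted(e)): 0.0 for e in g.edges()}
    counts = {tuple(sorted(e)): 0 for e in g.edges()}
    for _ in range(n_samples):
        sub = sample_edge_subgraph(g, frac, rng)
        held_out = [e for e in totals if not sub.has_edge(*e)]
        # Jaccard coefficient of the endpoints' neighborhoods in the sample
        # stands in for the paper's learned link predictor.
        for u, v, score in nx.jaccard_coefficient(sub, held_out):
            key = tuple(sorted((u, v)))
            totals[key] += score
            counts[key] += 1
    return {e: totals[e] / counts[e] for e in totals if counts[e] > 0}


def detect_suspicious(g: nx.Graph, threshold: float = 0.05) -> list:
    """Edges whose average link-prediction score stays below `threshold`."""
    return [e for e, s in edge_scores(g).items() if s < threshold]


if __name__ == "__main__":
    g = nx.karate_club_graph()
    g.add_edge(11, 25)  # inject a cross-community "adversarial" edge
    print(detect_suspicious(g))  # (11, 25) should appear among the flags
```

In the paper's actual pipeline the link predictor is a trained graph generative model and the edge scores feed an outlier-detection preprocessing stage; in this sketch a single fixed threshold plays both roles.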
Related papers
- BOURNE: Bootstrapped Self-supervised Learning Framework for Unified Graph Anomaly Detection [50.26074811655596]
We propose BOURNE, a novel unified graph anomaly detection framework based on bootstrapped self-supervised learning.
By swapping the context embeddings between nodes and edges, we enable the mutual detection of node and edge anomalies.
BOURNE can eliminate the need for negative sampling, thereby enhancing its efficiency in handling large graphs.
arXiv Detail & Related papers (2023-07-28T00:44:57Z)
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely the Graph Injection Attack (GIA).
We propose CHAGNN, a general defense framework against GIA based on cooperative homophilous augmentation of the graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that existing defenses are not sufficiently effective, calling for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- Deep Fraud Detection on Non-attributed Graph [61.636677596161235]
Graph Neural Networks (GNNs) have shown solid performance on fraud detection.
However, labeled data is scarce in large-scale industrial problems, especially for fraud detection.
We propose a novel graph pre-training strategy to leverage more unlabeled data.
arXiv Detail & Related papers (2021-10-04T03:42:09Z)
- A Hard Label Black-box Adversarial Attack Against Graph Neural Networks [25.081630882605985]
We conduct a systematic study on adversarial attacks against GNNs for graph classification via perturbing the graph structure.
We formulate our attack as an optimization problem whose objective is to minimize the number of edges to be perturbed in a graph while maintaining a high attack success rate (a generic formalization is sketched after this list).
Our experimental results on three real-world datasets demonstrate that our attack can effectively attack representative GNNs for graph classification with fewer queries and perturbations.
arXiv Detail & Related papers (2021-08-21T14:01:34Z)
- BinarizedAttack: Structural Poisoning Attacks to Graph-based Anomaly Detection [20.666171188140503]
Graph-based Anomaly Detection (GAD) is becoming prevalent due to the powerful representation abilities of graphs.
These GAD tools expose a new attack surface, ironically because of their unique advantage of being able to exploit the relations among data.
In this paper, we exploit this vulnerability by designing a new type of targeted structural poisoning attack against OddBall, a representative regression-based GAD system.
arXiv Detail & Related papers (2021-06-18T08:20:23Z)
- GraphAttacker: A General Multi-Task GraphAttack Framework [4.218118583619758]
Graph Neural Networks (GNNs) have been successfully exploited in graph analysis tasks in many real-world applications.
However, GNNs are vulnerable to adversarial samples generated by attackers, which achieve strong attack performance with almost imperceptible perturbations.
We propose GraphAttacker, a novel generic graph attack framework that can flexibly adjust the structures and the attack strategies according to the graph analysis tasks.
arXiv Detail & Related papers (2021-01-18T03:06:41Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior backdoor attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
- Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
arXiv Detail & Related papers (2020-05-20T17:07:05Z)
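The hard-label black-box attack above states its objective only in words. A generic formalization, in our notation rather than necessarily the paper's, is the following:

```latex
% Generic formalization of "minimize the number of perturbed edges while
% maintaining a high attack success rate" (our notation, not the paper's).
% A: original adjacency matrix; A': perturbed adjacency matrix;
% f: target GNN graph classifier (hard labels only); y: original label.
\begin{aligned}
  \min_{A'} \quad & \lVert A' - A \rVert_{0}
      && \text{(number of flipped edges)} \\
  \text{s.t.} \quad & f(A') \neq y
      && \text{(the attack succeeds)}
\end{aligned}
```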