Detecting Topology Attacks against Graph Neural Networks
- URL: http://arxiv.org/abs/2204.10072v1
- Date: Thu, 21 Apr 2022 13:08:25 GMT
- Title: Detecting Topology Attacks against Graph Neural Networks
- Authors: Senrong Xu, Yuan Yao, Liangyue Li, Wei Yang, Feng Xu, Hanghang Tong
- Abstract summary: We study the victim node detection problem under topology attacks against GNNs.
Our approach is built upon the key observation rooted in the intrinsic message passing nature of GNNs.
- Score: 39.968619861265395
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) have been widely used in many real applications,
and recent studies have revealed their vulnerabilities against topology
attacks. To address this issue, existing efforts have mainly been dedicated to
improving the robustness of GNNs, while little attention has been paid to the
detection of such attacks. In this work, we study the victim node detection
problem under topology attacks against GNNs. Our approach is built upon the key
observation rooted in the intrinsic message passing nature of GNNs. That is,
the neighborhood of a victim node tends to have two competing group forces,
pushing the node classification results towards the original label and the
targeted label, respectively. Based on this observation, we propose to detect
victim nodes by deliberately designing an effective measurement of the
neighborhood variance for each node. Extensive experimental results on four
real-world datasets and five existing topology attacks show the effectiveness
and efficiency of the proposed detection approach.
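Illustration only: the paper's exact neighborhood-variance measurement is not reproduced here, but a minimal sketch of one plausible score of this kind, computed from a trained GNN's softmax outputs over each node's neighborhood (all function and variable names below are hypothetical assumptions, not the authors' implementation), might look like the following.

```python
import numpy as np

def neighborhood_variance_scores(probs, adj_list):
    """Score each node by how much its neighbors' predictions disagree.

    probs: (N, C) array of softmax class probabilities from a trained GNN.
    adj_list: dict mapping node id -> list of neighbor ids.
    Returns an (N,) array; higher scores suggest competing "group forces"
    in the neighborhood, i.e., potential victim nodes.
    """
    n = probs.shape[0]
    scores = np.zeros(n)
    for v in range(n):
        nbrs = adj_list.get(v, [])
        if len(nbrs) < 2:
            continue  # too few neighbors to measure disagreement
        nbr_probs = probs[nbrs]            # (k, C) neighbor predictions
        mean = nbr_probs.mean(axis=0)      # neighborhood consensus distribution
        # average squared deviation of each neighbor from the consensus
        scores[v] = ((nbr_probs - mean) ** 2).sum(axis=1).mean()
    return scores

if __name__ == "__main__":
    # Example usage with random stand-in predictions (hypothetical data)
    rng = np.random.default_rng(0)
    probs = rng.dirichlet(np.ones(4), size=6)  # 6 nodes, 4 classes
    adj_list = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0, 4, 5], 4: [3], 5: [3]}
    print(neighborhood_variance_scores(probs, adj_list))
```

Higher scores flag nodes whose neighbors push the prediction toward different labels, which is the competing-group-forces signal the abstract describes; in practice one would rank or threshold these scores to obtain victim-node candidates.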
Related papers
- Securing GNNs: Explanation-Based Identification of Backdoored Training Graphs [13.93535590008316]
Graph Neural Networks (GNNs) have gained popularity in numerous domains, yet they are vulnerable to backdoor attacks that can compromise their performance and ethical application.
We present a novel method to detect backdoor attacks in GNNs.
Our results show that our method can achieve high detection performance, marking a significant advancement in safeguarding GNNs against backdoor attacks.
arXiv Detail & Related papers (2024-03-26T22:41:41Z) - HGAttack: Transferable Heterogeneous Graph Adversarial Attack [63.35560741500611]
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z) - Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z) - Resisting Graph Adversarial Attack via Cooperative Homophilous
Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely, Graph Injection Attack.
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z) - GANI: Global Attacks on Graph Neural Networks via Imperceptible Node
Injections [20.18085461668842]
Graph neural networks (GNNs) have found successful applications in various graph-related tasks.
Recent studies have shown that many GNNs are vulnerable to adversarial attacks.
In this paper, we focus on a realistic attack operation via injecting fake nodes.
arXiv Detail & Related papers (2022-10-23T02:12:26Z) - Adversarial Inter-Group Link Injection Degrades the Fairness of Graph
Neural Networks [15.116231694800787]
We present evidence for the existence and effectiveness of adversarial attacks on graph neural networks (GNNs) that aim to degrade fairness.
These attacks can disadvantage a particular subgroup of nodes in GNN-based node classification, where nodes of the underlying network have sensitive attributes, such as race or gender.
We conduct qualitative and experimental analyses explaining how adversarial link injection impairs the fairness of GNN predictions.
arXiv Detail & Related papers (2022-09-13T12:46:57Z) - Adversarial Camouflage for Node Injection Attack on Graphs [64.5888846198005]
Node injection attacks on Graph Neural Networks (GNNs) have received increasing attention recently, due to their ability to degrade GNN performance with high attack success rates.
Our study indicates that these attacks often fail in practical scenarios, since defense/detection methods can easily identify and remove the injected nodes.
To address this, we focus on camouflaging node injection attacks, making injected nodes appear normal and imperceptible to defense/detection methods.
arXiv Detail & Related papers (2022-08-03T02:48:23Z) - Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning
Attacks [43.60973654460398]
Graph Neural Networks (GNNs) are generalizations of neural networks to graph-structured data.
GNNs are vulnerable to adversarial attacks, i.e., a small perturbation to the structure can lead to a non-trivial performance degradation.
We propose Uncertainty Matching GNN (UM-GNN), that is aimed at improving the robustness of GNN models.
arXiv Detail & Related papers (2020-09-30T05:29:42Z) - Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.