Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation
- URL: http://arxiv.org/abs/2211.08068v1
- Date: Tue, 15 Nov 2022 11:44:31 GMT
- Title: Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation
- Authors: Zhihao Zhu, Chenwang Wu, Min Zhou, Hao Liao, Defu Lian, Enhong Chen
- Abstract summary: Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, the Graph Injection Attack (GIA).
We propose a general defense framework, CHAGNN, that resists GIA through cooperative homophilous augmentation of graph data and model.
- Score: 60.50994154879244
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent studies show that Graph Neural Networks (GNNs) are
vulnerable and easily fooled by small perturbations, which has raised
considerable concern about adopting GNNs in various safety-critical
applications. In this work, we focus on an emerging but critical attack, the
Graph Injection Attack (GIA), in which the adversary poisons the graph by
injecting fake nodes instead of modifying existing structures or node
attributes. Inspired by findings that adversarial attacks are related to
increased heterophily on perturbed graphs (the adversary tends to connect
dissimilar nodes), we propose a general defense framework, CHAGNN, that
counters GIA through cooperative homophilous augmentation of graph data and
model. Specifically, the model generates pseudo-labels for unlabeled nodes in
each round of training, which are used to remove heterophilous edges between
nodes with distinct labels. The cleaner graph is fed back to the model,
producing more informative pseudo-labels. In this iterative manner, model
robustness is progressively enhanced. We present a theoretical analysis of the
effect of homophilous augmentation and provide a guarantee of the proposal's
validity. Experimental results empirically demonstrate the effectiveness of
CHAGNN in comparison with recent state-of-the-art defense methods on diverse
real-world datasets.
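To make the cooperative cycle concrete, here is a minimal sketch of the
train / pseudo-label / prune / retrain loop the abstract describes. The helper
names (`fit`, `predict_proba`), the confidence threshold `tau`, and the dense
adjacency representation are illustrative assumptions, not the paper's actual
interface.

```python
import numpy as np

def homophilous_augmentation(adj, labels, conf, tau=0.9):
    """Drop edges whose endpoints carry distinct, confidently predicted
    labels (the heterophilous edges the abstract targets)."""
    adj = adj.copy()
    rows, cols = np.nonzero(np.triu(adj, k=1))
    for i, j in zip(rows, cols):
        if labels[i] != labels[j] and min(conf[i], conf[j]) >= tau:
            adj[i, j] = adj[j, i] = 0.0      # remove the suspicious edge
    return adj

def chagnn_style_loop(adj, features, y, train_mask, fit, predict_proba, rounds=5):
    """Cooperative cycle: model -> pseudo-labels -> cleaner graph -> model.
    `fit` and `predict_proba` stand in for any GNN trainer; they are
    placeholders, not the paper's API."""
    model = None
    for _ in range(rounds):
        model = fit(adj, features, y, train_mask)    # (re)train on current graph
        proba = predict_proba(model, adj, features)  # N x C class probabilities
        pseudo, conf = proba.argmax(axis=1), proba.max(axis=1)
        pseudo[train_mask] = y[train_mask]           # keep ground-truth labels
        conf[train_mask] = 1.0
        adj = homophilous_augmentation(adj, pseudo, conf)  # prune heterophilous edges
    return model, adj
```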
Related papers
- HGAttack: Transferable Heterogeneous Graph Adversarial Attack [63.35560741500611]
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray-box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z)
- Universally Robust Graph Neural Networks by Preserving Neighbor Similarity [5.660584039688214]
We introduce a novel robust model, NSPGNN, which incorporates a dual-kNN graph pipeline to supervise neighbor-similarity-guided propagation.
Experiments on both homophilic and heterophilic graphs validate the universal robustness of NSPGNN compared to state-of-the-art methods.
arXiv Detail & Related papers (2024-01-18T06:57:29Z)
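The summary only names a dual-kNN pipeline, so the sketch below shows one
plausible reading: a "positive" kNN graph over each node's most feature-similar
peers and a "negative" graph over its least similar ones, which could then
supervise similarity-guided propagation. The cosine-similarity choice and the
whole construction are assumptions, not NSPGNN's verbatim design.

```python
import numpy as np

def dual_knn_graphs(features, k=5):
    """Build two kNN graphs from cosine feature similarity: a 'positive'
    graph of each node's k most similar peers and a 'negative' graph of
    its k least similar ones (our reading of the dual-kNN pipeline)."""
    x = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    sim = x @ x.T
    np.fill_diagonal(sim, -np.inf)           # keep self out of the rankings
    order = np.argsort(-sim, axis=1)         # descending similarity; self lands last
    n = sim.shape[0]
    pos, neg = np.zeros((n, n)), np.zeros((n, n))
    for i in range(n):
        pos[i, order[i, :k]] = 1.0           # homophilous guide edges
        neg[i, order[i, -k - 1:-1]] = 1.0    # heterophilous contrast edges (skip self)
    return np.maximum(pos, pos.T), np.maximum(neg, neg.T)

# Usage: pos_g could reinforce propagation while neg_g penalizes it.
pos_g, neg_g = dual_knn_graphs(np.random.rand(10, 16), k=3)
```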
- GANI: Global Attacks on Graph Neural Networks via Imperceptible Node Injections [20.18085461668842]
Graph neural networks (GNNs) have found successful applications in various graph-related tasks.
Recent studies have shown that many GNNs are vulnerable to adversarial attacks.
In this paper, we focus on a realistic attack operation: injecting fake nodes.
arXiv Detail & Related papers (2022-10-23T02:12:26Z)
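Node injection itself is easy to picture: the attacker grows the graph rather
than rewiring it. Below is a generic sketch of injecting one fake node; the
mean-of-targets attribute heuristic is purely illustrative, whereas GANI
optimizes the injected attributes and edges for imperceptibility.

```python
import numpy as np

def inject_fake_node(adj, features, target_ids):
    """Append one fake node wired to the chosen targets.  Copying the mean
    attributes of its victims is a simple plausibility heuristic, not
    GANI's optimization procedure."""
    n = adj.shape[0]
    new_adj = np.zeros((n + 1, n + 1), dtype=adj.dtype)
    new_adj[:n, :n] = adj
    new_adj[n, target_ids] = new_adj[target_ids, n] = 1.0   # connect fake node
    fake_feat = features[target_ids].mean(axis=0, keepdims=True)
    return new_adj, np.vstack([features, fake_feat])

# Usage: poison a 4-node path graph by attaching a fake node to nodes 0 and 2.
adj = np.eye(4, k=1) + np.eye(4, k=-1)
adj_p, feat_p = inject_fake_node(adj, np.random.rand(4, 8), [0, 2])
```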
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that existing defenses are not sufficiently effective, calling for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- What Does the Gradient Tell When Attacking the Graph Structure [44.44204591087092]
We present a theoretical demonstration revealing that attackers tend to increase inter-class edges due to the message-passing mechanism of GNNs.
By connecting dissimilar nodes, attackers can corrupt node features more effectively, making such attacks more advantageous.
We propose an attack loss that balances attack effectiveness and imperceptibility, sacrificing some effectiveness to attain greater imperceptibility.
arXiv Detail & Related papers (2022-08-26T15:45:20Z)
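The inter-class-edge finding above can be made concrete with the standard edge
homophily ratio: the fraction of edges joining same-label endpoints. An attack
that connects dissimilar nodes pushes this ratio down, which is exactly the
signal CHAGNN's augmentation exploits. This is the textbook definition, not
code from either paper.

```python
import numpy as np

def edge_homophily(adj, labels):
    """Fraction of edges joining same-label endpoints.  Structural attacks
    that add inter-class edges lower this value."""
    rows, cols = np.nonzero(np.triu(adj, k=1))
    if rows.size == 0:
        return 1.0
    return float(np.mean(labels[rows] == labels[cols]))

# Example: one injected inter-class edge lowers the ratio.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
labels = np.array([0, 0, 1, 1])
print(edge_homophily(adj, labels))   # 2 same-label edges of 3 -> 0.667
adj[0, 3] = adj[3, 0] = 1.0          # attacker connects dissimilar nodes
print(edge_homophily(adj, labels))   # 2 of 4 -> 0.5
```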
- Jointly Attacking Graph Neural Network and its Explanations [50.231829335996814]
Graph Neural Networks (GNNs) have boosted the performance of many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead the GNNs' prediction by modifying graphs.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
arXiv Detail & Related papers (2021-08-07T07:44:33Z)
- Improving Robustness of Graph Neural Networks with Heterophily-Inspired Designs [18.524164548051417]
Many graph neural networks (GNNs) are sensitive to adversarial attacks, and can suffer from performance loss if the graph structure is intentionally perturbed.
We show that in the standard scenario in which node features exhibit homophily, impactful structural attacks always lead to increased levels of heterophily.
We present two designs, (i) separate aggregators for ego- and neighbor-embeddings and (ii) a reduced scope of aggregation, that can significantly improve the robustness of GNNs.
arXiv Detail & Related papers (2021-06-14T21:39:36Z)
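Design (i) is concrete enough to sketch: transform the ego embedding and the
aggregated neighbor embedding with separate weights before combining, so a
poisoned neighborhood cannot overwrite a node's own signal. The layer below is
a minimal PyTorch reading of that idea; the dimensions and the additive
combination rule are our choices, not the paper's.

```python
import torch
import torch.nn as nn

class EgoNeighborLayer(nn.Module):
    """Separate weights for the ego embedding and the mean neighbor
    embedding (design (i) above, as we read it), so heterophilous
    neighborhoods can be down-weighted rather than mixed into a node's
    own representation."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w_ego = nn.Linear(in_dim, out_dim)
        self.w_nbr = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        nbr = adj @ x / deg              # mean over neighbors only (no self-loop)
        return torch.relu(self.w_ego(x) + self.w_nbr(nbr))

# Usage: a 2-node graph with one edge.
layer = EgoNeighborLayer(8, 16)
adj = torch.tensor([[0.0, 1.0], [1.0, 0.0]])
out = layer(torch.randn(2, 8), adj)      # shape (2, 16)
```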
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior backdoor attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
- Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework, Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
arXiv Detail & Related papers (2020-05-20T17:07:05Z)
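Joint structure-and-model learning of this kind typically treats the adjacency
as a learnable variable regularized toward the observed graph. The sketch below
shows such an objective with proximity, sparsity, and feature-smoothness terms;
the weights are illustrative, and Pro-GNN's full objective also includes a
low-rank (nuclear-norm) term and alternates these steps with the GNN's task
loss.

```python
import torch

def structure_loss(s, a, x, alpha=1e-3, beta=1.0):
    """Regularizers in the spirit of joint structure learning: stay close
    to the observed graph, prefer sparse edges, and keep features smooth
    across retained edges.  Terms and weights are illustrative, not
    Pro-GNN's exact objective."""
    proximity = ((s - a) ** 2).sum()      # do not drift from observations
    sparsity = s.abs().sum()              # prune spurious edges
    diff = torch.cdist(x, x) ** 2         # pairwise feature distances
    smoothness = (s * diff).sum()         # penalize edges between dissimilar nodes
    return proximity + alpha * sparsity + beta * smoothness

# Alternating scheme (sketch): one step on the graph, then one on the model.
a = torch.tensor([[0.0, 1.0], [1.0, 0.0]])
x = torch.randn(2, 4)
s = a.clone().requires_grad_(True)
opt = torch.optim.SGD([s], lr=0.01)
opt.zero_grad()
loss = structure_loss(s, a, x)   # in full training, add the GNN's task loss here
loss.backward()
opt.step()
```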
This list is automatically generated from the titles and abstracts of the papers on this site.