On Strengthening and Defending Graph Reconstruction Attack with Markov
Chain Approximation
- URL: http://arxiv.org/abs/2306.09104v1
- Date: Thu, 15 Jun 2023 13:00:56 GMT
- Title: On Strengthening and Defending Graph Reconstruction Attack with Markov
Chain Approximation
- Authors: Zhanke Zhou, Chenyu Zhou, Xuan Li, Jiangchao Yao, Quanming Yao, Bo Han
- Abstract summary: We perform the first comprehensive study of graph reconstruction attacks, which aim to reconstruct the adjacency of nodes.
We show that a range of factors in GNNs can lead to the surprising leakage of private links.
We propose two information theory-guided mechanisms: (1) the chain-based attack method with adaptive designs for extracting more private information; (2) the chain-based defense method that sharply reduces the attack fidelity with moderate accuracy loss.
- Score: 40.21760151203987
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although powerful graph neural networks (GNNs) have boosted numerous
real-world applications, the potential privacy risk is still underexplored. To
close this gap, we perform the first comprehensive study of graph
reconstruction attack that aims to reconstruct the adjacency of nodes. We show
that a range of factors in GNNs can lead to the surprising leakage of private
links. In particular, by viewing GNNs as a Markov chain and attacking them via a
flexible chain approximation, we systematically explore the underlying
principles of graph reconstruction attacks and propose two information
theory-guided mechanisms: (1) the chain-based attack method with adaptive
designs for extracting more private information; (2) the chain-based defense
method that sharply reduces the attack fidelity with moderate accuracy loss.
These two objectives reflect a critical insight: to attack more effectively,
one must extract more multi-aspect knowledge from the trained GNN, while to
defend more safely, one must forget more link-sensitive
information when training GNNs. Empirically, we achieve state-of-the-art results
on six datasets and three common GNNs. The code is publicly available at:
https://github.com/tmlr-group/MC-GRA.
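The abstract describes the attack only at a high level. As a rough illustration of the general idea of graph reconstruction (recovering a hidden adjacency by making a frozen GNN reproduce the outputs an attacker has observed), the sketch below relaxes the unknown adjacency to a dense matrix in [0, 1] and optimizes it with gradient descent. The toy one-layer GCN, the output-matching loss, and all names are illustrative assumptions; this is not the authors' MC-GRA implementation from the linked repository.

```python
# Minimal sketch: reconstruct an adjacency matrix by matching a frozen GNN's
# observed node outputs. Illustrative only; not the MC-GRA method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyGCN(nn.Module):
    """One-layer GCN-style model: H = ReLU(norm(A) X W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, adj, x):
        adj = adj + torch.eye(adj.size(0))             # add self-loops
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1e-6)
        return F.relu((adj / deg) @ self.lin(x))       # row-normalized propagation

def reconstruct_adjacency(model, x, target_out, steps=200, lr=0.1):
    """Recover a plausible adjacency by matching the victim model's outputs."""
    n = x.size(0)
    logits = torch.zeros(n, n, requires_grad=True)     # relaxed adjacency (pre-sigmoid)
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        adj = torch.sigmoid(logits)
        adj = (adj + adj.t()) / 2                      # keep the graph undirected
        loss = F.mse_loss(model(adj, x), target_out)   # match observed outputs
        loss = loss + 1e-3 * adj.mean()                # mild sparsity prior
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (torch.sigmoid(logits) > 0.5).float()       # threshold to hard edges

if __name__ == "__main__":
    torch.manual_seed(0)
    n, d = 8, 4
    true_adj = (torch.rand(n, n) < 0.3).float()
    true_adj = ((true_adj + true_adj.t()) > 0).float()
    x = torch.randn(n, d)
    victim = ToyGCN(d, 3)                              # stands in for the trained GNN
    with torch.no_grad():
        observed = victim(true_adj, x)                 # what the attacker observes
    recovered = reconstruct_adjacency(victim, x, observed)
    print("edge-recovery agreement:", (recovered == true_adj).float().mean().item())
```

Per the abstract, the actual attack extracts multi-aspect knowledge from the trained GNN via its Markov-chain structure; the sketch above matches only final outputs and is meant purely to make the reconstruction objective concrete.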
Related papers
- IDEA: A Flexible Framework of Certified Unlearning for Graph Neural Networks [68.6374698896505]
Graph Neural Networks (GNNs) have been increasingly deployed in a plethora of applications.
Privacy leakage may happen when the trained GNNs are deployed and exposed to potential attackers.
We propose a principled framework named IDEA to achieve flexible and certified unlearning for GNNs.
arXiv Detail & Related papers (2024-07-28T04:59:59Z)
- Link Stealing Attacks Against Inductive Graph Neural Networks [60.931106032824275]
A graph neural network (GNN) is a type of neural network that is specifically designed to process graph-structured data.
Previous work has shown that transductive GNNs are vulnerable to a series of privacy attacks.
This paper conducts a comprehensive privacy analysis of inductive GNNs through the lens of link stealing attacks.
arXiv Detail & Related papers (2024-05-09T14:03:52Z)
- Transferable Graph Backdoor Attack [13.110473828583725]
Graph Neural Networks (GNNs) have achieved tremendous success in many graph mining tasks.
GNNs are found to be vulnerable to unnoticeable perturbations on both graph structure and node features.
In this paper, we disclose the TRAP attack, a Transferable GRAPh backdoor attack.
arXiv Detail & Related papers (2022-06-21T06:25:37Z)
- Robustness of Graph Neural Networks at Scale [63.45769413975601]
We study how to attack and defend Graph Neural Networks (GNNs) at scale.
We propose two sparsity-aware first-order optimization attacks that maintain an efficient representation.
We show that common surrogate losses are not well-suited for global attacks on GNNs.
arXiv Detail & Related papers (2021-10-26T21:31:17Z)
- Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks [43.60973654460398]
Graph Neural Networks (GNNs) are generalizations of neural networks to graph-structured data.
GNNs are vulnerable to adversarial attacks, i.e., a small perturbation to the structure can lead to a non-trivial performance degradation.
We propose Uncertainty Matching GNN (UM-GNN), that is aimed at improving the robustness of GNN models.
arXiv Detail & Related papers (2020-09-30T05:29:42Z)
- GNNGuard: Defending Graph Neural Networks against Adversarial Attacks [16.941548115261433]
We develop GNNGuard, an algorithm to defend against a variety of training-time attacks that perturb the discrete graph structure.
GNNGuard learns how to best assign higher weights to edges connecting similar nodes while pruning edges between unrelated nodes (a rough sketch of this idea appears after this list).
Experiments show that GNNGuard outperforms existing defense approaches by 15.3% on average.
arXiv Detail & Related papers (2020-06-15T06:07:46Z)
- Stealing Links from Graph Neural Networks [72.85344230133248]
Recently, neural networks have been extended to graph data; such models are known as graph neural networks (GNNs).
Due to their superior performance, GNNs have many applications, such as healthcare analytics, recommender systems, and fraud detection.
We propose the first attacks to steal a graph from the outputs of a GNN model that is trained on the graph.
arXiv Detail & Related papers (2020-05-05T13:22:35Z)
- Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies [73.39668293190019]
Deep neural networks can be easily fooled by small perturbations on the input, known as adversarial attacks.
Graph Neural Networks (GNNs) have been demonstrated to inherit this vulnerability.
In this survey, we categorize existing attacks and defenses, and review the corresponding state-of-the-art methods.
arXiv Detail & Related papers (2020-03-02T04:32:38Z)
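The GNNGuard entry above describes its defense as weighting edges between similar nodes and pruning edges between unrelated ones. The sketch below shows only that general idea using cosine similarity over node features; the threshold, the use of raw features, and all names are illustrative assumptions, not GNNGuard's actual algorithm.

```python
# Minimal sketch of similarity-based edge reweighting: keep and up-weight
# edges between similar nodes, prune edges between dissimilar ones.
import torch
import torch.nn.functional as F

def reweight_edges(adj: torch.Tensor, x: torch.Tensor, prune_below: float = 0.1):
    """Return a reweighted adjacency where each kept edge carries the cosine
    similarity of its endpoints and low-similarity edges are pruned."""
    sim = F.cosine_similarity(x.unsqueeze(1), x.unsqueeze(0), dim=-1)  # n x n similarities
    weights = sim.clamp(min=0) * adj                  # keep only existing edges
    weights[weights < prune_below] = 0.0              # prune low-similarity edges
    row_sum = weights.sum(dim=1, keepdim=True).clamp(min=1e-6)
    return weights / row_sum                          # row-normalize for propagation

if __name__ == "__main__":
    torch.manual_seed(0)
    adj = torch.tensor([[0., 1., 1.],
                        [1., 0., 1.],
                        [1., 1., 0.]])
    x = torch.randn(3, 8)
    print(reweight_edges(adj, x))
```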
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.