Model Inversion Attacks against Graph Neural Networks
- URL: http://arxiv.org/abs/2209.07807v2
- Date: Mon, 19 Sep 2022 00:58:32 GMT
- Title: Model Inversion Attacks against Graph Neural Networks
- Authors: Zaixi Zhang, Qi Liu, Zhenya Huang, Hao Wang, Chee-Kong Lee, Enhong
Chen
- Abstract summary: We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that existing defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
- Score: 65.35955643325038
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Many data mining tasks rely on graphs to model relational structures among
individuals (nodes). Since relational data are often sensitive, there is an
urgent need to evaluate the privacy risks in graph data. One famous privacy
attack against data analysis models is the model inversion attack, which aims
to infer sensitive data in the training dataset and leads to great privacy
concerns. Despite its success in grid-like domains, directly applying model
inversion attacks to non-grid domains such as graphs leads to poor attack
performance. This is mainly due to the failure to consider the unique
properties of graphs. To bridge this gap, in this paper we conduct a systematic
study of model inversion attacks against Graph Neural Networks (GNNs), one of
the state-of-the-art graph analysis tools. Firstly, in the white-box
setting where the attacker has full access to the target GNN model, we present
GraphMI to infer the private training graph data. Specifically, in GraphMI, a
projected gradient module is proposed to tackle the discreteness of graph edges
and preserve the sparsity and smoothness of graph features; a graph
auto-encoder module is used to efficiently exploit graph topology, node
attributes, and target model parameters for edge inference; and a random
sampling module finally samples the discrete edges. Furthermore, in the hard-label
black-box setting where the attacker can only query the GNN API and receive the
classification results, we propose two methods based on gradient estimation and
reinforcement learning (RL-GraphMI). Our experimental results show that
existing defenses are not sufficiently effective and call for more advanced
defenses against privacy attacks.
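The white-box attack loop described in the abstract can be illustrated with a short, hedged sketch. The helper `target_loss` (a differentiable loss computed through the target GNN on the known training labels) and the regularizer weights are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def graphmi_sketch(target_loss, num_nodes, node_feat=None,
                   steps=200, lr=0.1, sparsity_w=1e-3, smooth_w=1e-3):
    """Rough sketch of GraphMI-style projected-gradient graph reconstruction.

    target_loss(adj) is assumed to return a scalar, differentiable attack loss
    computed through the white-box target GNN; it stands in for the paper's
    objective and is not the authors' exact implementation.
    """
    # Continuous relaxation of the adjacency matrix, initialized at 0.5.
    a = torch.full((num_nodes, num_nodes), 0.5, requires_grad=True)

    for _ in range(steps):
        adj = (a + a.t()) / 2                       # keep the graph symmetric
        loss = target_loss(adj)
        loss = loss + sparsity_w * adj.sum()        # sparsity regularizer
        if node_feat is not None:
            # Feature smoothness: connected nodes should have similar attributes.
            diff = node_feat.unsqueeze(0) - node_feat.unsqueeze(1)   # (N, N, d)
            loss = loss + smooth_w * (adj * diff.pow(2).sum(-1)).sum()
        loss.backward()
        with torch.no_grad():
            a -= lr * a.grad                        # gradient step
            a.clamp_(0.0, 1.0)                      # projection onto [0, 1]
            a.grad.zero_()

    # Random-sampling module: draw a discrete graph from the edge probabilities.
    probs = ((a + a.t()) / 2).detach()
    probs.fill_diagonal_(0.0)                       # no self-loops
    return torch.bernoulli(probs)
```

In the hard-label black-box setting mentioned above, the true gradient is unavailable; the same loop would instead rely on a zeroth-order (finite-difference) estimate built from API queries, or on a learned policy as in RL-GraphMI.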
Related papers
- Efficient Model-Stealing Attacks Against Inductive Graph Neural Networks [4.011211534057715]
Graph Neural Networks (GNNs) are recognized as potent tools for processing real-world data organized in graph structures.
Inductive GNNs, which allow for the processing of graph-structured data without relying on predefined graph structures, are becoming increasingly important in a wide range of applications.
This paper introduces a new method for performing unsupervised model-stealing attacks against inductive GNNs.
arXiv Detail & Related papers (2024-05-20T18:01:15Z)
- GraphGuard: Detecting and Counteracting Training Data Misuse in Graph Neural Networks [69.97213941893351]
The emergence of Graph Neural Networks (GNNs) in graph data analysis has raised critical concerns about data misuse during model training.
Existing methodologies address either data misuse detection or mitigation, and are primarily designed for local GNN models.
This paper introduces GraphGuard, a pioneering approach to tackling these challenges.
arXiv Detail & Related papers (2023-12-13T02:59:37Z)
- EDoG: Adversarial Edge Detection For Graph Neural Networks [17.969573886307906]
Graph Neural Networks (GNNs) have been widely applied to different tasks such as bioinformatics, drug design, and social networks.
Recent studies have shown that GNNs are vulnerable to adversarial attacks which aim to mislead the node or subgraph classification prediction by adding subtle perturbations.
We propose EDoG, a general adversarial edge detection pipeline based on graph generation that does not require knowledge of the attack strategies.
arXiv Detail & Related papers (2022-12-27T20:42:36Z)
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely the Graph Injection Attack (GIA).
We propose CHAGNN, a general defense framework against GIA based on cooperative homophilous augmentation of the graph data and the model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- Privacy-Preserved Neural Graph Similarity Learning [99.78599103903777]
We propose a novel Privacy-Preserving neural Graph Matching network model, named PPGM, for graph similarity learning.
To prevent reconstruction attacks, the proposed model does not communicate node-level representations between devices.
To alleviate attacks on graph properties, obfuscated features that contain information from both vectors are communicated.
arXiv Detail & Related papers (2022-10-21T04:38:25Z)
- A Hard Label Black-box Adversarial Attack Against Graph Neural Networks [25.081630882605985]
We conduct a systematic study on adversarial attacks against GNNs for graph classification via perturbing the graph structure.
We formulate our attack as an optimization problem whose objective is to minimize the number of edges perturbed in a graph while maintaining a high attack success rate.
Our experimental results on three real-world datasets demonstrate that our attack can effectively attack representative GNNs for graph classification with fewer queries and perturbations.
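As a rough illustration of the query-based, hard-label setting this entry describes, the sketch below performs a plain random search over edge flips and keeps the smallest perturbation that changes the predicted label. The `predict` oracle and the search strategy are illustrative assumptions, not the paper's actual optimization procedure.

```python
import random

def hard_label_random_attack(predict, edges, candidates, budget=5, max_queries=500):
    """Toy hard-label black-box attack on graph classification.

    predict(edge_set) returns only the predicted class label, standing in for
    the target GNN's query API; `candidates` lists node pairs the attacker may
    flip. This is a generic random-search sketch, not the paper's algorithm.
    """
    original = predict(frozenset(edges))
    best = None
    for _ in range(max_queries):
        k = random.randint(1, budget)               # try small perturbations
        flips = random.sample(candidates, k)
        trial = set(edges)
        for e in flips:
            trial.symmetric_difference_update({e})  # add edge if absent, else remove it
        if predict(frozenset(trial)) != original:   # hard-label feedback only
            if best is None or len(flips) < len(best[1]):
                best = (trial, flips)               # keep the smallest successful flip set
    return best
```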
arXiv Detail & Related papers (2021-08-21T14:01:34Z)
- Training Robust Graph Neural Networks with Topology Adaptive Edge Dropping [116.26579152942162]
Graph neural networks (GNNs) are processing architectures that exploit graph structural information to model representations from network data.
Despite their success, GNNs suffer from sub-optimal generalization performance given limited training data.
This paper proposes Topology Adaptive Edge Dropping to improve generalization performance and learn robust GNN models.
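The blurb does not describe how the per-edge drop probabilities are chosen; the sketch below shows only the generic edge-dropping step applied at each training iteration, with the topology-adaptive probabilities abstracted into a `drop_prob` tensor (an assumption for illustration).

```python
import torch

def drop_edges(edge_index, drop_prob):
    """Randomly drop edges for one training step (generic sketch).

    edge_index: LongTensor of shape (2, E) listing the graph's edges.
    drop_prob:  tensor of shape (E,) with a drop probability per edge; the
                topology-adaptive choice of these probabilities is not
                reproduced here.
    """
    keep = torch.rand(edge_index.shape[1]) >= drop_prob   # Bernoulli keep mask
    return edge_index[:, keep]

# Usage: at every training step, feed the GNN drop_edges(edge_index, drop_prob)
# so that the model does not over-rely on any individual edge.
```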
arXiv Detail & Related papers (2021-06-05T13:20:36Z)
- GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract the private training graph by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
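A graph auto-encoder typically scores candidate edges with an inner-product decoder over node embeddings. The minimal sketch below shows only that generic decoding step, assuming the embeddings (built in GraphMI from node attributes and the target model's parameters) are already given.

```python
import torch

def decode_edge_probabilities(node_embeddings):
    """Generic inner-product decoder of a graph auto-encoder.

    node_embeddings: (N, d) tensor of node embeddings, assumed to come from an
    encoder that uses node attributes and the target GNN's parameters.
    The probability of edge (i, j) is sigmoid(z_i . z_j).
    """
    z = node_embeddings
    logits = z @ z.t()                              # pairwise inner products
    probs = torch.sigmoid(logits)                   # edge probabilities in (0, 1)
    mask = 1.0 - torch.eye(z.shape[0])              # zero out self-loops
    return probs * mask
```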
arXiv Detail & Related papers (2021-06-05T07:07:52Z)
- Adversarial Attack on Hierarchical Graph Pooling Neural Networks [14.72310134429243]
We study the robustness of graph neural networks (GNNs) for graph classification tasks.
In this paper, we propose an adversarial attack framework for the graph classification task.
To the best of our knowledge, this is the first work on adversarial attacks against hierarchical GNN-based graph classification models.
arXiv Detail & Related papers (2020-05-23T16:19:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.