GraphMI: Extracting Private Graph Data from Graph Neural Networks
- URL: http://arxiv.org/abs/2106.02820v1
- Date: Sat, 5 Jun 2021 07:07:52 GMT
- Title: GraphMI: Extracting Private Graph Data from Graph Neural Networks
- Authors: Zaixi Zhang, Qi Liu, Zhenya Huang, Hao Wang, Chengqiang Lu, Chuanren
Liu, Enhong Chen
- Abstract summary: We present the Graph Model Inversion attack (GraphMI), which aims to extract the private training graph by inverting a GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
- Score: 59.05178231559796
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As machine learning becomes more widely used for critical applications, the need to study its privacy implications becomes urgent. Given access to the target model and auxiliary information, a model inversion attack aims to infer sensitive features of the training dataset, which raises serious privacy concerns. Despite its success in grid-like domains, directly applying model inversion techniques to non-grid domains such as graphs yields poor attack performance, owing to the difficulty of fully exploiting the intrinsic properties of graphs and the node attributes used in Graph Neural Networks (GNNs). To bridge this gap, we present the Graph Model Inversion attack (GraphMI), which aims to extract the private training graph by inverting a GNN, one of the state-of-the-art graph analysis tools. Specifically, we first propose a projected gradient module that tackles the discreteness of graph edges while preserving the sparsity and smoothness of graph features. We then design a graph auto-encoder module that efficiently exploits graph topology, node attributes, and target model parameters for edge inference. With the proposed methods, we study the connection between model inversion risk and edge influence and show that edges with greater influence are more likely to be recovered. Extensive experiments on several public datasets demonstrate the effectiveness of our method. We also show that differential privacy in its canonical form can hardly defend against our attack while preserving decent utility.
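The projected gradient module described above can be pictured with a short sketch. The following is a minimal, illustrative PyTorch version, not the authors' implementation: it assumes a trained GNN with signature `model(x, adj)`, known node features `x` and labels `y`, and a continuous relaxation `a` of the adjacency matrix; the regularization weights and all names are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def projected_gradient_step(a, x, y, model, lr=0.1, lam_sparse=1e-3, lam_smooth=1e-3):
    # One attack step on a continuous relaxation `a` of the adjacency matrix.
    a = a.detach().requires_grad_(True)
    a_sym = (a + a.t()) / 2                          # keep the recovered graph undirected
    loss = F.cross_entropy(model(x, a_sym), y)       # fit the target model's known labels
    loss = loss + lam_sparse * a_sym.abs().sum()     # sparsity: real graphs have few edges
    dist = torch.cdist(x, x).pow(2)                  # pairwise squared feature distances
    loss = loss + lam_smooth * (a_sym * dist).sum()  # smoothness: favor edges between similar nodes
    loss.backward()
    with torch.no_grad():
        a = (a - lr * a.grad).clamp(0.0, 1.0)        # gradient step, then project onto [0, 1]
    return a.detach()
```

Iterating this step handles the discreteness of edges by optimizing in the relaxed space and projecting back onto [0, 1]; a final threshold or top-k selection would binarize the recovered adjacency.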
Related papers
- GALA: Graph Diffusion-based Alignment with Jigsaw for Source-free Domain Adaptation [13.317620250521124]
Source-free domain adaptation is a crucial machine learning topic, as it has numerous real-world applications.
Recent graph neural network (GNN) approaches can suffer from serious performance decline due to domain shift and label scarcity.
We propose a novel method named Graph Diffusion-based Alignment with Jigsaw (GALA), tailored for source-free graph domain adaptation.
arXiv Detail & Related papers (2024-10-22T01:32:46Z)
- OpenGraph: Towards Open Graph Foundation Models [20.401374302429627]
Graph Neural Networks (GNNs) have emerged as promising techniques for encoding structural information.
A key challenge remains: generalizing to unseen graph data with different properties.
We propose a novel graph foundation model, called OpenGraph, to address this challenge.
arXiv Detail & Related papers (2024-03-02T08:05:03Z)
- Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z)
- GraphGuard: Detecting and Counteracting Training Data Misuse in Graph Neural Networks [69.97213941893351]
The emergence of Graph Neural Networks (GNNs) in graph data analysis has raised critical concerns about data misuse during model training.
Existing methodologies address either data misuse detection or mitigation, and are primarily designed for local GNN models.
This paper introduces a pioneering approach, GraphGuard, to tackle these challenges.
arXiv Detail & Related papers (2023-12-13T02:59:37Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- SoftEdge: Regularizing Graph Classification with Random Soft Edges [18.165965620873745]
Graph data augmentation plays a vital role in regularizing Graph Neural Networks (GNNs).
Simple edge and node manipulations can create graphs whose structures are identical or indistinguishable to message-passing GNNs yet carry conflicting labels.
We propose SoftEdge, which assigns random weights to a portion of the edges of a given graph to construct dynamic neighborhoods over the graph.
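As a rough illustration of this idea (a sketch under stated assumptions, not the authors' code), the following assigns random soft weights to a sampled portion of the edges; the `ratio` parameter and function name are illustrative.

```python
import torch

def softedge(edge_index, ratio=0.2):
    # Replace the weight of a random portion of edges with random soft values
    # in [0, 1); all remaining edges keep weight 1.
    num_edges = edge_index.size(1)              # edge_index: 2 x E tensor
    weight = torch.ones(num_edges)
    mask = torch.rand(num_edges) < ratio        # sample the portion to soften
    weight[mask] = torch.rand(int(mask.sum()))  # assign random soft weights
    return weight                               # pass as edge_weight to the GNN
```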
arXiv Detail & Related papers (2022-04-21T20:12:36Z)
- Finding MNEMON: Reviving Memories of Node Embeddings [39.206574462957136]
We show that an adversary can recover edges with decent accuracy given access only to the node embedding matrix of the original graph.
We demonstrate the effectiveness and applicability of our graph recovery attack through extensive experiments.
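A naive baseline in the spirit of this attack (MNEMON itself is considerably more elaborate) is to connect each node to its nearest neighbors in embedding space; the sketch below assumes an N x d embedding matrix `z` and an illustrative choice of cosine similarity and `k`.

```python
import torch
import torch.nn.functional as F

def knn_edge_recovery(z, k=10):
    # Connect every node to its k most similar nodes in embedding space.
    z = F.normalize(z, dim=1)                    # cosine similarity via dot product
    sim = z @ z.t()
    sim.fill_diagonal_(float("-inf"))            # exclude self-loops
    nbrs = sim.topk(k, dim=1).indices            # N x k neighbor indices
    src = torch.arange(z.size(0)).repeat_interleave(k)
    return torch.stack([src, nbrs.reshape(-1)])  # 2 x (N*k) recovered edge_index
```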
arXiv Detail & Related papers (2022-04-14T13:44:26Z)
- Training Robust Graph Neural Networks with Topology Adaptive Edge Dropping [116.26579152942162]
Graph neural networks (GNNs) are information processing architectures that exploit graph structural information to model representations from network data.
Despite their success, GNNs suffer from sub-optimal generalization performance given limited training data.
This paper proposes Topology Adaptive Edge Dropping to improve generalization performance and learn robust GNN models.
arXiv Detail & Related papers (2021-06-05T13:20:36Z)
- Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data, which universally works in node classification, link prediction, and graph classification tasks.
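The abstract's description of iterative gradient-based feature perturbation can be sketched as follows. This is a minimal, hedged rendering of the "free" adversarial augmentation idea, assuming a GNN with signature `model(x, edge_index)`; the number of ascent steps `m`, `step_size`, and function name are assumptions.

```python
import torch
import torch.nn.functional as F

def flag_train_step(model, optimizer, x, y, edge_index, m=3, step_size=1e-3):
    # "Free" adversarial feature augmentation: run m gradient-ascent steps on a
    # feature perturbation, accumulating parameter gradients along the way,
    # then apply a single optimizer update.
    optimizer.zero_grad()
    delta = torch.zeros_like(x).uniform_(-step_size, step_size).requires_grad_(True)
    for _ in range(m):
        loss = F.cross_entropy(model(x + delta, edge_index), y) / m
        loss.backward()                          # accumulates gradients in model parameters
        grad = delta.grad.detach()
        delta = (delta + step_size * grad.sign()).detach().requires_grad_(True)
    optimizer.step()                             # one update with the averaged gradients
    return float(loss)
```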
arXiv Detail & Related papers (2020-10-19T21:51:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.