Stealing Links from Graph Neural Networks
- URL: http://arxiv.org/abs/2005.02131v2
- Date: Mon, 5 Oct 2020 20:38:16 GMT
- Title: Stealing Links from Graph Neural Networks
- Authors: Xinlei He, Jinyuan Jia, Michael Backes, Neil Zhenqiang Gong, and Yang Zhang
- Abstract summary: Recently, neural networks have been extended to graph data; such models are known as graph neural networks (GNNs).
Due to their superior performance, GNNs have many applications, such as healthcare analytics, recommender systems, and fraud detection.
We propose the first attacks to steal a graph from the outputs of a GNN model that is trained on the graph.
- Score: 72.85344230133248
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph data, such as chemical networks and social networks, may be deemed confidential/private because the data owner often spends substantial resources collecting the data, or because the data contains sensitive information, e.g., social relationships. Recently, neural networks have been extended to graph data; such models are known as graph neural networks (GNNs). Due to their superior performance, GNNs have many applications, such as healthcare analytics, recommender systems, and fraud detection. In this work, we propose the first attacks to steal a graph from the outputs of a GNN model that is trained on the graph. Specifically, given black-box access to a GNN model, our attacks can infer whether there exists a link between any pair of nodes in the graph used to train the model. We call our attacks link stealing attacks. We propose a threat model that systematically characterizes an adversary's background knowledge along three dimensions; since each dimension is binary (the adversary either has or lacks that piece of knowledge), this yields a comprehensive taxonomy of 2^3 = 8 different link stealing attacks. We propose multiple novel methods to realize these 8 attacks. Extensive experiments on 8 real-world datasets show that our attacks are effective at stealing links, e.g., AUC (area under the ROC curve) is above 0.95 in multiple cases. Our results indicate that the outputs of a GNN model reveal rich information about the structure of the graph used to train the model.
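The intuition behind these attacks can be made concrete with a small sketch: because a GNN aggregates features over edges, nodes that are connected in the training graph tend to receive similar posteriors, so an adversary with only black-box query access can threshold a distance between two nodes' posterior vectors. The Python sketch below illustrates this simplest, unsupervised variant only; the query_gnn interface, the choice of correlation distance, and the fixed threshold are illustrative assumptions, and the paper itself evaluates several distance metrics as well as richer attacks that use supervised classifiers and additional background knowledge.

```python
import numpy as np

def posterior_distance(p_u: np.ndarray, p_v: np.ndarray) -> float:
    """Correlation distance between two posterior vectors (one of several possible metrics)."""
    pu, pv = p_u - p_u.mean(), p_v - p_v.mean()
    denom = np.linalg.norm(pu) * np.linalg.norm(pv) + 1e-12
    return 1.0 - float(pu @ pv) / denom

def predict_link(query_gnn, u, v, threshold: float = 0.1) -> bool:
    """Predict that nodes u and v are linked in the training graph when the
    target GNN assigns them sufficiently similar posteriors.

    query_gnn(node) is a hypothetical black-box interface returning the target
    model's class-probability vector for a node; in practice the threshold
    would be calibrated, e.g., with a shadow dataset or by clustering distances.
    """
    d = posterior_distance(np.asarray(query_gnn(u)), np.asarray(query_gnn(v)))
    return d < threshold
```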
Related papers
- Link Stealing Attacks Against Inductive Graph Neural Networks [60.931106032824275]
A graph neural network (GNN) is a type of neural network that is specifically designed to process graph-structured data.
Previous work has shown that transductive GNNs are vulnerable to a series of privacy attacks.
This paper conducts a comprehensive privacy analysis of inductive GNNs through the lens of link stealing attacks.
arXiv Detail & Related papers (2024-05-09T14:03:52Z) - Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z) - Model Stealing Attacks Against Inductive Graph Neural Networks [15.334336995523302]
Graph neural networks (GNNs) have been proposed to fully leverage graph data to build powerful applications.
Previous research has shown that machine learning models are prone to model stealing attacks.
This paper proposes the first model stealing attacks against inductive GNNs.
arXiv Detail & Related papers (2021-12-15T18:29:22Z) - Adapting Membership Inference Attacks to GNN for Graph Classification:
Approaches and Implications [32.631077336656936]
Membership Inference Attack (MIA) against Graph Neural Networks (GNNs) raises severe privacy concerns.
We take the first step in MIA against GNNs for graph-level classification.
We present and implement two types of attacks, i.e., training-based attacks and threshold-based attacks, corresponding to different adversarial capabilities.
arXiv Detail & Related papers (2021-10-17T08:41:21Z) - Jointly Attacking Graph Neural Network and its Explanations [50.231829335996814]
Graph Neural Networks (GNNs) have boosted the performance for many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead the GNNs' prediction by modifying graphs.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
arXiv Detail & Related papers (2021-08-07T07:44:33Z) - Membership Inference Attack on Graph Neural Networks [1.6457778420360536]
We focus on how trained GNN models could leak information about the emphmember nodes that they were trained on.
We choose the simplest possible attack model that utilizes the posteriors of the trained model.
The surprising and worrying fact is that the attack is successful even if the target model generalizes well.
arXiv Detail & Related papers (2021-01-17T02:12:35Z) - Efficient, Direct, and Restricted Black-Box Graph Evasion Attacks to
Any-Layer Graph Neural Networks via Influence Function [62.89388227354517]
Graph neural network (GNN), the mainstream method to learn on graph data, is vulnerable to graph evasion attacks.
Existing work has at least one of the following drawbacks: 1) it is limited to directly attacking two-layer GNNs; 2) it is inefficient; and 3) it is impractical, as it needs to know all or part of the GNN model parameters.
We propose an influence-based efficient, direct, and restricted black-box evasion attack to any-layer GNNs.
arXiv Detail & Related papers (2020-09-01T03:24:51Z) - Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
arXiv Detail & Related papers (2020-05-20T17:07:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.