Node-Level Membership Inference Attacks Against Graph Neural Networks
- URL: http://arxiv.org/abs/2102.05429v1
- Date: Wed, 10 Feb 2021 13:51:54 GMT
- Title: Node-Level Membership Inference Attacks Against Graph Neural Networks
- Authors: Xinlei He, Rui Wen, Yixin Wu, Michael Backes, Yun Shen, Yang Zhang
- Abstract summary: A new family of machine learning (ML) models, namely graph neural networks (GNNs), has been introduced.
Previous studies have shown that machine learning models are vulnerable to privacy attacks.
This paper performs the first comprehensive analysis of node-level membership inference attacks against GNNs.
- Score: 29.442045622210532
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Much real-world data comes in the form of graphs, such as social networks and
protein structures. To fully utilize the information contained in graph data, a
new family of machine learning (ML) models, namely graph neural networks
(GNNs), has been introduced. Previous studies have shown that machine learning
models are vulnerable to privacy attacks. However, most of the current efforts
concentrate on ML models trained on data from the Euclidean space, such as images
and text. In contrast, the privacy risks stemming from GNNs remain largely
unstudied.
In this paper, we fill the gap by performing the first comprehensive analysis
of node-level membership inference attacks against GNNs. We systematically
define the threat models and propose three node-level membership inference
attacks based on an adversary's background knowledge. Our evaluation on three
GNN structures and four benchmark datasets shows that GNNs are vulnerable to
node-level membership inference even when the adversary has minimal background
knowledge. Moreover, we show that graph density and feature similarity have a
major impact on the attack's success. We further investigate two defense
mechanisms; the empirical results indicate that these defenses can reduce the
attack's performance, but at a moderate cost in utility.
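To make the attack setting concrete, below is a minimal sketch (illustrative only, not the authors' code) of a posterior-based node-level membership inference attack: the adversary fits an attack classifier on a shadow model's posteriors with known membership, then queries the target GNN. All names, shapes, and the placeholder data are assumptions for illustration.

```python
# Minimal sketch (illustrative, not the paper's implementation) of a
# posterior-based node-level membership inference attack against a GNN.
# Assumes the adversary has shadow posteriors with known membership and
# black-box access to the target model's per-node posteriors.
import numpy as np
from sklearn.neural_network import MLPClassifier

def attack_features(posteriors, top_k=2):
    """Sort each posterior vector and keep the top-k entries; member nodes
    tend to receive more confident (peaked) predictions."""
    s = np.sort(posteriors, axis=1)[:, ::-1]
    return s[:, :top_k]

# Shadow phase: placeholder posteriors standing in for a shadow GNN's outputs.
rng = np.random.default_rng(0)
shadow_member_post = rng.dirichlet([6.0, 1.0, 1.0], size=500)     # confident
shadow_nonmember_post = rng.dirichlet([2.0, 2.0, 2.0], size=500)  # less confident

X = np.vstack([attack_features(shadow_member_post),
               attack_features(shadow_nonmember_post)])
y = np.concatenate([np.ones(500), np.zeros(500)])

attack_model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                             random_state=0).fit(X, y)

# Attack phase: query the target GNN for node posteriors and predict membership.
target_posteriors = rng.dirichlet([6.0, 1.0, 1.0], size=10)       # placeholder queries
print(attack_model.predict(attack_features(target_posteriors)))   # 1 = predicted member
```

The three attacks in the paper differ in the adversary's background knowledge; this sketch assumes the generic shadow-training setting.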
Related papers
- Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks [50.87615167799367]
We certify Graph Neural Networks (GNNs) against poisoning attacks, including backdoors, targeting the node features of a given graph.
Our framework provides fundamental insights into the role of graph structure and its connectivity on the worst-case behavior of convolution-based and PageRank-based GNNs.
arXiv Detail & Related papers (2024-07-15T16:12:51Z) - Graph Transductive Defense: a Two-Stage Defense for Graph Membership Inference Attacks [50.19590901147213]
Graph neural networks (GNNs) have become instrumental in diverse real-world applications, offering powerful graph learning capabilities.
GNNs are vulnerable to adversarial attacks, including membership inference attacks (MIAs).
This paper proposes an effective two-stage defense, Graph Transductive Defense (GTD), tailored to graph transductive learning characteristics.
arXiv Detail & Related papers (2024-06-12T06:36:37Z) - Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z) - Property inference attack; Graph neural networks; Privacy attacks and
defense; Trustworthy machine learning [5.598383724295497]
Machine learning models are vulnerable to privacy attacks that leak information about the training data.
In this work, we focus on a particular type of privacy attack, the property inference attack (PIA).
We consider Graph Neural Networks (GNNs) as the target model, and the distribution of particular groups of nodes and links in the training graph as the target property.
arXiv Detail & Related papers (2022-09-02T14:59:37Z) - Adapting Membership Inference Attacks to GNN for Graph Classification:
Approaches and Implications [32.631077336656936]
Membership Inference Attack (MIA) against Graph Neural Networks (GNNs) raises severe privacy concerns.
We take the first step in MIA against GNNs for graph-level classification.
We present and implement two types of attacks, i.e., training-based attacks and threshold-based attacks, under different adversarial capabilities.
arXiv Detail & Related papers (2021-10-17T08:41:21Z) - Unveiling the potential of Graph Neural Networks for robust Intrusion
Detection [2.21481607673149]
We propose a novel Graph Neural Network (GNN) model to learn flow patterns of attacks structured as graphs.
Our model maintains the same level of accuracy as in previous experiments, while state-of-the-art ML techniques lose up to 50% of their accuracy (F1-score) under adversarial attacks.
arXiv Detail & Related papers (2021-07-30T16:56:39Z) - Structack: Structure-based Adversarial Attacks on Graph Neural Networks [1.795391652194214]
We study adversarial attacks that are uninformed, where an attacker only has access to the graph structure, but no information about node attributes.
We show that structure-based uninformed attacks can approach the performance of informed attacks, while being computationally more efficient.
We present a new attack strategy on GNNs that we refer to as Structack. Structack can successfully manipulate the performance of GNNs with very limited information while operating under tight computational constraints.
arXiv Detail & Related papers (2021-07-23T16:17:10Z) - Membership Inference Attack on Graph Neural Networks [1.6457778420360536]
We focus on how trained GNN models could leak information about the member nodes that they were trained on.
We choose the simplest possible attack model that utilizes the posteriors of the trained model.
The surprising and worrying fact is that the attack is successful even if the target model generalizes well.
arXiv Detail & Related papers (2021-01-17T02:12:35Z) - Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z) - Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z) - Stealing Links from Graph Neural Networks [72.85344230133248]
Recently, neural networks have been extended to graph data; the resulting models are known as graph neural networks (GNNs).
Due to their superior performance, GNNs have many applications, such as healthcare analytics, recommender systems, and fraud detection.
We propose the first attacks to steal a graph from the outputs of a GNN model that is trained on the graph (a minimal illustration of the underlying intuition follows this list).
arXiv Detail & Related papers (2020-05-05T13:22:35Z)
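As a minimal illustration of the intuition behind such link-stealing attacks (a sketch under stated assumptions, not the paper's exact method), the snippet below flags a node pair as connected when the target GNN's posteriors for the two nodes are sufficiently similar; the similarity metric, threshold, and posteriors are placeholders.

```python
# Illustrative sketch of the posterior-similarity intuition behind
# link-stealing attacks: nodes connected in the training graph tend to
# receive similar posteriors from a GNN trained on that graph.
# The threshold and placeholder posteriors are assumptions, not the paper's values.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def predict_link(post_u, post_v, threshold=0.95):
    """Predict an edge between nodes u and v if their posteriors are similar."""
    return cosine_similarity(post_u, post_v) >= threshold

# Placeholder posteriors queried from the target GNN for three nodes.
post = {
    "u": np.array([0.80, 0.15, 0.05]),
    "v": np.array([0.75, 0.20, 0.05]),   # similar to u -> predicted linked
    "w": np.array([0.10, 0.10, 0.80]),   # dissimilar -> predicted not linked
}

print(predict_link(post["u"], post["v"]))  # True
print(predict_link(post["u"], post["w"]))  # False
```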
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.