Does Black-box Attribute Inference Attacks on Graph Neural Networks
Constitute Privacy Risk?
- URL: http://arxiv.org/abs/2306.00578v1
- Date: Thu, 1 Jun 2023 11:49:43 GMT
- Title: Does Black-box Attribute Inference Attacks on Graph Neural Networks
Constitute Privacy Risk?
- Authors: Iyiola E. Olatunji, Anmar Hizber, Oliver Sihlovec, Megha Khosla
- Abstract summary: Graph neural networks (GNNs) have shown promising results on real-life datasets and applications, including healthcare, finance, and education.
Recent studies have shown that GNNs are highly vulnerable to attacks such as membership inference attack and link reconstruction attack.
We initiate the first investigation into attribute inference attacks, where an attacker aims to infer a user's sensitive attributes from her public or non-sensitive attributes.
- Score: 0.38581147665516596
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) have shown promising results on real-life
datasets and applications, including healthcare, finance, and education.
However, recent studies have shown that GNNs are highly vulnerable to attacks
such as membership inference attack and link reconstruction attack.
Surprisingly, attribute inference attacks have received little attention. In
this paper, we initiate the first investigation into attribute inference
attacks, where an attacker aims to infer a user's sensitive attributes from
her public or non-sensitive attributes. We ask whether black-box attribute
inference attacks constitute a significant privacy risk for graph-structured
data and the corresponding GNN models. We take a systematic
approach to launch the attacks by varying the adversarial knowledge and
assumptions. Our findings reveal that when an attacker has black-box access to
the target model, GNNs generally do not reveal significantly more information
compared to missing value estimation techniques. Code is available.
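To make the comparison concrete, below is a minimal, self-contained sketch (in Python, with an ordinary scikit-learn classifier standing in for the target GNN) of the two approaches the abstract contrasts: a black-box attribute inference attack that probes the model with candidate values of the sensitive attribute, and a missing-value estimation baseline that never queries the model. The stand-in model, the candidate grid, and all helper names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.impute import KNNImputer
from sklearn.neural_network import MLPClassifier

# --- Assumed setup (not from the paper): a toy dataset and a stand-in
# black-box target model that we can only query for posteriors. -------------
rng = np.random.default_rng(0)
X = rng.random((200, 5))                      # toy node features
y = (X[:, 0] + X[:, 4] > 1.0).astype(int)     # toy labels
target = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X, y)

SENSITIVE_COL = 4                             # index of the sensitive attribute
CANDIDATES = np.linspace(0.0, 1.0, 11)        # assumed candidate values

def blackbox_attribute_inference(x_public):
    """Guess the sensitive attribute by probing the black-box model:
    keep the candidate value that yields the most confident posterior."""
    best_val, best_conf = None, -1.0
    for v in CANDIDATES:
        x = x_public.copy()
        x[SENSITIVE_COL] = v
        conf = target.predict_proba(x.reshape(1, -1)).max()
        if conf > best_conf:
            best_val, best_conf = v, conf
    return best_val

def imputation_baseline(X_known, x_public):
    """Missing-value estimation baseline: KNN imputation, no model access."""
    row = x_public.copy()
    row[SENSITIVE_COL] = np.nan
    filled = KNNImputer(n_neighbors=5).fit_transform(np.vstack([X_known, row]))
    return filled[-1, SENSITIVE_COL]

victim = X[0]
print("attack guess:    ", blackbox_attribute_inference(victim))
print("imputation guess:", imputation_baseline(X[1:], victim))
print("true value:      ", victim[SENSITIVE_COL])
```

In this framing, the paper's finding corresponds to the two guesses being of roughly comparable quality when the attacker has only black-box access to the model.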
Related papers
- Link Stealing Attacks Against Inductive Graph Neural Networks [60.931106032824275]
A graph neural network (GNN) is a type of neural network that is specifically designed to process graph-structured data.
Previous work has shown that transductive GNNs are vulnerable to a series of privacy attacks.
This paper conducts a comprehensive privacy analysis of inductive GNNs through the lens of link stealing attacks.
arXiv Detail & Related papers (2024-05-09T14:03:52Z)
- Not So Robust After All: Evaluating the Robustness of Deep Neural Networks to Unseen Adversarial Attacks [5.024667090792856]
Deep neural networks (DNNs) have gained prominence in various applications, such as classification, recognition, and prediction.
A fundamental attribute of traditional DNNs is their vulnerability to modifications in input data, which has resulted in the investigation of adversarial attacks.
This study aims to challenge the efficacy and generalization of contemporary defense mechanisms against adversarial attacks.
arXiv Detail & Related papers (2023-08-12T05:21:34Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- Structack: Structure-based Adversarial Attacks on Graph Neural Networks [1.795391652194214]
We study adversarial attacks that are uninformed, where an attacker only has access to the graph structure, but no information about node attributes.
We show that structure-based uninformed attacks can approach the performance of informed attacks, while being computationally more efficient.
We present a new attack strategy on GNNs that we refer to as Structack. Structack can successfully manipulate the performance of GNNs with very limited information while operating under tight computational constraints.
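As a rough illustration of what a structure-only (uninformed) perturbation can look like, the sketch below adds a small budget of edges between low-degree nodes without ever touching node attributes. The heuristic, budget, and example graph are illustrative assumptions, not Structack's exact node-selection algorithm.

```python
import networkx as nx

def structure_only_attack(graph: nx.Graph, budget: int = 5) -> nx.Graph:
    """Uninformed, structure-only perturbation: connect pairs of low-degree
    nodes, using no node attributes at all (illustrative heuristic only)."""
    g = graph.copy()
    # nodes sorted by ascending degree; low-degree nodes are cheap to perturb
    low_degree = sorted(g.nodes, key=g.degree)
    added = 0
    for i in range(len(low_degree)):
        for j in range(i + 1, len(low_degree)):
            u, v = low_degree[i], low_degree[j]
            if not g.has_edge(u, v):
                g.add_edge(u, v)
                added += 1
                if added == budget:
                    return g
    return g

clean = nx.karate_club_graph()
attacked = structure_only_attack(clean)
print(clean.number_of_edges(), "->", attacked.number_of_edges())
```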
arXiv Detail & Related papers (2021-07-23T16:17:10Z)
- Node-Level Membership Inference Attacks Against Graph Neural Networks [29.442045622210532]
A new family of machine learning (ML) models, namely graph neural networks (GNNs), has been introduced.
Previous studies have shown that machine learning models are vulnerable to privacy attacks.
This paper performs the first comprehensive analysis of node-level membership inference attacks against GNNs.
arXiv Detail & Related papers (2021-02-10T13:51:54Z)
- Membership Inference Attack on Graph Neural Networks [1.6457778420360536]
We focus on how trained GNN models could leak information about the member nodes that they were trained on.
We choose the simplest possible attack model that utilizes the posteriors of the trained model.
The surprising and worrying fact is that the attack is successful even if the target model generalizes well.
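A "simplest possible" posterior-based attack of this kind typically reduces to thresholding the target model's confidence on the queried node, as in the brief sketch below; the threshold value and query interface are assumed for illustration, not taken from the paper.

```python
import numpy as np

def membership_inference(posteriors: np.ndarray, threshold: float = 0.8) -> np.ndarray:
    """Confidence-thresholding membership inference: flag a node as a training
    member if the target model's maximum posterior exceeds the (assumed) threshold."""
    return posteriors.max(axis=1) > threshold

# Example: posteriors returned by black-box queries to the target GNN
post = np.array([[0.97, 0.02, 0.01],   # very confident -> likely member
                 [0.40, 0.35, 0.25]])  # uncertain      -> likely non-member
print(membership_inference(post))      # [ True False]
```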
arXiv Detail & Related papers (2021-01-17T02:12:35Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
- Backdoor Attacks to Graph Neural Networks [73.56867080030091]
We propose the first backdoor attack against graph neural networks (GNNs).
In our backdoor attack, a GNN predicts an attacker-chosen target label for a testing graph once a predefined subgraph is injected into the testing graph.
Our empirical results show that our backdoor attacks are effective with a small impact on a GNN's prediction accuracy for clean testing graphs.
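To illustrate the trigger mechanism described above, the sketch below injects a small fixed subgraph into a test graph; a backdoored GNN would then be trained to map any graph containing that pattern to the attacker-chosen label. The trigger shape and attachment rule are illustrative assumptions, not the paper's construction.

```python
import networkx as nx

def inject_trigger(graph: nx.Graph, trigger_size: int = 4) -> nx.Graph:
    """Attach a fully connected 'trigger' subgraph to the input graph.
    A backdoored GNN would map any graph containing this pattern to the
    attacker-chosen label (illustrative sketch only)."""
    g = graph.copy()
    base = max(g.nodes) + 1 if g.number_of_nodes() > 0 else 0
    trigger_nodes = range(base, base + trigger_size)
    # fully connected trigger pattern
    g.add_edges_from((u, v) for u in trigger_nodes for v in trigger_nodes if u < v)
    # attach the trigger to an existing node so the graph stays connected
    if graph.number_of_nodes() > 0:
        g.add_edge(next(iter(graph.nodes)), base)
    return g

clean = nx.erdos_renyi_graph(10, 0.3, seed=1)
poisoned = inject_trigger(clean)
print(clean.number_of_nodes(), "->", poisoned.number_of_nodes())
```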
arXiv Detail & Related papers (2020-06-19T14:51:01Z)
- Stealing Links from Graph Neural Networks [72.85344230133248]
Recently, neural networks have been extended to graph data; such models are known as graph neural networks (GNNs).
Due to their superior performance, GNNs have many applications, such as healthcare analytics, recommender systems, and fraud detection.
We propose the first attacks to steal a graph from the outputs of a GNN model that is trained on the graph.
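A common way such link-stealing attacks are realized is by comparing the posteriors the target model returns for two nodes and predicting an edge when they are highly similar; the short sketch below illustrates that idea, with the similarity metric and threshold assumed for illustration rather than taken from the paper.

```python
import numpy as np

def infer_link(post_u: np.ndarray, post_v: np.ndarray, threshold: float = 0.9) -> bool:
    """Predict an edge between nodes u and v if the posteriors obtained by
    black-box queries to the target GNN are highly similar (cosine similarity
    and threshold are illustrative assumptions)."""
    cos = post_u @ post_v / (np.linalg.norm(post_u) * np.linalg.norm(post_v))
    return cos > threshold

print(infer_link(np.array([0.9, 0.05, 0.05]), np.array([0.85, 0.1, 0.05])))  # True
print(infer_link(np.array([0.9, 0.05, 0.05]), np.array([0.1, 0.1, 0.8])))    # False
```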
arXiv Detail & Related papers (2020-05-05T13:22:35Z)
- Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies [73.39668293190019]
Deep learning models can be easily fooled by small perturbations of the input.
Graph Neural Networks (GNNs) have been demonstrated to inherit this vulnerability.
In this survey, we categorize existing attacks and defenses, and review the corresponding state-of-the-art methods.
arXiv Detail & Related papers (2020-03-02T04:32:38Z)