Label-Only Membership Inference Attack against Node-Level Graph Neural Networks
- URL: http://arxiv.org/abs/2207.13766v1
- Date: Wed, 27 Jul 2022 19:46:26 GMT
- Title: Label-Only Membership Inference Attack against Node-Level Graph Neural Networks
- Authors: Mauro Conti, Jiaxin Li, Stjepan Picek, and Jing Xu
- Abstract summary: Graph Neural Networks (GNNs) are vulnerable to Membership Inference Attacks (MIAs).
We propose a label-only MIA against GNNs for node classification with the help of GNNs' flexible prediction mechanism.
Our attacking method achieves around 60% accuracy, precision, and Area Under the Curve (AUC) for most datasets and GNN models.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs), inspired by Convolutional Neural
Networks (CNNs), aggregate messages from nodes' neighbors together with graph
structure information to learn expressive node representations for node
classification, graph classification, and link prediction. Previous studies
have indicated that GNNs are vulnerable to Membership Inference Attacks
(MIAs), which infer whether a node is in a GNN's training data and thereby
leak the node's private information, such as a patient's disease history.
Previous MIAs rely on the model's probability output, which makes them
infeasible when GNNs provide only the prediction label (label-only) for the
input.
In this paper, we propose a label-only MIA against GNNs for node
classification with the help of GNNs' flexible prediction mechanism, e.g.,
obtaining the prediction label of one node even when neighbors' information is
unavailable. Our attack achieves around 60% accuracy, precision, and Area
Under the Curve (AUC) for most datasets and GNN models, results that are
competitive with, and in some cases better than, state-of-the-art
probability-based MIAs implemented in our environment and settings.
Additionally, we analyze the influence of the sampling method, the model
selection approach, and the overfitting level on the attack performance of
our label-only MIA; all three factors affect the attack performance. Then, we
consider scenarios where
assumptions about the adversary's additional dataset (shadow dataset) and extra
information about the target model are relaxed. Even in those scenarios, our
label-only MIA achieves better attack performance in most cases. Finally, we
explore the effectiveness of possible defenses, including Dropout,
Regularization, Normalization, and Jumping Knowledge. None of these four
defenses completely prevents our attack.
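
For intuition, here is a minimal sketch of a label-only membership signal in
the spirit of the abstract: query the target GNN for a node's predicted label
once with its full neighborhood and once with neighbor information withheld,
and guess "member" if the two labels agree. The decision rule, the names, and
the forward(x, edge_index) model interface are illustrative assumptions, not
the authors' published attack.

```python
# Hypothetical sketch of a label-only membership signal for a node-level GNN.
# Assumes a trained model with a PyTorch Geometric-style forward(x, edge_index);
# the decision rule below is an assumption, not the paper's exact attack.
import torch


@torch.no_grad()
def label_only_membership_guess(target_model, x, edge_index, node_idx):
    """Return True if `node_idx` is guessed to be a training member."""
    target_model.eval()

    # Query 1: predicted label with the node's full neighborhood available.
    label_full = target_model(x, edge_index)[node_idx].argmax().item()

    # Query 2: predicted label with neighbor information withheld, one way to
    # realize the "prediction without neighbors" query the abstract mentions:
    # drop every edge incident to the node before querying.
    keep = (edge_index[0] != node_idx) & (edge_index[1] != node_idx)
    label_isolated = target_model(x, edge_index[:, keep])[node_idx].argmax().item()

    # Heuristic: training members tend to keep the same predicted label even
    # when their neighborhood is removed, so agreement suggests membership.
    return label_full == label_isolated
```

In practice, an adversary would calibrate such a rule (or a threshold over
several perturbed queries) on a shadow model trained on the shadow dataset
mentioned above.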
Related papers
- Rethinking Independent Cross-Entropy Loss For Graph-Structured Data (arXiv, 2024-05-24)
Graph neural networks (GNNs) have exhibited prominent performance in learning graph-structured data.
In this work, we propose a new framework, termed joint-cluster supervised learning, to model the joint distribution of each node with its corresponding cluster.
In this way, the data-label reference signals extracted from the local cluster explicitly strengthen the discrimination ability on the target node.
- Link Stealing Attacks Against Inductive Graph Neural Networks (arXiv, 2024-05-09)
A graph neural network (GNN) is a type of neural network that is specifically designed to process graph-structured data.
Previous work has shown that transductive GNNs are vulnerable to a series of privacy attacks.
This paper conducts a comprehensive privacy analysis of inductive GNNs through the lens of link stealing attacks.
- Graph Agent Network: Empowering Nodes with Inference Capabilities for Adversarial Resilience (arXiv, 2023-06-12)
We propose the Graph Agent Network (GAgN) to address the vulnerabilities of graph neural networks (GNNs).
GAgN is a graph-structured agent network in which each node is designed as a 1-hop-view agent.
Agents' limited view prevents malicious messages from propagating globally in GAgN, thereby resisting global-optimization-based secondary attacks.
- Uncertainty Quantification over Graph with Conformalized Graph Neural Networks (arXiv, 2023-05-23)
Graph Neural Networks (GNNs) are powerful machine learning prediction models on graph-structured data.
GNNs lack rigorous uncertainty estimates, limiting their reliable deployment in settings where the cost of errors is significant.
We propose conformalized GNN (CF-GNN), extending conformal prediction (CP) to graph-based models for guaranteed uncertainty estimates.
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation (arXiv, 2022-11-15)
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely, the Graph Injection Attack (GIA).
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
- GANI: Global Attacks on Graph Neural Networks via Imperceptible Node Injections (arXiv, 2022-10-23)
Graph neural networks (GNNs) have found successful applications in various graph-related tasks.
Recent studies have shown that many GNNs are vulnerable to adversarial attacks.
In this paper, we focus on a realistic attack operation via injecting fake nodes.
- Model Inversion Attacks against Graph Neural Networks (arXiv, 2022-09-16)
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
- Adapting Membership Inference Attacks to GNN for Graph Classification: Approaches and Implications (arXiv, 2021-10-17)
Membership Inference Attack (MIA) against Graph Neural Networks (GNNs) raises severe privacy concerns.
We take the first step in MIA against GNNs for graph-level classification.
We present and implement two types of attacks, i.e., training-based attacks and threshold-based attacks, under different adversarial capabilities.
- Membership Inference Attack on Graph Neural Networks (arXiv, 2021-01-17)
We focus on how trained GNN models could leak information about the member nodes that they were trained on.
We choose the simplest possible attack model that utilizes the posteriors of the trained model; a generic sketch of this posterior-threshold style of attack appears after this list.
The surprising and worrying fact is that the attack is successful even if the target model generalizes well.
- Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks (arXiv, 2020-09-30)
Graph Neural Networks (GNNs) are generalizations of neural networks to graph-structured data.
GNNs are vulnerable to adversarial attacks, i.e., a small perturbation to the structure can lead to a non-trivial performance degradation.
We propose Uncertainty Matching GNN (UM-GNN), which is aimed at improving the robustness of GNN models.
- Graph Backdoor (arXiv, 2020-06-21)
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
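
For contrast with the label-only setting of the main paper, the posterior-based
attack summarized in "Membership Inference Attack on Graph Neural Networks"
above reduces, in its simplest form, to thresholding the target model's
prediction confidence. The sketch below shows only that generic baseline; the
model interface and the fixed threshold are assumptions, and a real adversary
would tune the threshold on a shadow model.

```python
# Hypothetical sketch of the generic posterior-threshold membership baseline
# that probability-based MIAs build on; not the exact attack of any paper above.
import torch


@torch.no_grad()
def posterior_threshold_guess(target_model, x, edge_index, node_idx, threshold=0.9):
    """Guess membership from the target model's softmax confidence for one node."""
    target_model.eval()
    probs = torch.softmax(target_model(x, edge_index), dim=-1)
    confidence = probs[node_idx].max().item()
    # Members are usually predicted with higher confidence than non-members;
    # the threshold (0.9 here) is an assumption and would normally be
    # calibrated on a shadow model/dataset.
    return confidence >= threshold
```

A label-only adversary cannot compute this confidence, which is exactly the gap
the main paper's attack closes with label-only queries.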