GNNBleed: Inference Attacks to Unveil Private Edges in Graphs with Realistic Access to GNN Models
- URL: http://arxiv.org/abs/2311.16139v1
- Date: Fri, 3 Nov 2023 20:26:03 GMT
- Title: GNNBleed: Inference Attacks to Unveil Private Edges in Graphs with Realistic Access to GNN Models
- Authors: Zeyu Song and Ehsanul Kabir and Shagufta Mehnaz
- Abstract summary: This paper investigates edge privacy in contexts where adversaries possess black-box GNN model access.
We introduce a series of privacy attacks grounded on the message-passing mechanism of GNNs.
- Score: 3.0509197593879844
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have increasingly become an indispensable tool
in learning from graph-structured data, catering to applications such as
social network analysis and recommendation systems. At the heart of
these networks are the edges, which are crucial in guiding the models'
predictions. In many scenarios, these edges represent sensitive information,
such as personal associations or financial dealings -- thus requiring privacy
assurance. However, the contributions of these edges to GNN predictions may in turn be
exploited by an adversary to compromise their privacy. Motivated by these
conflicting requirements, this paper investigates edge privacy in contexts
where adversaries possess black-box GNN model access, restricted further by
access controls, preventing direct insights into arbitrary node outputs. In
this context, we introduce a series of privacy attacks grounded on the
message-passing mechanism of GNNs. These strategies allow adversaries to deduce
connections between two nodes not by directly analyzing the model's output for
these pairs but by analyzing the output for nodes linked to them. Our
evaluation with seven real-life datasets and four GNN architectures underlines
a significant vulnerability: even in systems fortified with access control
mechanisms, an adaptive adversary can decipher private connections between
nodes, thereby revealing potentially sensitive relationships and compromising
the confidentiality of the graph.
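To make this access pattern concrete, below is a minimal, hypothetical Python sketch of one such influence-style probe. It assumes the adversary controls the features of one endpoint u (e.g., its own account node) and can query outputs only for probe nodes linked to the other endpoint v; the `query_model` endpoint, the probe-node selection, the noise scale, and the threshold are all illustrative assumptions, not the paper's actual attack.
```python
# Hypothetical sketch of an influence-based edge probe under a black-box,
# access-controlled setting: the adversary cannot read outputs for the
# target pair (u, v) directly, only for nodes it is permitted to query.
import numpy as np

def query_model(node_id, features):
    """Black-box GNN endpoint: return the model's posterior vector for
    `node_id` given the adversary-visible feature assignment `features`
    (a dict mapping node id -> feature vector). Placeholder for a real
    MLaaS query; an assumption for illustration."""
    raise NotImplementedError

def infer_edge(u, v_probe_nodes, base_features, noise_scale=0.5, threshold=1e-3):
    """Guess whether node u is linked to a target v by perturbing u's
    features and measuring how much the outputs of `v_probe_nodes`
    (queryable nodes adjacent to v) shift. With a k-layer GNN, a
    perturbation at u can reach v's neighborhood only if messages can
    flow through the candidate edge (u, v)."""
    baseline = {p: query_model(p, base_features) for p in v_probe_nodes}

    # Inject noise into u's features and re-query the same probe nodes.
    perturbed = dict(base_features)
    perturbed[u] = base_features[u] + noise_scale * np.random.randn(*base_features[u].shape)
    shifted = {p: query_model(p, perturbed) for p in v_probe_nodes}

    # A large aggregate output shift suggests the perturbation at u
    # propagated through (u, v) into v's neighborhood.
    influence = np.mean([np.linalg.norm(shifted[p] - baseline[p]) for p in v_probe_nodes])
    return influence > threshold
```
In practice an adversary would repeat the probe over several perturbations and calibrate the threshold per dataset and architecture; the sketch only captures the core intuition that message passing tends to carry influence from u into v's neighborhood when the edge (u, v) lies within the model's receptive field.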
Related papers
- Link Stealing Attacks Against Inductive Graph Neural Networks [60.931106032824275]
A graph neural network (GNN) is a type of neural network that is specifically designed to process graph-structured data.
Previous work has shown that transductive GNNs are vulnerable to a series of privacy attacks.
This paper conducts a comprehensive privacy analysis of inductive GNNs through the lens of link stealing attacks.
arXiv Detail & Related papers (2024-05-09T14:03:52Z)
- Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models deployed in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z)
- A Survey on Privacy in Graph Neural Networks: Attacks, Preservation, and Applications [76.88662943995641]
Graph Neural Networks (GNNs) have gained significant attention owing to their ability to handle graph-structured data, but the graphs they learn from often contain sensitive information that can be leaked.
To address this issue, researchers have started to develop privacy-preserving GNNs.
Despite this progress, there is a lack of a comprehensive overview of the attacks and the techniques for preserving privacy in the graph domain.
arXiv Detail & Related papers (2023-08-31T00:31:08Z)
- Unveiling the Role of Message Passing in Dual-Privacy Preservation on GNNs [7.626349365968476]
Graph Neural Networks (GNNs) are powerful tools for learning representations on graphs, such as social networks.
Privacy-preserving GNNs have been proposed, focusing on preserving node and/or link privacy.
We propose a principled privacy-preserving GNN framework that effectively safeguards both node and link privacy.
arXiv Detail & Related papers (2023-08-25T17:46:43Z)
- A Unified Framework of Graph Information Bottleneck for Robustness and Membership Privacy [43.11374582152925]
Graph Neural Networks (GNNs) have achieved great success in modeling graph-structured data.
However, GNNs are vulnerable to adversarial attacks that can fool the model into making attacker-desired predictions.
In this work, we study a novel problem of developing robust and membership privacy-preserving GNNs.
arXiv Detail & Related papers (2023-06-14T16:11:00Z)
- Heterogeneous Randomized Response for Differential Privacy in Graph Neural Networks [18.4005860362025]
Graph neural networks (GNNs) are susceptible to privacy inference attacks (PIAs).
We propose a novel mechanism to protect nodes' features and edges against PIAs under differential privacy (DP) guarantees.
We derive significantly better randomization probabilities and tighter error bounds at both levels of nodes' features and edges.
arXiv Detail & Related papers (2022-11-10T18:52:46Z)
- Trustworthy Graph Neural Networks: Aspects, Methods and Trends [115.84291569988748]
Graph neural networks (GNNs) have emerged as competent graph learning methods for diverse real-world scenarios.
Performance-oriented GNNs have exhibited potential adverse effects like vulnerability to adversarial attacks.
To avoid these unintentional harms, it is necessary to build competent GNNs characterised by trustworthiness.
arXiv Detail & Related papers (2022-05-16T02:21:09Z)
- A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability [59.80140875337769]
Graph Neural Networks (GNNs) have developed rapidly in recent years.
However, GNNs can leak private information, are vulnerable to adversarial attacks, and can inherit and magnify societal bias from their training data.
This paper gives a comprehensive survey of GNNs in the computational aspects of privacy, robustness, fairness, and explainability.
arXiv Detail & Related papers (2022-04-18T21:41:07Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior backdoor attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.