GraphTheft: Quantifying Privacy Risks in Graph Prompt Learning
- URL: http://arxiv.org/abs/2411.14718v1
- Date: Fri, 22 Nov 2024 04:10:49 GMT
- Title: GraphTheft: Quantifying Privacy Risks in Graph Prompt Learning
- Authors: Jiani Zhu, Xi Lin, Yuxin Qi, Qinghua Mao
- Abstract summary: Graph Prompt Learning (GPL) represents an innovative approach in graph representation learning, enabling task-specific adaptations by finetuning prompts without altering the underlying pre-trained model.
Despite its growing prominence, the privacy risks inherent in GPL remain unexplored.
We provide the first evaluation of privacy leakage in GPL across three attacker capabilities: black-box attacks when GPL is offered as a service, and scenarios where node embeddings and prompt representations are accessible to third parties.
- Score: 1.2255617580795168
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Prompt Learning (GPL) represents an innovative approach in graph representation learning, enabling task-specific adaptations by fine-tuning prompts without altering the underlying pre-trained model. Despite its growing prominence, the privacy risks inherent in GPL remain unexplored. In this study, we provide the first evaluation of privacy leakage in GPL across three attacker capabilities: black-box attacks when GPL is offered as a service, and scenarios where node embeddings and prompt representations are accessible to third parties. We assess GPL's privacy vulnerabilities through Attribute Inference Attacks (AIAs) and Link Inference Attacks (LIAs), finding that under any of these capabilities attackers can effectively infer the properties and relationships of sensitive nodes, with inference success rates on some datasets as high as 98%. Importantly, while targeted inference attacks on specific prompts (e.g., GPF-plus) maintain high success rates, our analysis suggests that prompt-tuning in GPL does not significantly elevate privacy risks compared to traditional GNNs. To mitigate these risks, we explore defense mechanisms and identify that Laplacian noise perturbation can substantially reduce inference success, though balancing privacy protection with model performance remains challenging. This work highlights critical privacy risks in GPL, offering new insights and foundational directions for future privacy-preserving strategies in graph learning.
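The two ingredients of the abstract, attribute inference from exposed node embeddings and a Laplacian-noise defense, can be illustrated with a minimal sketch. This is not the paper's code: the synthetic embeddings, the logistic-regression attacker, and the noise scale are illustrative assumptions only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for node embeddings exposed by a GPL service (n nodes, d dims),
# correlated with a binary sensitive attribute the attacker wants to infer.
n, d = 2000, 64
sensitive = rng.integers(0, 2, size=n)
embeddings = rng.normal(size=(n, d)) + sensitive[:, None] * 0.5

# Attribute Inference Attack: the attacker knows the attribute for a small
# "shadow" subset and trains a classifier to infer it for the remaining nodes.
shadow, target = np.arange(200), np.arange(200, n)
attacker = LogisticRegression(max_iter=1000).fit(embeddings[shadow], sensitive[shadow])
print("AIA accuracy without defense:", attacker.score(embeddings[target], sensitive[target]))

# Defense: perturb embeddings with Laplacian noise before releasing them.
noise_scale = 1.0  # larger scale -> stronger protection, lower downstream utility
noisy = embeddings + rng.laplace(scale=noise_scale, size=embeddings.shape)
attacker_noisy = LogisticRegression(max_iter=1000).fit(noisy[shadow], sensitive[shadow])
print("AIA accuracy with Laplace noise:", attacker_noisy.score(noisy[target], sensitive[target]))
```

Raising the Laplace scale drives the attacker's accuracy toward chance but also degrades the embeddings for legitimate downstream tasks, mirroring the privacy-utility trade-off the abstract highlights.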
Related papers
- Learning-based Privacy-Preserving Graph Publishing Against Sensitive Link Inference Attacks [14.766917415961348]
We propose the first privacy-preserving graph structure learning framework against sensitive link inference attacks. The framework, named PPGSL, can automatically learn a graph with the optimal privacy-utility trade-off. PPGSL achieves state-of-the-art privacy-utility trade-off performance and effectively thwarts various sensitive link inference attacks.
arXiv Detail & Related papers (2025-07-23T04:19:29Z) - DP-GPL: Differentially Private Graph Prompt Learning [8.885929731174492]
We propose DP-GPL for differentially private graph prompt learning based on the PATE framework.
We show that our algorithm achieves high utility at strong privacy, effectively mitigating privacy concerns.
arXiv Detail & Related papers (2025-03-13T16:58:07Z) - Prompt-based Unifying Inference Attack on Graph Neural Networks [24.85661326294946]
We propose a novel Prompt-based unifying Inference Attack framework on Graph neural networks (GNNs)
ProIA retains the crucial topological information of the graph during pre-training, enhancing the background knowledge of the inference attack model.
It then utilizes a unified prompt and introduces additional disentanglement factors in downstream attacks to adapt to task-relevant knowledge.
arXiv Detail & Related papers (2024-12-20T09:56:17Z) - Bayes-Nash Generative Privacy Against Membership Inference Attacks [24.330984323956173]
We propose a game-theoretic framework modeling privacy protection as a Bayesian game between defender and attacker. To address strategic complexity, we represent the defender's mixed strategy as a neural network generator mapping private datasets to public representations. Our approach significantly outperforms state-of-the-art methods by generating stronger attacks and achieving better privacy-utility tradeoffs.
arXiv Detail & Related papers (2024-10-09T20:29:04Z) - Unveiling Privacy Vulnerabilities: Investigating the Role of Structure in Graph Data [17.11821761700748]
This study advances the understanding and protection against privacy risks emanating from network structure.
We develop a novel graph private attribute inference attack, which acts as a pivotal tool for evaluating the potential for privacy leakage through network structures.
Our attack model poses a significant threat to user privacy, and our graph data publishing method successfully achieves the optimal privacy-utility trade-off.
arXiv Detail & Related papers (2024-07-26T07:40:54Z) - Cross-Context Backdoor Attacks against Graph Prompt Learning [33.06520915998661]
Graph Prompt Learning (GPL) bridges disparities between pretraining and downstream applications to alleviate the knowledge transfer bottleneck in real-world graph learning.
However, backdoor poisoning effects embedded in pre-trained models remain largely unexplored.
We introduce CrossBA, the first cross-context backdoor attack against GPL, which manipulates only the pretraining phase without requiring knowledge of downstream applications.
arXiv Detail & Related papers (2024-05-28T09:17:58Z) - GNNBleed: Inference Attacks to Unveil Private Edges in Graphs with
Realistic Access to GNN Models [3.0509197593879844]
This paper investigates edge privacy in contexts where adversaries possess black-box GNN model access.
We introduce a series of privacy attacks grounded on the message-passing mechanism of GNNs.
arXiv Detail & Related papers (2023-11-03T20:26:03Z) - Blink: Link Local Differential Privacy in Graph Neural Networks via
Bayesian Estimation [79.64626707978418]
We propose using link local differential privacy over decentralized nodes to train graph neural networks.
Our approach spends the privacy budget separately on links and degrees of the graph for the server to better denoise the graph topology.
Our approach outperforms existing methods in terms of accuracy under varying privacy budgets.
arXiv Detail & Related papers (2023-09-06T17:53:31Z) - A Survey on Privacy in Graph Neural Networks: Attacks, Preservation, and
Applications [76.88662943995641]
Graph Neural Networks (GNNs) have gained significant attention owing to their ability to handle graph-structured data.
To address privacy concerns, researchers have started to develop privacy-preserving GNNs.
Despite this progress, there is a lack of a comprehensive overview of the attacks and the techniques for preserving privacy in the graph domain.
arXiv Detail & Related papers (2023-08-31T00:31:08Z) - Independent Distribution Regularization for Private Graph Embedding [55.24441467292359]
Graph embeddings are susceptible to attribute inference attacks, which allow attackers to infer private node attributes from the learned graph embeddings.
To address these concerns, privacy-preserving graph embedding methods have emerged.
We propose a novel approach called Private Variational Graph AutoEncoders (PVGAE) with the aid of independent distribution penalty as a regularization term.
arXiv Detail & Related papers (2023-08-16T13:32:43Z) - Node Injection Link Stealing Attack [0.649970685896541]
We present a stealthy and effective attack that exposes privacy vulnerabilities in Graph Neural Networks (GNNs) by inferring private links within graph-structured data.
Our work highlights the privacy vulnerabilities inherent in GNNs, underscoring the importance of developing robust privacy-preserving mechanisms for their application.
arXiv Detail & Related papers (2023-07-25T14:51:01Z) - Quantifying and Defending against Privacy Threats on Federated Knowledge
Graph Embedding [27.003706269026097]
We conduct the first holistic study of the privacy threat on Knowledge Graph Embedding (KGE) from both attack and defense perspectives.
For the attack, we quantify the privacy threat by proposing three new inference attacks, which reveal substantial privacy risk.
For the defense, we propose DP-Flames, a novel differentially private FKGE with private selection.
arXiv Detail & Related papers (2023-04-06T08:44:49Z) - Privacy-Preserved Neural Graph Similarity Learning [99.78599103903777]
We propose a novel Privacy-Preserving neural Graph Matching network model, named PPGM, for graph similarity learning.
To prevent reconstruction attacks, the proposed model does not communicate node-level representations between devices.
To alleviate the attacks to graph properties, the obfuscated features that contain information from both vectors are communicated.
arXiv Detail & Related papers (2022-10-21T04:38:25Z) - Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive
Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z) - Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)