Quantifying and Defending against Privacy Threats on Federated Knowledge
Graph Embedding
- URL: http://arxiv.org/abs/2304.02932v1
- Date: Thu, 6 Apr 2023 08:44:49 GMT
- Title: Quantifying and Defending against Privacy Threats on Federated Knowledge
Graph Embedding
- Authors: Yuke Hu, Wei Liang, Ruofan Wu, Kai Xiao, Weiqiang Wang, Xiaochen Li,
Jinfei Liu, Zhan Qin
- Abstract summary: We conduct the first holistic study of the privacy threat on Knowledge Graph Embedding (KGE) from both attack and defense perspectives.
For the attack, we quantify the privacy threat by proposing three new inference attacks, which reveal substantial privacy risk.
For the defense, we propose DP-Flames, a novel differentially private FKGE with private selection.
- Score: 27.003706269026097
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge Graph Embedding (KGE) is a fundamental technique that extracts
expressive representation from knowledge graph (KG) to facilitate diverse
downstream tasks. The emerging federated KGE (FKGE) collaboratively trains from
distributed KGs held among clients while avoiding the exchange of clients'
sensitive raw KGs, yet it can still suffer from privacy threats, as evidenced
in other federated model training (e.g., of neural networks). However,
quantifying and defending against such privacy threats remain unexplored for
FKGE, which possesses unique properties not shared by previously studied
models. In this
paper, we conduct the first holistic study of the privacy threat on FKGE from
both attack and defense perspectives. For the attack, we quantify the privacy
threat by proposing three new inference attacks, which reveal substantial
privacy risk by successfully inferring the existence of KG triples held by
victim clients. For the defense, we propose DP-Flames, a novel differentially
private FKGE with private selection, which offers a better privacy-utility
tradeoff by exploiting the entity-binding sparse gradient property of FKGE and
comes with a tight privacy accountant by incorporating the state-of-the-art
private selection technique. We further propose an adaptive privacy budget
allocation policy to dynamically adjust defense magnitude across the training
procedure. Comprehensive evaluations demonstrate that the proposed defense can
successfully mitigate the privacy threat by effectively reducing the success
rate of inference attacks from $83.1\%$ to $59.4\%$ on average with only a
modest utility decrease.
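
The defense described above exploits the fact that each FKGE update only touches the embeddings of entities appearing in the current batch. Below is a minimal sketch of that entity-binding sparse-gradient idea: clip and noise only the touched embedding rows, with a per-round noise scale that changes over training. This is not the paper's DP-Flames algorithm; the private-selection accountant and the actual budget-allocation policy are omitted, and all names, shapes, and the schedule below are illustrative assumptions.

```python
import numpy as np

def dp_sparse_update(emb, grads, touched, clip_norm, sigma, lr=0.1, rng=None):
    """One illustrative DP update restricted to the entity rows in the batch.

    emb:     (num_entities, dim) embedding matrix, updated in place
    grads:   dict entity_id -> gradient vector for that entity
    touched: iterable of entity ids appearing in the current batch
    """
    rng = rng or np.random.default_rng()
    for e in touched:
        g = np.asarray(grads[e], dtype=float)
        # Clip the per-entity gradient to bound its sensitivity.
        norm = np.linalg.norm(g)
        if norm > clip_norm:
            g = g * (clip_norm / norm)
        # Add Gaussian noise calibrated to the clipping bound.
        noise = rng.normal(0.0, sigma * clip_norm, size=g.shape)
        emb[e] -= lr * (g + noise)
    return emb

def noise_scale(round_idx, total_rounds, sigma_start=2.0, sigma_end=0.8):
    """Placeholder adaptive schedule (assumed, not the paper's policy):
    linearly interpolate the noise multiplier across training rounds."""
    frac = round_idx / max(1, total_rounds - 1)
    return sigma_start + frac * (sigma_end - sigma_start)
```

The intuition for the utility benefit is that noise is injected only into the few embedding rows actually updated in a round, rather than into the full embedding table, so far fewer coordinates are perturbed per step.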
Related papers
- GraphTheft: Quantifying Privacy Risks in Graph Prompt Learning [1.2255617580795168]
Graph Prompt Learning (GPL) represents an innovative approach in graph representation learning, enabling task-specific adaptations by finetuning prompts without altering the underlying pre-trained model.
Despite its growing prominence, the privacy risks inherent in GPL remain unexplored.
We provide the first evaluation of privacy leakage in GPL across three attacker capabilities: black-box attacks when GPL is deployed as a service, and scenarios where node embeddings and prompt representations are accessible to third parties.
arXiv Detail & Related papers (2024-11-22T04:10:49Z) - Bayes-Nash Generative Privacy Against Membership Inference Attacks [24.330984323956173]
Membership inference attacks (MIAs) expose significant privacy risks by determining whether an individual's data is in a dataset.
We propose a game-theoretic framework that models privacy protection from MIA as a Bayesian game between a defender and an attacker.
We call the defender's data-sharing policy obtained in this way Bayes-Nash Generative Privacy (BNGP).
arXiv Detail & Related papers (2024-10-09T20:29:04Z) - Unveiling Privacy Vulnerabilities: Investigating the Role of Structure in Graph Data [17.11821761700748]
This study advances the understanding of, and protection against, privacy risks emanating from network structure.
We develop a novel graph private attribute inference attack, which acts as a pivotal tool for evaluating the potential for privacy leakage through network structures.
Our attack model poses a significant threat to user privacy, and our graph data publishing method successfully achieves the optimal privacy-utility trade-off.
arXiv Detail & Related papers (2024-07-26T07:40:54Z) - A Game-Theoretic Approach to Privacy-Utility Tradeoff in Sharing Genomic Summary Statistics [24.330984323956173]
We propose a game-theoretic framework for optimal privacy-utility tradeoffs in the sharing of genomic summary statistics.
Our experiments demonstrate that the proposed framework yields both stronger attacks and stronger defense strategies than the state of the art.
arXiv Detail & Related papers (2024-06-03T22:09:47Z) - Secure Aggregation is Not Private Against Membership Inference Attacks [66.59892736942953]
We investigate the privacy implications of secure aggregation (SecAgg) in federated learning.
We show that SecAgg offers weak privacy against membership inference attacks even in a single training round.
Our findings underscore the imperative for additional privacy-enhancing mechanisms, such as noise injection.
arXiv Detail & Related papers (2024-03-26T15:07:58Z) - Node Injection Link Stealing Attack [0.649970685896541]
We present a stealthy and effective attack that exposes privacy vulnerabilities in Graph Neural Networks (GNNs) by inferring private links within graph-structured data.
Our work highlights the privacy vulnerabilities inherent in GNNs, underscoring the importance of developing robust privacy-preserving mechanisms for their application.
arXiv Detail & Related papers (2023-07-25T14:51:01Z) - Breaking the Communication-Privacy-Accuracy Tradeoff with
$f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy ($f$-DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
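For reference, and not as a claim from this paper, $f$-DP casts privacy as a hypothesis-testing trade-off between outputs on neighboring datasets. With rejection rules $\phi$ taking values in $[0,1]$, the trade-off function of two distributions $P, Q$ is
$$T(P, Q)(\alpha) = \inf_{\phi} \{\, 1 - \mathbb{E}_{Q}[\phi] \;:\; \mathbb{E}_{P}[\phi] \le \alpha \,\},$$
and a mechanism $M$ is $f$-DP if $T(M(S), M(S')) \ge f$ for all neighboring datasets $S, S'$. Gaussian DP is the special case $f = G_\mu$ with $G_\mu(\alpha) = \Phi(\Phi^{-1}(1-\alpha) - \mu)$.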
arXiv Detail & Related papers (2023-02-19T16:58:53Z) - Combining Stochastic Defenses to Resist Gradient Inversion: An Ablation Study [6.766058964358335]
Common defense mechanisms such as Differential Privacy (DP) or Privacy Modules (PMs) introduce randomness during computation to prevent gradient inversion (GI) attacks.
This paper introduces several targeted GI attacks that exploit this randomness to bypass common defense mechanisms.
arXiv Detail & Related papers (2022-08-09T13:23:29Z) - Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive
Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z) - PRECAD: Privacy-Preserving and Robust Federated Learning via
Crypto-Aided Differential Privacy [14.678119872268198]
Federated Learning (FL) allows multiple participating clients to train machine learning models collaboratively by keeping their datasets local and only exchanging model updates.
Existing FL protocol designs have been shown to be vulnerable to attacks that aim to compromise data privacy and/or model robustness.
We develop a framework called PRECAD, which simultaneously achieves differential privacy (DP) and enhances robustness against model poisoning attacks with the help of cryptography.
arXiv Detail & Related papers (2021-10-22T04:08:42Z) - Understanding Clipping for Federated Learning: Convergence and
Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that the clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide the convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
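As a rough illustration of the client-level mechanism this line refers to (assumed names and a plain-numpy setup, not the paper's algorithm or analysis), one DP-FedAvg round clips each client's update, averages the clipped updates, and adds server-side Gaussian noise:

```python
import numpy as np

def dp_fedavg_round(global_w, client_updates, clip_norm, noise_multiplier, rng=None):
    """One illustrative DP-FedAvg round with client-level clipping."""
    rng = rng or np.random.default_rng()
    clipped = []
    for delta in client_updates:            # delta = client_w - global_w
        norm = np.linalg.norm(delta)
        scale = min(1.0, clip_norm / (norm + 1e-12))
        clipped.append(delta * scale)       # clipping biases the average when
                                            # client updates are heterogeneous
    n = len(client_updates)
    avg = np.mean(clipped, axis=0)
    # Noise std is calibrated to the per-client sensitivity clip_norm / n.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / n, size=avg.shape)
    return global_w + avg + noise
```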
arXiv Detail & Related papers (2021-06-25T14:47:19Z) - Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
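For concreteness, the two ingredients named here, per-example gradient clipping and noise addition, combine as in the following minimal DP-SGD-style step (illustrative names and plain-numpy setup, not the paper's experimental code):

```python
import numpy as np

def dp_sgd_step(w, per_example_grads, clip_norm, noise_multiplier, lr=0.05, rng=None):
    """One illustrative DP-SGD step with per-example clipping and Gaussian noise."""
    rng = rng or np.random.default_rng()
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    batch = len(per_example_grads)
    # Sum clipped gradients, add noise calibrated to clip_norm, then average.
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=w.shape)
    return w - lr * noisy_sum / batch
```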
arXiv Detail & Related papers (2020-12-14T18:59:24Z) - Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.