Efficient Federated Learning on Knowledge Graphs via Privacy-preserving
Relation Embedding Aggregation
- URL: http://arxiv.org/abs/2203.09553v1
- Date: Thu, 17 Mar 2022 18:32:19 GMT
- Title: Efficient Federated Learning on Knowledge Graphs via Privacy-preserving
Relation Embedding Aggregation
- Authors: Kai Zhang, Yu Wang, Hongyi Wang, Lifu Huang, Carl Yang, Lichao Sun
- Abstract summary: We propose a Federated learning paradigm with privacy-preserving Relation embedding aggregation (FedR) to tackle the privacy issue in FedE.
Compared to FedE, FedR achieves similar utility and significant (nearly 2X) improvements in both privacy and efficiency on the link prediction task.
- Score: 35.83720721128121
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) on knowledge graphs (KGs) has yet to be as well
studied as other domains, such as computer vision and natural language
processing. A recent study, FedE, first proposed an FL framework that shares
entity embeddings of KGs across all clients. However, compared with model
sharing in vanilla FL, the entity embedding sharing in FedE incurs severe
privacy leakage. Specifically, known entity embeddings can be used to infer
whether a specific relation between two entities exists on a private client. In
this paper, we first develop a novel attack that aims to recover the original
data based on embedding information, which is further used to evaluate the
vulnerabilities of FedE. Furthermore, we propose a Federated learning paradigm
with privacy-preserving Relation embedding aggregation (FedR) to tackle the
privacy issue in FedE. Compared to entity embedding sharing, the relation
embedding sharing policy can significantly reduce the communication cost,
since a KG typically contains far fewer relations than entities. We conduct
extensive experiments to evaluate FedR with
five different embedding learning models and three benchmark KG datasets.
Compared to FedE, FedR achieves similar utility and significant (nearly 2X)
improvements in both privacy and efficiency on the link prediction task.
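The relation-only aggregation at the heart of FedR can be illustrated with a minimal sketch: each client uploads only its local relation embedding table, and the server averages each relation over the clients that hold it. This is a simplification for illustration; the paper additionally applies privacy-preserving aggregation techniques that are omitted here, and the function and variable names are assumptions, not the authors' implementation.

```python
def aggregate_relation_embeddings(client_tables):
    """Average each relation's embedding over the clients that hold it.

    client_tables: list of dicts mapping relation_id -> embedding
    (list of floats). Returns a dict of averaged embeddings, which the
    server would broadcast back to clients as the shared relation table.
    """
    sums, counts = {}, {}
    for table in client_tables:
        for rel, emb in table.items():
            if rel not in sums:
                sums[rel] = [0.0] * len(emb)
                counts[rel] = 0
            # Accumulate this client's contribution element-wise.
            sums[rel] = [s + e for s, e in zip(sums[rel], emb)]
            counts[rel] += 1
    return {rel: [s / counts[rel] for s in sums[rel]] for rel in sums}

# Example: two clients share relation "r1"; "r2" exists only on client 2,
# so it passes through unchanged.
clients = [
    {"r1": [1.0, 0.0]},
    {"r1": [0.0, 1.0], "r2": [2.0, 2.0]},
]
global_table = aggregate_relation_embeddings(clients)
# global_table["r1"] == [0.5, 0.5]; global_table["r2"] == [2.0, 2.0]
```

Because only relation embeddings cross the network, nothing entity-specific leaves a client, which is what blocks the entity-level reconstruction attack described above.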
Related papers
- FewFedPIT: Towards Privacy-preserving and Few-shot Federated Instruction Tuning [54.26614091429253]
Federated instruction tuning (FedIT) is a promising solution, by consolidating collaborative training across multiple data owners.
FedIT encounters limitations such as scarcity of instructional data and risk of exposure to training data extraction attacks.
We propose FewFedPIT, designed to simultaneously enhance privacy protection and model performance of federated few-shot learning.
arXiv Detail & Related papers (2024-03-10T08:41:22Z)
- Privacy-Enhancing Collaborative Information Sharing through Federated Learning -- A Case of the Insurance Industry [1.8092553911119764]
The report demonstrates the benefits of harnessing the value of Federated Learning (FL) to learn a single model across multiple insurance industry datasets.
FL addresses two of the most pressing concerns: limited data volume and data variety, which are caused by privacy concerns.
During each round of FL, collaborators compute improvements on the model using their local private data, and these insights are combined to update a global model.
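The round structure described above (local improvements combined into a global model) is the classic FedAvg pattern, which can be sketched as a data-size-weighted average of client parameters. This is a generic illustration, not code from the cited report; the names are hypothetical.

```python
def fedavg_round(client_params, client_sizes):
    """One FedAvg-style round: combine client models into a new global
    model by averaging, weighted by each client's local data size.

    client_params: list of parameter vectors (lists of floats), one per client.
    client_sizes: number of local training examples held by each client.
    """
    total = float(sum(client_sizes))
    new_global = [0.0] * len(client_params[0])
    for params, n in zip(client_params, client_sizes):
        weight = n / total
        # Add this client's weighted contribution element-wise.
        new_global = [g + weight * p for g, p in zip(new_global, params)]
    return new_global

# Two clients: the second holds three times as much data (weight 0.75),
# so its parameters dominate the average.
updated = fedavg_round([[0.0, 0.0], [4.0, 4.0]], [1, 3])
# updated == [3.0, 3.0]
```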
arXiv Detail & Related papers (2024-02-22T21:46:24Z)
- FedRKG: A Privacy-preserving Federated Recommendation Framework via Knowledge Graph Enhancement [20.214339212091012]
Federated Learning (FL) has emerged as a promising approach for preserving data privacy in recommendation systems by training models locally.
Recent Graph Neural Networks (GNN) have gained popularity in recommendation tasks due to their ability to capture high-order interactions between users and items.
We propose FedRKG, a novel federated recommendation system, where a global knowledge graph (KG) is constructed and maintained on the server using publicly available item information.
arXiv Detail & Related papers (2024-01-20T02:38:21Z)
- Personalized Federated Learning with Attention-based Client Selection [57.71009302168411]
We propose FedACS, a new PFL algorithm with an Attention-based Client Selection mechanism.
FedACS integrates an attention mechanism to enhance collaboration among clients with similar data distributions.
Experiments on CIFAR10 and FMNIST validate FedACS's superiority.
arXiv Detail & Related papers (2023-12-23T03:31:46Z)
- Federated Learning Empowered by Generative Content [55.576885852501775]
Federated learning (FL) enables leveraging distributed private data for model training in a privacy-preserving way.
We propose a novel FL framework termed FedGC, designed to mitigate data heterogeneity issues by diversifying private data with generative content.
We conduct a systematic empirical study on FedGC, covering diverse baselines, datasets, scenarios, and modalities.
arXiv Detail & Related papers (2023-12-10T07:38:56Z)
- Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that the clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide the convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
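The clipping step analyzed in that line of work can be sketched as follows: each client update is rescaled so its L2 norm is bounded, and Gaussian noise is added to the aggregate for client-level DP. This is an illustrative sketch of the general DP-FedAvg-style mechanism, not the paper's implementation; the function names and noise calibration are assumptions.

```python
import math
import random

def clip_update(update, clip_norm):
    """Scale a client update so its L2 norm is at most clip_norm."""
    norm = math.sqrt(sum(u * u for u in update))
    if norm <= clip_norm or norm == 0.0:
        return list(update)
    scale = clip_norm / norm
    return [u * scale for u in update]

def privatize_mean(updates, clip_norm, noise_std, rng):
    """Clip each client update, average them, then add Gaussian noise."""
    clipped = [clip_update(u, clip_norm) for u in updates]
    n = len(clipped)
    mean = [sum(col) / n for col in zip(*clipped)]
    return [m + rng.gauss(0.0, noise_std) for m in mean]

# A [3, 4] update has L2 norm 5, so clipping to norm 1 rescales it
# to approximately [0.6, 0.8].
clipped = clip_update([3.0, 4.0], 1.0)
```

The clipping bias discussed in the paper arises exactly here: rescaling changes large updates before averaging, so how much the aggregate is distorted depends on the distribution of client update norms.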
arXiv Detail & Related papers (2021-06-25T14:47:19Z)
- Federated Knowledge Graphs Embedding [50.35484170815679]
We propose a novel decentralized scalable learning framework, Federated Knowledge Graphs Embedding (FKGE).
FKGE exploits adversarial generation between pairs of knowledge graphs to translate identical entities and relations of different domains into nearby embedding spaces.
In order to protect the privacy of the training data, FKGE further implements a privacy-preserving neural network structure to guarantee no raw data leakage.
arXiv Detail & Related papers (2021-05-17T05:30:41Z)
- Federated $f$-Differential Privacy [19.499120576896228]
Federated learning (FL) is a training paradigm where the clients collaboratively learn models by repeatedly sharing information.
We introduce federated $f$-differential privacy, a new notion specifically tailored to the federated setting.
We then propose a generic private federated learning framework PriFedSync that accommodates a large family of state-of-the-art FL algorithms.
arXiv Detail & Related papers (2021-02-22T16:28:21Z)