Federated Knowledge Graph Unlearning via Diffusion Model
- URL: http://arxiv.org/abs/2403.08554v1
- Date: Wed, 13 Mar 2024 14:06:51 GMT
- Title: Federated Knowledge Graph Unlearning via Diffusion Model
- Authors: Bingchen Liu and Yuanyuan Fang
- Abstract summary: Federated learning (FL) promotes the development and application of artificial intelligence technologies.
In this paper, we propose FedDM, a novel framework tailored for machine unlearning in federated knowledge graphs.
- Score: 5.373752180709173
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) promotes the development and application of
artificial intelligence technologies by enabling model sharing and
collaboration while safeguarding data privacy. Knowledge graph (KG) embedding
representation provides a foundation for knowledge reasoning and applications
by mapping entities and relations into vector space. Federated KG embedding
enables the utilization of knowledge from diverse client sources while
safeguarding the privacy of local data. Demands such as privacy protection and
the need to adapt to dynamic data changes have sparked research into machine
unlearning (MU). However, it is challenging to maintain the performance of KG
embedding models while removing the influence of the data to be forgotten. In
this paper, we propose FedDM, a
novel framework tailored for machine unlearning in federated knowledge graphs.
Leveraging diffusion models, we generate noisy data to selectively mitigate the
influence of specific knowledge on FL models while preserving overall
performance on the remaining data. We conduct experimental evaluations
on benchmark datasets to assess the efficacy of the proposed model. Extensive
experiments demonstrate that FedDM yields promising results in knowledge
forgetting.
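The abstract stops at a high level, so the sketch below illustrates one way the noising step could look. It assumes a TransE-style scorer and a DDPM-style forward noising schedule; every name, function, and hyperparameter in it is an assumption made for illustration, not the paper's implementation.
```python
# Illustrative sketch only, not the authors' code. Assumes a TransE-style
# scorer and a DDPM-style forward noising schedule; all names and
# hyperparameters are assumptions.
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative signal fraction

def noised_target(x0: torch.Tensor, t: int) -> torch.Tensor:
    """Forward diffusion q(x_t | x_0): blend an embedding toward pure noise."""
    eps = torch.randn_like(x0)
    return alpha_bar[t].sqrt() * x0 + (1.0 - alpha_bar[t]).sqrt() * eps

def transe_score(h, r, t_emb):
    """TransE plausibility: smaller ||h + r - t|| means a more plausible triple."""
    return (h + r - t_emb).norm(p=2, dim=-1)

def unlearn_loss(ent, rel, forget, retain, t_step=800, lam=1.0):
    """Pull the embeddings of forgotten triples toward their noised versions,
    while a plain TransE term keeps the retained triples plausible."""
    h, _, t = forget                                   # index tensors
    forget_term = (F.mse_loss(ent[h], noised_target(ent[h].detach(), t_step)) +
                   F.mse_loss(ent[t], noised_target(ent[t].detach(), t_step)))
    h2, r2, t2 = retain
    retain_term = transe_score(ent[h2], rel[r2], ent[t2]).mean()
    return forget_term + lam * retain_term
```
In a federated setting, each client would apply a loss of this shape to its own forget set during local fine-tuning before the server aggregates as usual.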
Related papers
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- Approximate Gradient Coding for Privacy-Flexible Federated Learning with Non-IID Data [9.984630251008868]
This work focuses on the challenges of non-IID data and stragglers/dropouts in federated learning.
We introduce and explore a privacy-flexible paradigm that models parts of the clients' local data as non-private.
arXiv Detail & Related papers (2024-04-04T15:29:50Z)
- KnFu: Effective Knowledge Fusion [5.305607095162403]
Federated Learning (FL) has emerged as a prominent alternative to the traditional centralized learning approach.
The paper proposes the Effective Knowledge Fusion (KnFu) algorithm, which evaluates the knowledge of local models and fuses only the effective knowledge of each client's semantic neighbors.
A key conclusion of the work is that in scenarios with large and highly heterogeneous local datasets, local training could be preferable to knowledge fusion-based solutions.
arXiv Detail & Related papers (2024-03-18T15:49:48Z)
- Privacy-Enhancing Collaborative Information Sharing through Federated Learning -- A Case of the Insurance Industry [1.8092553911119764]
The report demonstrates the benefits of using Federated Learning (FL) to learn a single model across multiple insurance industry datasets.
FL addresses two pressing issues, limited data volume and limited data variety, both of which stem from privacy constraints.
During each round of FL, collaborators compute improvements on the model using their local private data, and these insights are combined to update a global model.
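A minimal sketch of the round just described, using standard FedAvg-style weighted averaging (the helper names and the size-based weighting are assumptions, not details taken from the report):
```python
# Sketch of one FL round as described above; not this paper's code.
import copy

def fedavg_round(global_model, client_datasets, local_update):
    """Each collaborator improves the model on its private data; the server
    then combines the results by dataset-size-weighted averaging."""
    states, sizes = [], []
    for data in client_datasets:
        local = copy.deepcopy(global_model)
        local_update(local, data)      # training happens locally; raw data never leaves
        states.append(local.state_dict())
        sizes.append(len(data))
    total = sum(sizes)
    avg = {k: sum(s[k].float() * (n / total) for s, n in zip(states, sizes))
           for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model
```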
arXiv Detail & Related papers (2024-02-22T21:46:24Z)
- Federated Learning Empowered by Generative Content [55.576885852501775]
Federated learning (FL) enables leveraging distributed private data for model training in a privacy-preserving way.
We propose a novel FL framework termed FedGC, designed to mitigate data heterogeneity issues by diversifying private data with generative content.
We conduct a systematic empirical study on FedGC, covering diverse baselines, datasets, scenarios, and modalities.
arXiv Detail & Related papers (2023-12-10T07:38:56Z)
- Privacy-preserving design of graph neural networks with applications to vertical federated learning [56.74455367682945]
We present an end-to-end graph representation learning framework called VESPER.
VESPER is capable of training high-performance GNN models over both sparse and dense graphs under reasonable privacy budgets.
arXiv Detail & Related papers (2023-10-31T15:34:59Z)
- Selective Knowledge Sharing for Privacy-Preserving Federated Distillation without A Good Teacher [52.2926020848095]
Federated learning is vulnerable to white-box attacks and struggles to adapt to heterogeneous clients.
This paper proposes a selective knowledge sharing mechanism for FD, termed Selective-FD.
arXiv Detail & Related papers (2023-04-04T12:04:19Z)
- Heterogeneous Federated Knowledge Graph Embedding Learning and Unlearning [14.063276595895049]
Federated Learning (FL) is a paradigm to train a global machine learning model across distributed clients without sharing raw data.
We propose FedLU, a novel FL framework for heterogeneous KG embedding learning and unlearning.
We show that FedLU achieves superior results in both link prediction and knowledge forgetting.
arXiv Detail & Related papers (2023-02-04T02:44:48Z)
- FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning [87.08902493524556]
Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape from original data.
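One plausible reading of "locally matching the loss landscape" is gradient matching against a learned synthetic set, sketched below; the mechanism, names, and update rule are assumptions, and the paper's actual objective may differ.
```python
# Gradient-matching sketch of loss-landscape matching with synthetic data.
# An assumption about the mechanism, not the paper's method.
import torch

def refine_synthetic(model, loss_fn, real_x, real_y, syn_x, syn_y, lr=0.1):
    """One update of syn_x (a leaf tensor with requires_grad=True): make the
    gradient it induces on the model match the gradient from real local data."""
    g_real = torch.autograd.grad(loss_fn(model(real_x), real_y),
                                 model.parameters())
    g_syn = torch.autograd.grad(loss_fn(model(syn_x), syn_y),
                                model.parameters(), create_graph=True)
    mismatch = sum(((a - b) ** 2).sum() for a, b in zip(g_syn, g_real))
    grad_x, = torch.autograd.grad(mismatch, syn_x)
    with torch.no_grad():
        syn_x -= lr * grad_x           # move the synthetic batch, not the model
    return float(mismatch)
```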
arXiv Detail & Related papers (2022-07-20T04:55:18Z)
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraint.
We propose FedFTG, a data-free knowledge distillation method that fine-tunes the global model on the server.
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
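A hedged sketch of server-side data-free distillation in the spirit of this summary (the generator, loss, and loop are assumptions; FedFTG's actual objective is richer than this):
```python
# Server-side data-free distillation sketch; not FedFTG's implementation.
import torch
import torch.nn.functional as F

def distill_global(global_model, client_models, generator, z_dim=100,
                   steps=100, batch=64, lr=1e-3):
    """Fine-tune the aggregated global model on generated pseudo-data so its
    predictions track the client ensemble -- no real data reaches the server."""
    opt = torch.optim.Adam(global_model.parameters(), lr=lr)
    for _ in range(steps):
        x = generator(torch.randn(batch, z_dim)).detach()  # pseudo-samples
        with torch.no_grad():                              # ensemble "teacher"
            teacher = torch.stack([m(x) for m in client_models]).mean(dim=0)
        student = F.log_softmax(global_model(x), dim=1)
        loss = F.kl_div(student, F.softmax(teacher, dim=1),
                        reduction="batchmean")
        opt.zero_grad()
        loss.backward()
        opt.step()
    return global_model
```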
arXiv Detail & Related papers (2022-03-17T11:18:17Z)