Heterogeneous Federated Knowledge Graph Embedding Learning and
Unlearning
- URL: http://arxiv.org/abs/2302.02069v1
- Date: Sat, 4 Feb 2023 02:44:48 GMT
- Title: Heterogeneous Federated Knowledge Graph Embedding Learning and
Unlearning
- Authors: Xiangrong Zhu and Guangyao Li and Wei Hu
- Abstract summary: Federated Learning (FL) is a paradigm to train a global machine learning model across distributed clients without sharing raw data.
We propose FedLU, a novel FL framework for heterogeneous KG embedding learning and unlearning.
We show that FedLU achieves superior results in both link prediction and knowledge forgetting.
- Score: 14.063276595895049
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) has recently emerged as a paradigm to train a global
machine learning model across distributed clients without sharing raw data.
Knowledge Graph (KG) embedding represents KGs in a continuous vector space,
serving as the backbone of many knowledge-driven applications. As a promising
combination, federated KG embedding can fully take advantage of knowledge
learned from different clients while preserving the privacy of local data.
However, realistic problems such as data heterogeneity and knowledge forgetting
remain to be addressed. In this paper, we propose FedLU, a novel FL
framework for heterogeneous KG embedding learning and unlearning. To cope with
the drift between local optimization and global convergence caused by data
heterogeneity, we propose mutual knowledge distillation to transfer local
knowledge to the global model and absorb global knowledge back into the local
models. Moreover, we present an
unlearning method based on cognitive neuroscience, which combines retroactive
interference and passive decay to erase specific knowledge from local clients
and propagate the erasure to the global model by reusing knowledge
distillation. We
construct new datasets for assessing the realistic performance of
state-of-the-art methods. Extensive experiments show that FedLU achieves superior
results in both link prediction and knowledge forgetting.
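To make the mutual-distillation idea concrete, here is a minimal PyTorch sketch. It assumes a TransE-style local and global model and softened score distributions over all candidate tail entities; the names (`TransE`, `candidate_scores`, `mutual_kd_losses`) and the temperature value are illustrative stand-ins, not FedLU's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransE(nn.Module):
    """Minimal translational KG embedding model: score(h, r, t) = -||h + r - t||."""
    def __init__(self, num_entities, num_relations, dim=64):
        super().__init__()
        self.entity_emb = nn.Embedding(num_entities, dim)
        self.relation_emb = nn.Embedding(num_relations, dim)

    def score(self, triples):
        # triples: LongTensor of shape (batch, 3) holding (head, relation, tail) ids
        h = self.entity_emb(triples[:, 0])
        r = self.relation_emb(triples[:, 1])
        t = self.entity_emb(triples[:, 2])
        return -(h + r - t).norm(p=2, dim=-1)

def candidate_scores(model, pairs):
    """Score every entity as the tail for each (head, relation) pair."""
    h = model.entity_emb(pairs[:, 0]).unsqueeze(1)    # (batch, 1, dim)
    r = model.relation_emb(pairs[:, 1]).unsqueeze(1)  # (batch, 1, dim)
    t = model.entity_emb.weight.unsqueeze(0)          # (1, num_entities, dim)
    return -(h + r - t).norm(p=2, dim=-1)             # (batch, num_entities)

def mutual_kd_losses(local_model, global_model, pairs, temperature=2.0):
    """KL losses in both directions: local absorbs global knowledge and vice versa."""
    s_local = candidate_scores(local_model, pairs)
    s_global = candidate_scores(global_model, pairs)

    def kd(student, teacher):
        # Teacher is detached, so each loss only updates its own student model.
        return F.kl_div(F.log_softmax(student / temperature, dim=-1),
                        F.softmax(teacher.detach() / temperature, dim=-1),
                        reduction="batchmean") * temperature ** 2

    return kd(s_local, s_global), kd(s_global, s_local)
```

Because each direction uses a detached teacher, minimizing the first loss moves only the local model toward the global one, and minimizing the second moves only the global model toward the local one, which is the two-way transfer the abstract describes.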
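The unlearning mechanism can be sketched in the same spirit, again under loudly labeled assumptions: `unlearning_step`, its hyperparameters, and the concrete forms of the two cognitively inspired terms are hypothetical renderings that reuse the `TransE` sketch above. Retroactive interference is modeled as a margin loss pushing forgotten triples below retained ones, and passive decay as a shrinkage of the embeddings the forgotten triples touch.

```python
import torch

def unlearning_step(model, forget_triples, retain_triples,
                    lr=0.01, margin=1.0, decay=0.005):
    """One illustrative local unlearning step (assumes equal-sized batches)."""
    model.zero_grad()
    # Retroactive interference: a new training signal that overwrites the old,
    # forcing forgotten triples to score at least `margin` below retained ones.
    interference = torch.relu(
        margin + model.score(forget_triples) - model.score(retain_triples)
    ).mean()
    interference.backward()
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p -= lr * p.grad
        # Passive decay: embeddings of entities appearing in the forgotten
        # triples gradually fade toward zero.
        touched = torch.cat([forget_triples[:, 0], forget_triples[:, 2]]).unique()
        model.entity_emb.weight[touched] *= 1.0 - decay
```

After such local steps, the forgetting could be propagated to the server by reusing the distillation above with the unlearned local model as the teacher, matching the abstract's statement that erasure reaches the global model through knowledge distillation.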
Related papers
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on effectively utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z) - KnFu: Effective Knowledge Fusion [5.305607095162403]
Federated Learning (FL) has emerged as a prominent alternative to the traditional centralized learning approach.
The paper proposes the Effective Knowledge Fusion (KnFu) algorithm, which evaluates the knowledge of local models and fuses only the effective knowledge of semantic neighbors for each client.
A key conclusion of the work is that in scenarios with large and highly heterogeneous local datasets, local training could be preferable to knowledge fusion-based solutions.
arXiv Detail & Related papers (2024-03-18T15:49:48Z) - Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose a new algorithm, FedCSD, which performs class-prototype similarity distillation in a federated framework to align the local and global models.
arXiv Detail & Related papers (2023-08-20T04:41:01Z) - Recognizing Unseen Objects via Multimodal Intensive Knowledge Graph
Propagation [68.13453771001522]
We propose a multimodal intensive ZSL framework that matches regions of images with corresponding semantic embeddings.
We conduct extensive experiments and evaluate our model on large-scale real-world data.
arXiv Detail & Related papers (2023-06-14T13:07:48Z) - Knowledge-Aware Federated Active Learning with Non-IID Data [75.98707107158175]
We propose a federated active learning paradigm to efficiently learn a global model with a limited annotation budget.
The main challenge faced by federated active learning is the mismatch between the active sampling goal of the global model on the server and that of the local clients.
We propose Knowledge-Aware Federated Active Learning (KAFAL), which consists of Knowledge-Specialized Active Sampling (KSAS) and Knowledge-Compensatory Federated Update (KCFU).
arXiv Detail & Related papers (2022-11-24T13:08:43Z) - Handling Data Heterogeneity in Federated Learning via Knowledge
Distillation and Fusion [20.150635780778384]
Federated learning (FL) supports distributed training of a global machine learning model across multiple devices with the help of a central server.
To address data heterogeneity, we design FedKF, a federated learning scheme with global-local knowledge fusion.
The key idea in FedKF is to let the server return the global knowledge to be fused with the local knowledge in each training round.
arXiv Detail & Related papers (2022-07-23T07:20:22Z) - Fine-tuning Global Model via Data-Free Knowledge Distillation for
Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraints.
We propose a data-free knowledge distillation method to fine-tune the global model on the server (FedFTG).
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z) - Local Learning Matters: Rethinking Data Heterogeneity in Federated
Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z) - Global Knowledge Distillation in Federated Learning [3.7311680121118345]
We propose a novel global knowledge distillation method, named FedGKD, which learns knowledge from past global models to tackle the locally biased training problem.
To demonstrate the effectiveness of the proposed method, we conduct extensive experiments on various CV datasets (CIFAR-10/100) and settings (non-i.i.d. data).
arXiv Detail & Related papers (2021-06-30T18:14:24Z) - Preservation of the Global Knowledge by Not-True Self Knowledge
Distillation in Federated Learning [8.474470736998136]
In Federated Learning (FL), a strong global model is collaboratively learned by aggregating the clients' locally trained models.
We observe that fitting the biased local distribution shifts the features away from the global distribution and results in forgetting of global knowledge.
We propose a simple yet effective framework Federated Local Self-Distillation (FedLSD), which utilizes the global knowledge on locally available data.
arXiv Detail & Related papers (2021-06-06T11:51:47Z) - Data-Free Knowledge Distillation for Heterogeneous Federated Learning [31.364314540525218]
Federated Learning (FL) is a decentralized machine-learning paradigm, in which a global server iteratively averages the model parameters of local users without accessing their data.
Knowledge distillation has recently emerged to tackle data heterogeneity in FL by refining the server model using aggregated knowledge from heterogeneous users.
We propose a data-free knowledge distillation approach to address heterogeneous FL, where the server learns a lightweight generator to ensemble user information in a data-free manner.
arXiv Detail & Related papers (2021-05-20T22:30:45Z)