On Knowledge Editing in Federated Learning: Perspectives, Challenges, and Future Directions
- URL: http://arxiv.org/abs/2306.01431v1
- Date: Fri, 2 Jun 2023 10:42:47 GMT
- Title: On Knowledge Editing in Federated Learning: Perspectives, Challenges, and Future Directions
- Authors: Leijie Wu, Song Guo, Junxiao Wang, Zicong Hong, Jie Zhang, Jingren Zhou
- Abstract summary: We present an extensive survey on the topic of knowledge editing (augmentation/removal) in Federated Learning.
We introduce an integrated paradigm, referred to as Federated Editable Learning (FEL), by reevaluating the entire lifecycle of FL.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As Federated Learning (FL) has gained increasing attention, it has become
widely acknowledged that naively applying stochastic gradient descent (SGD)
to the overall framework while learning over a sequence of tasks results in
the phenomenon known as ``catastrophic forgetting''. Consequently, much FL
research has centered on devising federated incremental learning methods to
alleviate forgetting while augmenting knowledge. On the other hand, forgetting
is not always detrimental. Selective amnesia, also known as federated
unlearning, which entails the elimination of specific knowledge, can address
privacy concerns and create additional ``space'' for acquiring new knowledge.
However, there is a scarcity of extensive surveys that encompass recent
advancements and provide a thorough examination of this issue. In this
manuscript, we present an extensive survey on the topic of knowledge editing
(augmentation/removal) in Federated Learning, with the goal of summarizing the
state-of-the-art research and expanding the perspective for various domains.
First, we introduce an integrated paradigm, referred to as Federated
Editable Learning (FEL), by reevaluating the entire lifecycle of FL. Second,
we provide a comprehensive overview of existing methods, evaluate their
position within the proposed paradigm, and emphasize the current challenges
they face. Finally, we explore potential avenues for future research and
identify unresolved issues.
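The two phenomena the abstract names, catastrophic forgetting under sequential training and knowledge removal via unlearning, are easy to reproduce in a toy setting. The following is a minimal sketch, not code from the paper: a hypothetical FedAvg loop (all names such as make_task and fedavg are illustrative assumptions) trained on two conflicting binary tasks, where accuracy on the first task collapses once training moves to the second, followed by the naive retrain-from-scratch unlearning baseline.

```python
# Toy illustration (hypothetical, not from the paper): sequential FedAvg on two
# conflicting tasks exhibits catastrophic forgetting; retraining without one
# client is the naive "exact unlearning" baseline.
import numpy as np

rng = np.random.default_rng(0)

def make_task(center):
    """Binary task: Gaussian blobs around +center (label 1) and -center (label 0)."""
    X = np.vstack([rng.normal(center, 1.0, (100, 2)),
                   rng.normal(-center, 1.0, (100, 2))])
    y = np.hstack([np.ones(100), np.zeros(100)])
    return X, y

def local_update(w, X, y, lr=0.1, steps=50):
    """Full-batch logistic-regression gradient steps on one client's data."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def fedavg(w, clients, rounds=20):
    """Each round: every client trains locally, the server averages the models."""
    for _ in range(rounds):
        w = np.mean([local_update(w.copy(), X, y) for X, y in clients], axis=0)
    return w

def accuracy(w, X, y):
    return np.mean(((X @ w) > 0) == y)

# Two tasks with conflicting decision boundaries, three clients each.
task_a = [make_task(np.array([2.0, 2.0])) for _ in range(3)]
task_b = [make_task(np.array([2.0, -2.0])) for _ in range(3)]

w = fedavg(np.zeros(2), task_a)
print("task A accuracy after training on A:", accuracy(w, *task_a[0]))  # ~1.0
w = fedavg(w, task_b)  # continue on task B with no safeguards
print("task A accuracy after training on B:", accuracy(w, *task_a[0]))  # ~0.5

# Naive federated unlearning: retrain from scratch without the client whose
# data must be removed. Exact but expensive; practical methods approximate it.
w_unlearned = fedavg(np.zeros(2), task_b[1:])
```

The retrain-from-scratch step at the end is the exact reference point that practical federated unlearning methods aim to approximate at a fraction of the cost, which is why it commonly serves as the gold-standard baseline in evaluations.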
Related papers
- Federated Learning with New Knowledge: Fundamentals, Advances, and Futures
This paper systematically defines the main sources of new knowledge in Federated Learning (FL).
We examine the impact of the form and timing of new knowledge arrival on the incorporation process.
We discuss the potential future directions for FL with new knowledge, considering a variety of factors such as scenario setups, efficiency, and security.
arXiv Detail & Related papers (2024-02-03T21:29:31Z)
- A Comprehensive Study of Knowledge Editing for Large Language Models
Large Language Models (LLMs) have shown extraordinary capabilities in understanding and generating text that closely mirrors human communication.
This paper defines the knowledge editing problem and provides a comprehensive review of cutting-edge approaches.
We introduce a new benchmark, KnowEdit, for a comprehensive empirical evaluation of representative knowledge editing approaches.
arXiv Detail & Related papers (2024-01-02T16:54:58Z)
- Online Continual Knowledge Learning for Language Models
Large Language Models (LLMs) serve as repositories of extensive world knowledge, enabling them to perform tasks such as question-answering and fact-checking.
Online Continual Knowledge Learning (OCKL) aims to manage the dynamic nature of world knowledge in LMs under real-time constraints.
arXiv Detail & Related papers (2023-11-16T07:31:03Z)
- Federated Learning for Generalization, Robustness, Fairness: A Survey and Benchmark
Federated learning has emerged as a promising paradigm for privacy-preserving collaboration among different parties.
We provide a systematic overview of the important and recent developments of research on federated learning.
arXiv Detail & Related papers (2023-11-12T06:32:30Z)
- A Comprehensive Survey of Forgetting in Deep Learning Beyond Continual Learning
Forgetting refers to the loss or deterioration of previously acquired knowledge.
Forgetting is a prevalent phenomenon observed in various other research domains within deep learning.
arXiv Detail & Related papers (2023-07-16T16:27:58Z)
- Causal Reinforcement Learning: A Survey
Reinforcement learning is an essential paradigm for solving sequential decision problems under uncertainty.
One of the main obstacles is that reinforcement learning agents lack a fundamental understanding of the world.
Causality offers a notable advantage as it can formalize knowledge in a systematic manner.
arXiv Detail & Related papers (2023-07-04T03:00:43Z)
- Towards Continual Knowledge Learning of Language Models
Large Language Models (LMs) are known to encode world knowledge in their parameters as they pretrain on a vast amount of web corpus.
In real-world scenarios, the world knowledge stored in the LMs can quickly become outdated as the world changes.
We formulate a new continual learning (CL) problem called Continual Knowledge Learning (CKL).
arXiv Detail & Related papers (2021-10-07T07:00:57Z)