Federated Continual Learning via Knowledge Fusion: A Survey
- URL: http://arxiv.org/abs/2312.16475v1
- Date: Wed, 27 Dec 2023 08:47:39 GMT
- Title: Federated Continual Learning via Knowledge Fusion: A Survey
- Authors: Xin Yang, Hao Yu, Xin Gao, Hao Wang, Junbo Zhang and Tianrui Li
- Abstract summary: Federated Continual Learning (FCL) is an emerging paradigm to address model learning in both federated and continual learning environments.
The key objective of FCL is to fuse heterogeneous knowledge from different clients and retain knowledge of previous tasks while learning on new ones.
In this work, we first delineate federated learning and continual learning, and then discuss their integration, i.e., FCL, with a particular focus on FCL via knowledge fusion.
- Score: 33.74289759536269
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Data privacy and data silos pose nontrivial challenges in many
real-world applications. Federated learning is a decentralized approach to
training models across multiple local clients without the exchange of raw data
from client devices to global servers. However, existing works focus on a
static data environment and ignore continual learning from streaming data with
incremental tasks. Federated Continual Learning (FCL) is an emerging paradigm
to address model learning in both federated and continual learning
environments. The key objective of FCL is to fuse heterogeneous knowledge from
different clients and retain knowledge of previous tasks while learning on new
ones. In this work, we first delineate federated learning and continual learning,
and then discuss their integration, i.e., FCL, with a particular focus on FCL via
knowledge fusion. In summary, our contributions are four-fold: we (1) raise a
fundamental problem called "spatial-temporal catastrophic forgetting" and
evaluate its impact on performance using the well-known federated averaging
(FedAvg) method, (2) integrate most existing FCL methods into two generic
frameworks, namely synchronous FCL and asynchronous FCL, (3) categorize a large
number of methods according to the mechanism involved in knowledge fusion, and
finally (4) offer an outlook on future work in FCL.
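To make the FedAvg baseline concrete, below is a minimal, illustrative sketch of a federated-averaging training loop on a toy least-squares problem. All names, the toy objective, and the hyperparameters are assumptions for illustration; they are not taken from the survey.

```python
# Minimal FedAvg sketch: clients train locally on private data; the
# server averages the returned weights, weighted by local dataset size.
import numpy as np

def local_update(w_global, X, y, lr=0.05, epochs=5):
    """A few epochs of local gradient descent on one client's data."""
    w = w_global.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

def fedavg_round(w_global, clients):
    """One communication round of federated averaging."""
    updates = [local_update(w_global, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for shift in (0.0, 1.5):  # non-IID input distributions across two clients
    X = rng.normal(shift, 1.0, size=(50, 2))
    y = X @ w_true + rng.normal(0.0, 0.1, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(30):
    w = fedavg_round(w, clients)
print(w)  # approaches w_true without any raw-data exchange
```

Running the same loop on streaming tasks whose data distributions shift over time is what exposes the spatial-temporal catastrophic forgetting that the survey evaluates with FedAvg.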
Related papers
- Federated Continual Learning for Edge-AI: A Comprehensive Survey [28.944063155195753] (arXiv, 2024-11-20)
In Edge-AI, federated continual learning (FCL) has emerged as an imperative framework.
FCL aims to ensure stable and reliable performance of learning models in dynamic and distributed environments.
The survey categorizes FCL methods based on three task characteristics: federated class continual learning, federated domain continual learning, and federated task continual learning.
- Multi-Stage Knowledge Integration of Vision-Language Models for Continual Learning [79.46570165281084] (arXiv, 2024-11-11)
We propose a Multi-Stage Knowledge Integration network (MulKI) to emulate the human learning process in distillation methods.
MulKI achieves this through four stages, including Eliciting Ideas, Adding New Ideas, Distinguishing Ideas, and Making Connections.
Our method demonstrates significant improvements in maintaining zero-shot capabilities while supporting continual learning across diverse downstream tasks.
- Federated Large Language Models: Current Progress and Future Directions [63.68614548512534] (arXiv, 2024-09-24)
This paper surveys Federated learning for LLMs (FedLLM), highlighting recent advances and future directions.
We focus on two key aspects: fine-tuning and prompt learning in a federated setting, discussing existing work and associated research challenges.
- Personalized Federated Continual Learning via Multi-granularity Prompt [33.84680453375976] (arXiv, 2024-06-27)
We propose a novel concept called multi-granularity prompt, i.e., coarse-grained global prompt and fine-grained local prompt used to personalize the generalized representation.
By the exclusive fusion of coarse-grained knowledge, we achieve the transmission and refinement of common knowledge among clients, further enhancing the performance of personalization.
- Collaborating Heterogeneous Natural Language Processing Tasks via Federated Learning [55.99444047920231] (arXiv, 2022-12-12)
We conduct extensive experiments on six widely used datasets covering both Natural Language Understanding (NLU) and Natural Language Generation (NLG) tasks.
The proposed ATC framework achieves significant improvements over various baseline methods.
- Knowledge-Aware Federated Active Learning with Non-IID Data [75.98707107158175] (arXiv, 2022-11-24)
We propose a federated active learning paradigm to efficiently learn a global model with a limited annotation budget.
The main challenge in federated active learning is the mismatch between the active sampling goal of the global model on the server and that of the local clients.
We propose Knowledge-Aware Federated Active Learning (KAFAL), which consists of Knowledge-Specialized Active Sampling (KSAS) and Knowledge-Compensatory Federated Update (KCFU).
- Federated Continual Learning for Text Classification via Selective Inter-client Transfer [21.419581793986378] (arXiv, 2022-10-12)
In this work, we combine the two paradigms, Federated Learning (FL) and Continual Learning (CL), for the text classification task in the cloud-edge continuum.
The objective of Federated Continual Learning (FCL) is to improve deep learning models over their lifetime at each client through relevant and efficient knowledge transfer, without sharing data.
Here, we address the challenge of minimizing inter-client interference while sharing knowledge across clients with heterogeneous tasks in the FCL setup.
In doing so, we propose a novel framework, Federated Selective Inter-client Transfer (FedSeIT) which selectively combines model parameters of foreign clients.
- Divergence-aware Federated Self-Supervised Learning [16.025681567222477] (arXiv, 2022-04-09)
We introduce a generalized FedSSL framework that embraces existing SSL methods based on Siamese networks.
We then propose a new approach for the model update, Federated Divergence-aware Exponential Moving Average update (FedEMA).
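The idea behind a divergence-aware EMA update can be illustrated with a short sketch: the interpolation rate between local and global weights grows with how far the two have drifted apart. The flat weight vectors, the scaling constant `lam`, and the clipping are simplifying assumptions for illustration, not the paper's exact formulation.

```python
# Divergence-aware EMA sketch: clients whose weights are close to the
# global model adopt it almost entirely; clients that have drifted
# keep more of their local knowledge.
import numpy as np

def fedema_update(w_local, w_global, lam=1.0):
    """Interpolate local weights toward the downloaded global weights.

    mu near 0: small divergence, take mostly the global model.
    mu near 1: large divergence, keep mostly the local model.
    """
    divergence = np.linalg.norm(w_global - w_local)
    mu = min(lam * divergence, 1.0)  # divergence-aware decay rate
    return mu * w_local + (1.0 - mu) * w_global

w_global = np.array([1.0, 1.0])
print(fedema_update(np.array([0.99, 1.01]), w_global))  # ~ global model
print(fedema_update(np.array([0.2, 1.9]), w_global))    # ~ local model
```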
- A Framework of Meta Functional Learning for Regularising Knowledge Transfer [89.74127682599898] (arXiv, 2022-03-28)
This work proposes a novel framework of Meta Functional Learning (MFL) by meta-learning a generalisable functional model from data-rich tasks.
MFL computes meta-knowledge on functional regularisation that generalises to different learning tasks, so that functional training on limited labelled data promotes learning more discriminative functions.
- Federated Continual Learning with Weighted Inter-client Transfer [79.93004004545736] (arXiv, 2020-03-06)
We propose a novel federated continual learning framework, Federated Weighted Inter-client Transfer (FedWeIT).
FedWeIT decomposes the network weights into global federated parameters and sparse task-specific parameters, and each client receives selective knowledge from other clients.
We validate our FedWeIT against existing federated learning and continual learning methods, and our model significantly outperforms them with a large reduction in the communication cost.
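The weight decomposition described above can be sketched as follows: a client's task weights combine a masked global base, a sparse task-adaptive term, and attention-weighted task-adaptive parameters received from other clients. The shapes, names, and softmax over attention weights are simplified assumptions for illustration, not the paper's exact parameterization.

```python
# FedWeIT-style decomposition sketch:
#   theta = base * mask + task_adaptive + sum_j alpha_j * foreign_j
import numpy as np

def compose_weights(base, mask, task_adaptive, foreign_adaptives, attention):
    """Compose one layer's weights from global and task-specific parts."""
    alphas = np.exp(attention) / np.exp(attention).sum()  # softmax selection
    transferred = sum(a * A for a, A in zip(alphas, foreign_adaptives))
    return base * mask + task_adaptive + transferred

rng = np.random.default_rng(0)
shape = (4, 4)
base = rng.normal(size=shape)                   # global federated parameters
mask = (rng.random(shape) > 0.5).astype(float)  # per-task sparse mask
task_adaptive = 0.1 * rng.normal(size=shape)    # client's task-specific term
foreign = [0.1 * rng.normal(size=shape) for _ in range(2)]  # from other clients
attention = np.zeros(2)                         # learnable selection weights
theta = compose_weights(base, mask, task_adaptive, foreign, attention)
print(theta)
```

Because only the sparse masks and task-adaptive terms need to be communicated, this decomposition is also what drives the reduction in communication cost reported above.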