Federated Continual Learning via Knowledge Fusion: A Survey
- URL: http://arxiv.org/abs/2312.16475v1
- Date: Wed, 27 Dec 2023 08:47:39 GMT
- Title: Federated Continual Learning via Knowledge Fusion: A Survey
- Authors: Xin Yang, Hao Yu, Xin Gao, Hao Wang, Junbo Zhang and Tianrui Li
- Abstract summary: Federated Continual Learning (FCL) is an emerging paradigm to address model learning in both federated and continual learning environments.
The key objective of FCL is to fuse heterogeneous knowledge from different clients and retain knowledge of previous tasks while learning on new ones.
- In this work, we first delineate federated learning and continual learning, then discuss their integration, i.e., FCL, and in particular FCL via knowledge fusion.
- Score: 33.74289759536269
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Data privacy and data silos pose nontrivial challenges in many
real-world applications. Federated learning is a decentralized approach to
training models across multiple local clients without the exchange of raw data
from client devices to global servers. However, existing works focus on a
static data environment and ignore continual learning from streaming data with
incremental tasks. Federated Continual Learning (FCL) is an emerging paradigm
to address model learning in both federated and continual learning
environments. The key objective of FCL is to fuse heterogeneous knowledge from
different clients and retain knowledge of previous tasks while learning on new
ones. In this work, we first delineate federated learning and continual
learning, then discuss their integration, i.e., FCL, and in particular FCL via
knowledge fusion. In summary, our motivations are four-fold: we (1) raise a
fundamental problem called "spatial-temporal catastrophic forgetting" and
evaluate its impact on performance using a well-known method called
federated averaging (FedAvg), (2) integrate most of the existing FCL methods
into two generic frameworks, namely synchronous FCL and asynchronous FCL, (3)
categorize a large number of methods according to the mechanism involved in
knowledge fusion, and finally (4) showcase an outlook on the future work of
FCL.
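Since the survey evaluates spatial-temporal catastrophic forgetting through FedAvg, a minimal sketch of the FedAvg aggregation step may help fix ideas. This is the standard size-weighted parameter average; the layer names and toy values below are illustrative, not taken from the paper.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: size-weighted average of client parameters.

    client_weights: list of dicts mapping parameter name -> np.ndarray
    client_sizes:   local dataset sizes, used as aggregation weights
    """
    total = float(sum(client_sizes))
    return {
        name: sum((n / total) * w[name]
                  for w, n in zip(client_weights, client_sizes))
        for name in client_weights[0]
    }

# Toy round: two clients, one linear layer; the larger client dominates.
clients = [{"w": np.ones((2, 2)), "b": np.zeros(2)},
           {"w": 3 * np.ones((2, 2)), "b": np.ones(2)}]
print(fedavg(clients, client_sizes=[10, 30])["w"])  # 0.25*1 + 0.75*3 = 2.5
```

In the FCL setting the abstract describes, this averaging step is where heterogeneous client knowledge is fused each round, so forgetting can accumulate both across clients (spatial) and across incremental tasks (temporal).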
Related papers
- Federated Large Language Models: Current Progress and Future Directions [63.68614548512534] (2024-09-24T04:14:33Z)
This paper surveys federated learning for LLMs (FedLLM), highlighting recent advances and future directions.
We focus on two key aspects: fine-tuning and prompt learning in a federated setting, discussing existing work and associated research challenges.
- Personalized Federated Continual Learning via Multi-granularity Prompt [33.84680453375976] (2024-06-27T13:41:37Z)
We propose a novel concept called multi-granularity prompt, i.e., a coarse-grained global prompt and a fine-grained local prompt, used to personalize the generalized representation.
By exclusively fusing coarse-grained knowledge, we achieve the transmission and refinement of common knowledge among clients, further enhancing personalization, as the sketch below illustrates.
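A rough, hypothetical sketch of the dual-prompt idea as the summary states it: only the coarse-grained global prompts are fused (averaged) across clients, while fine-grained local prompts never leave the device. Prompt shapes, the averaging rule, and all names are assumptions for illustration, not the paper's exact fusion mechanism.

```python
import numpy as np

class PromptedClient:
    """Hypothetical client with a shared coarse prompt and a private fine prompt."""
    def __init__(self, prompt_len=8, dim=64, seed=0):
        rng = np.random.default_rng(seed)
        self.global_prompt = rng.normal(size=(prompt_len, dim))  # coarse-grained, fused
        self.local_prompt = rng.normal(size=(prompt_len, dim))   # fine-grained, private

def fuse_coarse_prompts(clients):
    """Exclusive fusion sketch: aggregate only the coarse-grained prompts."""
    fused = np.mean([c.global_prompt for c in clients], axis=0)
    for c in clients:
        c.global_prompt = fused.copy()  # common knowledge transmitted to every client
        # c.local_prompt is untouched, preserving personalization

clients = [PromptedClient(seed=s) for s in range(4)]
fuse_coarse_prompts(clients)
```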
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957] (2023-09-18T12:35:05Z)
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific, auto-tuned learning rate scheduling converges and achieves linear speedup with respect to the number of clients.
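The summary names the optimizer family but not the schedule, so the sketch below is hedged: a plain AMSGrad step run locally, with a per-client `lr` standing in for FedLALR's auto-tuned learning rate (the actual scheduling rule is in the paper).

```python
import numpy as np

class LocalAMSGrad:
    """Per-client AMSGrad; `lr` is client-specific, a stand-in for FedLALR's schedule."""
    def __init__(self, dim, lr, beta1=0.9, beta2=0.999, eps=1e-8):
        self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps
        self.m = np.zeros(dim)       # first-moment (momentum) estimate
        self.v = np.zeros(dim)       # second-moment estimate
        self.v_hat = np.zeros(dim)   # running max of v: the AMSGrad correction

    def step(self, theta, grad):
        self.m = self.beta1 * self.m + (1 - self.beta1) * grad
        self.v = self.beta2 * self.v + (1 - self.beta2) * grad ** 2
        self.v_hat = np.maximum(self.v_hat, self.v)  # enforce non-increasing step sizes
        return theta - self.lr * self.m / (np.sqrt(self.v_hat) + self.eps)

# Each client runs several such local steps with its own lr;
# the server then averages the resulting models as in FedAvg.
```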
- Collaborating Heterogeneous Natural Language Processing Tasks via Federated Learning [55.99444047920231] (2022-12-12T09:27:50Z)
We conduct extensive experiments on six widely-used datasets covering both Natural Language Understanding (NLU) and Natural Language Generation (NLG) tasks.
The proposed ATC framework achieves significant improvements over various baseline methods.
- Knowledge-Aware Federated Active Learning with Non-IID Data [75.98707107158175] (2022-11-24T13:08:43Z)
We propose a federated active learning paradigm to efficiently learn a global model with a limited annotation budget.
The main challenge faced by federated active learning is the mismatch between the active sampling goal of the global model on the server and that of the local clients.
We propose Knowledge-Aware Federated Active Learning (KAFAL), which consists of Knowledge-Specialized Active Sampling (KSAS) and Knowledge-Compensatory Federated Update (KCFU).
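To make the sampling-goal mismatch concrete, here is one illustrative discrepancy-based criterion, not necessarily KSAS itself: score each unlabeled sample by the KL divergence between the local and global models' predictions and annotate where they disagree most.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Row-wise KL(p || q) for arrays of class-probability vectors."""
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def disagreement_query(local_probs, global_probs, budget):
    """Select the `budget` unlabeled samples where local and global models
    disagree most; both inputs are (n_samples, n_classes) softmax outputs."""
    scores = kl_divergence(local_probs, global_probs)
    return np.argsort(-scores)[:budget]  # indices to send for annotation
```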
- Federated Continual Learning for Text Classification via Selective Inter-client Transfer [21.419581793986378] (2022-10-12T11:24:13Z)
In this work, we combine the two paradigms, Federated Learning (FL) and Continual Learning (CL), for the text classification task in the cloud-edge continuum.
The objective of Federated Continual Learning (FCL) is to improve deep learning models over their lifetime at each client through relevant and efficient knowledge transfer without sharing data.
Here, we address the challenge of minimizing inter-client interference during knowledge sharing, which arises from heterogeneous tasks across clients in the FCL setup.
To this end, we propose a novel framework, Federated Selective Inter-client Transfer (FedSeIT), which selectively combines model parameters of foreign clients.
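A hedged sketch of "selectively combining model parameters of foreign clients": weight each foreign client's task-specific parameters by a relevance score and add the top-k mixture to the local parameters. The similarity source (e.g., task-embedding similarity) and the additive combination are assumptions, not FedSeIT's exact design.

```python
import numpy as np

def selective_transfer(own_params, foreign_params, similarities, top_k=2):
    """Combine the top-k most relevant foreign clients' parameters with our own.

    own_params:     this client's task-specific parameters (np.ndarray)
    foreign_params: list of np.ndarray, one per foreign client
    similarities:   non-negative relevance score per foreign client
    """
    sims = np.asarray(similarities, dtype=float)
    order = np.argsort(-sims)[:top_k]          # most relevant foreign clients
    weights = sims[order] / sims[order].sum()  # normalize scores into mixing weights
    transferred = sum(w * foreign_params[i] for w, i in zip(weights, order))
    return own_params + transferred
```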
- Exploring Semantic Attributes from A Foundation Model for Federated Learning of Disjoint Label Spaces [46.59992662412557] (2022-08-29T10:05:49Z)
In this work, we consider transferring mid-level semantic knowledge (such as attributes), which is not sensitive to the specific objects of interest.
We formulate a new Federated Zero-Shot Learning (FZSL) paradigm to learn mid-level semantic knowledge at multiple local clients.
To improve model discriminative ability, we propose to explore semantic knowledge augmentation from external knowledge.
- Divergence-aware Federated Self-Supervised Learning [16.025681567222477] (2022-04-09T04:15:02Z)
We introduce a generalized FedSSL framework that embraces existing SSL methods based on Siamese networks.
We then propose a new approach for model update, Federated Divergence-aware Exponential Moving Average update (FedEMA).
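Following the abstract's description, FedEMA updates the local model as an exponential moving average of itself and the global model, with the decay rate tied to how far the two have diverged. The sketch below is one reading of that rule; `lam` is an assumed hyperparameter.

```python
import numpy as np

def fedema_update(local, global_, lam=1.0):
    """Divergence-aware EMA update of a local model toward the global model.

    local, global_: dicts mapping parameter name -> np.ndarray
    lam:            scales divergence into the EMA decay (assumed hyperparameter)
    """
    # Decay mu grows with the L2 distance between global and local parameters,
    # so strongly diverged clients keep more of their personalized weights.
    div = np.sqrt(sum(np.sum((global_[k] - local[k]) ** 2) for k in local))
    mu = min(lam * div, 1.0)
    return {k: mu * local[k] + (1 - mu) * global_[k] for k in local}
```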
- A Framework of Meta Functional Learning for Regularising Knowledge Transfer [89.74127682599898] (2022-03-28T15:24:09Z)
This work proposes a novel framework of Meta Functional Learning (MFL) by meta-learning a generalisable functional model from data-rich tasks.
MFL computes meta-knowledge on functional regularisation that generalises across learning tasks, so that functional training on limited labelled data promotes learning more discriminative functions.
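One generic way to read "functional regularisation", offered here only as an illustration: penalise the learner's outputs for drifting from a meta-learned reference function on the limited labelled data. MFL's actual meta-learned regulariser is richer than this fixed penalty.

```python
import numpy as np

def functional_reg_loss(preds, targets, meta_preds, lam=0.1):
    """Cross-entropy task loss plus a generic functional-regularisation term.

    preds:      learner's class probabilities on limited labelled data
    targets:    one-hot labels, same shape as preds
    meta_preds: reference outputs from a meta-learned functional model
    lam:        regularisation strength (assumed)
    """
    task_loss = -np.mean(np.sum(targets * np.log(preds + 1e-12), axis=-1))
    func_reg = np.mean(np.sum((preds - meta_preds) ** 2, axis=-1))
    return task_loss + lam * func_reg
```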
- Federated Continual Learning with Weighted Inter-client Transfer [79.93004004545736] (2020-03-06T13:33:48Z)
We propose a novel federated continual learning framework, Federated Weighted Inter-client Transfer (FedWeIT).
FedWeIT decomposes the network weights into global federated parameters and sparse task-specific parameters, and each client receives selective knowledge from other clients.
We validate our FedWeIT against existing federated learning and continual learning methods, and our model significantly outperforms them with a large reduction in the communication cost.
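The decomposition the summary describes can be sketched directly: a client's weights are a masked dense global base plus its own sparse task-adaptive parameters, plus an attention-weighted sum of other clients' task-adaptive parameters. Shapes, the sparsity pattern, and the attention weights below are illustrative.

```python
import numpy as np

def compose_weights(base, mask, task_adaptive, foreign_adaptive, alphas):
    """FedWeIT-style weight composition.

    base:             dense global federated parameters
    mask:             per-task mask selecting the relevant part of the base
    task_adaptive:    this client's sparse task-specific parameters
    foreign_adaptive: other clients' task-adaptive parameters (selective knowledge)
    alphas:           learned attention weights over foreign knowledge
    """
    theta = base * mask + task_adaptive
    theta = theta + sum(a * A for a, A in zip(alphas, foreign_adaptive))
    return theta

# Toy composition: one 3-parameter layer, two foreign clients.
theta = compose_weights(base=np.array([1.0, 2.0, 3.0]),
                        mask=np.array([1.0, 0.0, 1.0]),
                        task_adaptive=np.array([0.1, 0.0, 0.0]),
                        foreign_adaptive=[np.array([0.0, 0.5, 0.0]),
                                          np.array([0.2, 0.0, 0.0])],
                        alphas=[0.3, 0.7])
```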
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.