FedKNOW: Federated Continual Learning with Signature Task Knowledge
Integration at Edge
- URL: http://arxiv.org/abs/2212.01738v1
- Date: Sun, 4 Dec 2022 04:03:44 GMT
- Title: FedKNOW: Federated Continual Learning with Signature Task Knowledge
Integration at Edge
- Authors: Yaxin Luopan, Rui Han, Qinglong Zhang, Chi Harold Liu, Guoren Wang
- Abstract summary: We propose FedKNOW, an accurate and scalable federated continual learning framework.
FedKNOW is a client-side solution that continuously extracts and integrates the knowledge of signature tasks.
We show that FedKNOW improves model accuracy by 63.24% without increasing model training time, reduces communication cost by 34.28%, and achieves even larger improvements in difficult scenarios.
- Score: 35.80543542333692
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks (DNNs) have been ubiquitously adopted in the
Internet of Things and are becoming an integral part of our daily life. When
tackling evolving learning tasks in the real world, such as classifying
different types of objects, DNNs face the challenge of continually retraining
themselves according to the tasks on different edge devices. Federated
continual learning is a promising technique that offers partial solutions but
has yet to overcome the following difficulties: the significant accuracy loss
due to the limited on-device processing, the negative knowledge transfer caused
by the limited communication of non-IID data, and the limited scalability with
respect to the number of tasks and edge devices. In this paper, we propose
FedKNOW, an accurate and scalable
federated continual learning framework, via a novel concept of signature task
knowledge. FedKNOW is a client-side solution that continuously extracts and
integrates the knowledge of signature tasks which are highly influenced by the
current task. Each client of FedKNOW is composed of a knowledge extractor, a
gradient restorer and, most importantly, a gradient integrator. When training
on a new task, the gradient integrator prevents catastrophic forgetting and
mitigates negative knowledge transfer by effectively combining signature tasks
identified from past local tasks and from other clients' current tasks via the
global model. We implement FedKNOW in
PyTorch and extensively evaluate it against state-of-the-art techniques using
popular federated continual learning benchmarks. Evaluation results on
heterogeneous edge devices show that FedKNOW improves model accuracy by 63.24%
without increasing model training time, reduces communication cost by 34.28%,
and achieves even larger improvements in difficult scenarios such as large
numbers of tasks or clients and the training of different complex networks.
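Taking the abstract's description at face value (a gradient restorer supplying signature-task gradients and a gradient integrator that reconciles them with the current task and the global model), a minimal PyTorch sketch of such an integration step might look like the following. The projection rule, the function name integrate_gradients, and the flat-vector gradient representation are illustrative assumptions, not the paper's published algorithm.

```python
import torch

def integrate_gradients(g_new, signature_grads, eps=1e-12):
    """Hypothetical gradient-integration step (a GEM-style projection sketch,
    not FedKNOW's published rule): whenever the current-task gradient g_new
    conflicts with a restored signature-task gradient, drop the conflicting
    component so past knowledge is not overwritten."""
    g = g_new.clone()
    for g_sig in signature_grads:
        dot = torch.dot(g, g_sig)
        if dot < 0:  # conflicting direction: forgetting / negative transfer
            g = g - (dot / (torch.dot(g_sig, g_sig) + eps)) * g_sig
    return g

# Usage sketch: flatten per-parameter gradients into 1-D vectors before calling.
# signature_grads would be produced by the client's gradient restorer (past
# local tasks) plus a gradient derived from the global model (other clients).
```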
Related papers
- Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models [79.28821338925947]
Domain-Class Incremental Learning is a realistic but challenging continual learning scenario.
To handle these diverse tasks, pre-trained Vision-Language Models (VLMs) are introduced for their strong generalizability.
This incurs a new problem: the knowledge encoded in the pre-trained VLMs may be disturbed when adapting to new tasks, compromising their inherent zero-shot ability.
Existing methods tackle it by tuning VLMs with knowledge distillation on extra datasets, which demands heavy overhead.
We propose the Distribution-aware Interference-free Knowledge Integration (DIKI) framework, retaining the pre-trained knowledge of VLMs.
arXiv Detail & Related papers (2024-07-07T12:19:37Z)
- Sparse Training for Federated Learning with Regularized Error Correction [9.852567834643292]
Federated Learning (FL) has attracted much interest due to the significant advantages it brings to training deep neural network (DNN) models.
The proposed framework, FLARE, introduces a novel sparse training approach via accumulated pulling of the updated models with regularization on the embeddings in the FL process.
The performance of FLARE is validated through extensive experiments on diverse and complex models, achieving a remarkable sparsity level (10 times and more beyond the current state-of-the-art) along with significantly improved accuracy.
arXiv Detail & Related papers (2023-12-21T12:36:53Z)
- Addressing Client Drift in Federated Continual Learning with Adaptive Optimization [10.303676184878896]
We outline a framework for performing Federated Continual Learning (FCL) by using NetTailor as a candidate continual learning approach.
We show that adaptive federated optimization can reduce the adverse impact of client drift and showcase its effectiveness on CIFAR100, MiniImagenet, and Decathlon benchmarks.
arXiv Detail & Related papers (2022-03-24T20:00:03Z)
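For context, adaptive federated optimization usually means running an adaptive optimizer on the server over the averaged client updates. The sketch below illustrates a FedAdam-style server step under that assumption; the function and its parameters are hypothetical and not taken from the paper above.

```python
import torch

def server_adaptive_step(global_params, client_deltas, m, v,
                         lr=1e-2, beta1=0.9, beta2=0.99, eps=1e-3):
    """One adaptive server-side update in the spirit of FedAdam (a sketch, not
    the paper's exact optimizer). client_deltas[i] is the flattened difference
    (client_model_i - global_params) after local training on client i."""
    pseudo_grad = torch.stack(client_deltas).mean(dim=0)   # average client update
    m = beta1 * m + (1 - beta1) * pseudo_grad              # first moment
    v = beta2 * v + (1 - beta2) * pseudo_grad ** 2         # second moment
    new_params = global_params + lr * m / (v.sqrt() + eps)  # adaptive step
    return new_params, m, v
```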
- FedGradNorm: Personalized Federated Gradient-Normalized Multi-Task Learning [50.756991828015316]
Multi-task learning (MTL) is a novel framework to learn several tasks simultaneously with a single shared network.
We propose FedGradNorm, which uses a dynamic-weighting method to normalize gradient norms in order to balance learning speeds among different tasks.
arXiv Detail & Related papers (2022-03-24T17:43:12Z)
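The balancing idea behind FedGradNorm can be illustrated with a much simpler stand-in: rescale each task's gradient toward the mean norm before combining them, so that no task's learning speed dominates. This is a sketch of the general gradient-norm-balancing principle, not FedGradNorm's dynamic-weighting update.

```python
import torch

def balance_task_gradients(task_grads, eps=1e-12):
    """Simplified gradient-norm balancing: rescale each task's flattened
    gradient toward the mean norm so no single task dominates the shared
    update. FedGradNorm's dynamic weighting is more elaborate (it also tracks
    relative training rates); this only illustrates the balancing idea."""
    norms = torch.stack([g.norm() for g in task_grads])
    weights = norms.mean() / (norms + eps)  # tasks with small gradients get boosted
    return sum(w * g for w, g in zip(weights, task_grads)) / len(task_grads)
```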
- Fully Online Meta-Learning Without Task Boundaries [80.09124768759564]
We study how meta-learning can be applied to online problems where tasks arrive sequentially without explicit boundaries.
We propose a Fully Online Meta-Learning (FOML) algorithm, which does not require any ground truth knowledge about the task boundaries.
Our experiments show that FOML was able to learn new tasks faster than the state-of-the-art online learning methods.
arXiv Detail & Related papers (2022-02-01T07:51:24Z)
- Center Loss Regularization for Continual Learning [0.0]
In general, neural networks lack the ability to learn different tasks sequentially.
Our approach remembers old tasks by projecting the representations of new tasks close to those of old tasks.
We demonstrate that our approach is scalable, effective, and gives competitive performance compared to state-of-the-art continual learning methods.
arXiv Detail & Related papers (2021-10-21T17:46:44Z)
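A generic center-loss term makes the "keep new representations close to old ones" idea above concrete: each class has a learnable center, and embeddings are penalized for drifting away from their class center (freezing old-task centers when a new task starts is one simple continual-learning variant). The module below is a standard center-loss sketch, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Generic center-loss regularizer: one learnable center per class; the
    penalty is the squared distance between each embedding and its class
    center."""
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features, labels):
        # features: (batch, feat_dim); labels: (batch,) integer class ids
        return ((features - self.centers[labels]) ** 2).sum(dim=1).mean()

# total_loss = cross_entropy + lambda_center * center_loss(features, labels)
```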
- Multi-task Over-the-Air Federated Learning: A Non-Orthogonal Transmission Approach [52.85647632037537]
We propose a multi-task over-the-air federated learning (MOAFL) framework, where multiple learning tasks share edge devices for data collection and learning models under the coordination of an edge server (ES).
Both the convergence analysis and numerical results demonstrate that the MOAFL framework can significantly reduce the uplink bandwidth consumption of multiple tasks without causing substantial learning performance degradation.
arXiv Detail & Related papers (2021-06-27T13:09:32Z)
- Enabling Continual Learning with Differentiable Hebbian Plasticity [18.12749708143404]
Continual learning is the problem of sequentially learning new tasks or knowledge while protecting previously acquired knowledge.
Catastrophic forgetting poses a grand challenge for neural networks performing such a learning process.
We propose a Differentiable Hebbian Consolidation model built on Differentiable Hebbian Plasticity.
arXiv Detail & Related papers (2020-06-30T06:42:19Z)
- Federated Continual Learning with Weighted Inter-client Transfer [79.93004004545736]
We propose a novel federated continual learning framework, Federated Weighted Inter-client Transfer (FedWeIT).
FedWeIT decomposes the network weights into global federated parameters and sparse task-specific parameters, and each client receives selective knowledge from other clients.
We validate our FedWeIT against existing federated learning and continual learning methods, and our model significantly outperforms them with a large reduction in the communication cost.
arXiv Detail & Related papers (2020-03-06T13:33:48Z)
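FedWeIT's decomposition of weights into globally shared and sparse task-specific parts can be sketched as a layer whose effective weight is the sum of a federated base and a per-task additive term kept sparse with an L1 penalty. This is an illustrative reduction; the full method also masks the base weights and selectively transfers other clients' task-adaptive parameters.

```python
import torch
import torch.nn as nn

class DecomposedLinear(nn.Module):
    """Weight decomposition in the spirit of FedWeIT: a base weight shared and
    federated across clients plus a sparse task-specific additive term."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.base = nn.Parameter(0.01 * torch.randn(out_features, in_features))
        self.task_adaptive = nn.Parameter(torch.zeros(out_features, in_features))

    def forward(self, x):
        return x @ (self.base + self.task_adaptive).t()

    def sparsity_penalty(self):
        # L1 term keeps the task-specific part sparse, so only a small number
        # of parameters need to be communicated per task
        return self.task_adaptive.abs().sum()
```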
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the content (including all information) and accepts no responsibility for any consequences of its use.