Federated Class-Incremental Learning with Prompting
- URL: http://arxiv.org/abs/2310.08948v1
- Date: Fri, 13 Oct 2023 08:35:02 GMT
- Title: Federated Class-Incremental Learning with Prompting
- Authors: Jiale Liu, Yu-Wei Zhan, Chong-Yu Zhang, Xin Luo, Zhen-Duo Chen, Yinwei
Wei, and Xin-Shun Xu
- Abstract summary: We propose a novel method called Federated Class-Incremental Learning with PrompTing.
We encode the task-relevant and task-irrelevant knowledge into prompts, preserving the old and new knowledge of the local clients.
FCILPT achieves significant accuracy improvements over state-of-the-art methods.
- Score: 18.52169733483851
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As Web technology continues to develop, it has become increasingly common to
use data stored on different clients. At the same time, federated learning has
received widespread attention for its ability to protect data privacy while
letting models learn from data distributed across various clients.
However, most existing works assume that clients' data are fixed. In
real-world scenarios, this assumption rarely holds, as data may be
continuously generated and new classes may also appear. To this end, we focus
on the practical and challenging federated class-incremental learning (FCIL)
problem. In FCIL, the local and global models may suffer from catastrophic
forgetting of old classes caused by the arrival of new classes, and the data
distributions of clients are non-independent and identically distributed
(non-iid).
In this paper, we propose a novel method called Federated Class-Incremental
Learning with PrompTing (FCILPT). Given privacy constraints and limited memory,
FCILPT does not use a rehearsal-based buffer to keep exemplars of old data.
Instead, we use prompts to mitigate catastrophic forgetting of the old classes.
Specifically, we encode task-relevant and task-irrelevant knowledge into
prompts, preserving both the old and new knowledge of the local clients and
alleviating catastrophic forgetting. We also sort the task information in
the prompt pool on the local clients to align the task information across
clients before global aggregation. This ensures that the same task's knowledge
is fully integrated, addressing the non-iid problem caused by the lack of
classes among different clients in the same incremental task. Experiments on
CIFAR-100, Mini-ImageNet, and Tiny-ImageNet demonstrate that FCILPT achieves
significant accuracy improvements over the state-of-the-art methods.
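A minimal sketch of the prompt-pool alignment step described above, assuming a setup in which each client keeps one prompt per task keyed by a task identifier: pools are sorted by task id so that the same incremental task lines up across clients, and the server averages the aligned slots during global aggregation. This is an illustrative reconstruction rather than the authors' code, and all names (align_and_aggregate, EMBED_DIM, PROMPT_LEN) are hypothetical.

```python
import numpy as np

EMBED_DIM = 768   # assumed prompt embedding width (e.g., a ViT hidden size)
PROMPT_LEN = 5    # assumed number of prompt tokens per task

def align_and_aggregate(client_pools):
    """client_pools: list of dicts mapping task_id -> prompt array of shape
    (PROMPT_LEN, EMBED_DIM). Returns the globally aggregated prompt pool."""
    # Sort task ids so every client's pool is traversed in the same order,
    # i.e., the same incremental task is aligned across clients.
    all_tasks = sorted({t for pool in client_pools for t in pool})
    aggregated = {}
    for task_id in all_tasks:
        # Gather this task's prompt from every client that has seen the task.
        prompts = [pool[task_id] for pool in client_pools if task_id in pool]
        # Simple unweighted FedAvg-style average over the aligned prompts.
        aggregated[task_id] = np.mean(prompts, axis=0)
    return aggregated

# Toy usage: three clients with partially overlapping incremental tasks.
rng = np.random.default_rng(0)
pools = [
    {0: rng.normal(size=(PROMPT_LEN, EMBED_DIM))},
    {0: rng.normal(size=(PROMPT_LEN, EMBED_DIM)),
     1: rng.normal(size=(PROMPT_LEN, EMBED_DIM))},
    {1: rng.normal(size=(PROMPT_LEN, EMBED_DIM))},
]
global_pool = align_and_aggregate(pools)
print({task: prompt.shape for task, prompt in global_pool.items()})
```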
Related papers
- Masked Autoencoders are Parameter-Efficient Federated Continual Learners [6.184711584674839]
pMAE learns reconstructive prompts on the client side through image reconstruction using MAEs.
It reconstructs the uploaded restore information to capture the data distribution across previous tasks and different clients.
arXiv Detail & Related papers (2024-11-04T09:28:18Z)
- Federated Learning with Only Positive Labels by Exploring Label Correlations [78.59613150221597]
Federated learning aims to collaboratively learn a model by using the data from multiple users under privacy constraints.
In this paper, we study the multi-label classification problem under the federated learning setting.
We propose a novel and generic method termed Federated Averaging by exploring Label Correlations (FedALC).
arXiv Detail & Related papers (2024-04-24T02:22:50Z)
- Few-Shot Class Incremental Learning with Attention-Aware Self-Adaptive Prompt [58.880105981772324]
We propose a novel framework named Attention-aware Self-adaptive Prompt (ASP).
ASP encourages task-invariant prompts to capture shared knowledge by reducing specific information from the attention aspect.
In summary, ASP prevents overfitting on the base task and does not require enormous data in few-shot incremental tasks.
arXiv Detail & Related papers (2024-03-14T20:34:53Z)
- A Data-Free Approach to Mitigate Catastrophic Forgetting in Federated Class Incremental Learning for Vision Tasks [34.971800168823215]
This paper presents a framework for federated class incremental learning that utilizes a generative model to synthesize samples from past distributions.
To preserve privacy, the generative model is trained on the server using data-free methods at the end of each task without requesting data from clients.
arXiv Detail & Related papers (2023-11-13T22:21:27Z)
- Don't Memorize; Mimic The Past: Federated Class Incremental Learning Without Episodic Memory [36.4406505365313]
This paper presents a framework for federated class incremental learning that utilizes a generative model to synthesize samples from past distributions instead of storing part of past data.
The generative model is trained on the server using data-free methods at the end of each task without requesting data from clients.
arXiv Detail & Related papers (2023-07-02T07:06:45Z)
- Better Generative Replay for Continual Federated Learning [20.57194599280318]
Federated learning is a technique that enables a centralized server to learn from distributed clients via communications.
In this paper, we introduce the problem of continual federated learning, where clients incrementally learn new tasks and history data cannot be stored.
We propose our FedCIL model with two simple but effective solutions: model consolidation and consistency enforcement.
arXiv Detail & Related papers (2023-02-25T06:26:56Z)
- Few-Shot Class-Incremental Learning by Sampling Multi-Phase Tasks [59.12108527904171]
A model should recognize new classes and maintain discriminability over old classes.
The task of recognizing few-shot new classes without forgetting old classes is called few-shot class-incremental learning (FSCIL).
We propose a new paradigm for FSCIL based on meta-learning by LearnIng Multi-phase Incremental Tasks (LIMIT).
arXiv Detail & Related papers (2022-03-31T13:46:41Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Federated Semi-Supervised Learning with Inter-Client Consistency & Disjoint Learning [78.88007892742438]
We study two essential scenarios of Federated Semi-Supervised Learning (FSSL) based on the location of the labeled data.
We propose a novel method to tackle these problems, which we refer to as Federated Matching (FedMatch).
arXiv Detail & Related papers (2020-06-22T09:43:41Z)
- Federated Continual Learning with Weighted Inter-client Transfer [79.93004004545736]
We propose a novel federated continual learning framework, Federated Weighted Inter-client Transfer (FedWeIT).
FedWeIT decomposes the network weights into global federated parameters and sparse task-specific parameters, and each client receives selective knowledge from other clients (a rough sketch of this decomposition idea appears after this list).
We validate our FedWeIT against existing federated learning and continual learning methods, and our model significantly outperforms them with a large reduction in the communication cost.
arXiv Detail & Related papers (2020-03-06T13:33:48Z)
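The weight-decomposition idea mentioned for FedWeIT above can be pictured with a small sketch: effective per-task weights are composed from globally shared parameters and a sparse task-specific component. This is a simplified illustration under assumed shapes, not the paper's exact formulation, and every name here (compose_weights, base, mask, task_adaptive) is invented for the example.

```python
import numpy as np

def compose_weights(base, mask, task_adaptive):
    """Effective per-task weights: masked global base plus sparse task-specific part."""
    return base * mask + task_adaptive

rng = np.random.default_rng(0)
shape = (64, 64)
base = rng.normal(size=shape)                    # globally shared (federated) parameters
mask = (rng.random(shape) > 0.5).astype(float)   # per-task binary mask on the base
# Sparse task-adaptive parameters: roughly 10% of entries are nonzero.
task_adaptive = rng.normal(size=shape) * (rng.random(shape) > 0.9)
weights_for_task = compose_weights(base, mask, task_adaptive)
print(weights_for_task.shape, float((task_adaptive != 0).mean()))
```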