A Data-Free Approach to Mitigate Catastrophic Forgetting in Federated
Class Incremental Learning for Vision Tasks
- URL: http://arxiv.org/abs/2311.07784v2
- Date: Tue, 21 Nov 2023 08:23:31 GMT
- Title: A Data-Free Approach to Mitigate Catastrophic Forgetting in Federated
Class Incremental Learning for Vision Tasks
- Authors: Sara Babakniya, Zalan Fabian, Chaoyang He, Mahdi Soltanolkotabi,
Salman Avestimehr
- Abstract summary: This paper presents a framework for $\textbf{federated class incremental learning}$ that utilizes a generative model to synthesize samples from past distributions.
To preserve privacy, the generative model is trained on the server using data-free methods at the end of each task without requesting data from clients.
- Score: 34.971800168823215
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning models often suffer from forgetting previously learned
information when trained on new data. This problem is exacerbated in federated
learning (FL), where the data is distributed and can change independently for
each user. Many solutions have been proposed to address this catastrophic forgetting
in a centralized setting. However, they do not apply directly to FL because of
its unique complexities, such as privacy concerns and resource limitations. To
overcome these challenges, this paper presents a framework for
$\textbf{federated class incremental learning}$ that utilizes a generative
model to synthesize samples from past distributions. These synthetic samples can
later be used alongside the training data to mitigate catastrophic forgetting. To
preserve privacy, the generative model is trained on the server using data-free
methods at the end of each task without requesting data from clients. Moreover,
our solution does not require users to store old data or models, which gives
them the freedom to join/leave the training at any time. Additionally, we
introduce SuperImageNet, a new regrouping of the ImageNet dataset specifically
tailored for federated continual learning. We demonstrate significant
improvements compared to existing baselines through extensive experiments on
multiple datasets.
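To make the replay mechanism concrete, the following is a minimal PyTorch-style sketch of a client's local update that mixes real current-task batches with old-class samples drawn from a server-trained conditional generator. All names (generator, prev_model, local_loader, num_old_classes) and hyperparameters are illustrative assumptions rather than the paper's actual interface, and old classes are assumed to be indexed 0..num_old_classes-1.

```python
# Illustrative sketch only: names, signatures, and weights are assumptions,
# not the paper's actual implementation.
import torch
import torch.nn.functional as F

def local_update(model, generator, prev_model, local_loader, num_old_classes,
                 epochs=1, lr=0.01, replay_batch=32, device="cpu"):
    """One round of local training mixing current-task data with synthetic
    replay of previously seen classes."""
    model.train()
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x_new, y_new in local_loader:
            x_new, y_new = x_new.to(device), y_new.to(device)

            # Synthesize a batch of "past" data: sample noise and old-class
            # labels, then let the frozen conditional generator produce images.
            z = torch.randn(replay_batch, generator.latent_dim, device=device)
            y_old = torch.randint(0, num_old_classes, (replay_batch,), device=device)
            with torch.no_grad():
                x_old = generator(z, y_old)

            # Cross-entropy on both the real current-task batch and the replay batch.
            logits_old = model(x_old)
            loss = F.cross_entropy(model(x_new), y_new) \
                 + F.cross_entropy(logits_old, y_old)

            # Optional: distill old-class behavior from the previous global model
            # on the replayed data to further stabilize old logits.
            if prev_model is not None:
                with torch.no_grad():
                    teacher_logits = prev_model(x_old)
                loss = loss + F.kl_div(F.log_softmax(logits_old, dim=1),
                                       F.softmax(teacher_logits, dim=1),
                                       reduction="batchmean")

            opt.zero_grad()
            loss.backward()
            opt.step()
    return model.state_dict()
```

Because only the frozen generator and the previous global model travel to the client, no raw old data is stored on devices, which is what allows clients to join or leave the training at any time.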
Related papers
- Few-Shot Class-Incremental Learning with Non-IID Decentralized Data [12.472285188772544]
Few-shot class-incremental learning is crucial for developing scalable and adaptive intelligent systems.
This paper introduces federated few-shot class-incremental learning, a decentralized machine learning paradigm.
We present a synthetic data-driven framework that leverages replay buffer data to maintain existing knowledge and facilitate the acquisition of new knowledge.
arXiv Detail & Related papers (2024-09-18T02:48:36Z)
- MultiConfederated Learning: Inclusive Non-IID Data handling with Decentralized Federated Learning [1.2726316791083532]
Federated Learning (FL) has emerged as a prominent privacy-preserving technique for enabling use cases like confidential clinical machine learning.
FL operates by aggregating models trained on remote devices that own the data.
We propose MultiConfederated Learning: a decentralized FL framework which is designed to handle non-IID data.
arXiv Detail & Related papers (2024-04-20T16:38:26Z)
- TOFU: A Task of Fictitious Unlearning for LLMs [99.92305790945507]
Large language models trained on massive corpora of data from the web can reproduce sensitive or private data, raising both legal and ethical concerns.
Unlearning, or tuning models to forget information present in their training data, provides us with a way to protect private data after training.
We present TOFU, a benchmark aimed at helping deepen our understanding of unlearning.
arXiv Detail & Related papers (2024-01-11T18:57:12Z)
- Don't Memorize; Mimic The Past: Federated Class Incremental Learning Without Episodic Memory [36.4406505365313]
This paper presents a framework for federated class incremental learning that utilizes a generative model to synthesize samples from past distributions instead of storing part of past data.
The generative model is trained on the server using data-free methods at the end of each task without requesting data from clients (see the sketch after this list for one possible data-free training recipe).
arXiv Detail & Related papers (2023-07-02T07:06:45Z)
- Better Generative Replay for Continual Federated Learning [20.57194599280318]
Federated learning is a technique that enables a centralized server to learn from distributed clients via communications.
In this paper, we introduce the problem of continual federated learning, where clients incrementally learn new tasks and history data cannot be stored.
We propose our FedCIL model with two simple but effective solutions: model consolidation and consistency enforcement.
arXiv Detail & Related papers (2023-02-25T06:26:56Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and back propagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- DYNAFED: Tackling Client Data Heterogeneity with Global Dynamics [60.60173139258481]
Local training on non-IID distributed data results in a deflected local optimum.
A natural solution is to gather all client data onto the server, such that the server has a global view of the entire data distribution.
In this paper, we put forth an idea to collect and leverage global knowledge on the server without hindering data privacy.
arXiv Detail & Related papers (2022-11-20T06:13:06Z)
- FedOS: using open-set learning to stabilize training in federated learning [0.0]
Federated Learning is a new approach to train statistical models on distributed datasets without violating privacy constraints.
This report explores this new research area and performs several experiments to deepen our understanding of the challenges it presents.
We present a novel approach to one of these challenges and compare it to other methods found in literature.
arXiv Detail & Related papers (2022-08-22T19:53:39Z)
- FEDIC: Federated Learning on Non-IID and Long-Tailed Data via Calibrated Distillation [54.2658887073461]
Dealing with non-IID data is one of the most challenging problems for federated learning.
This paper studies the joint problem of non-IID and long-tailed data in federated learning and proposes a corresponding solution called Federated Ensemble Distillation with Imbalance Calibration (FEDIC).
FEDIC uses model ensemble to take advantage of the diversity of models trained on non-IID data.
arXiv Detail & Related papers (2022-04-30T06:17:36Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- DeGAN : Data-Enriching GAN for Retrieving Representative Samples from a Trained Classifier [58.979104709647295]
We bridge the gap between the abundance of available data and the lack of relevant data for the future learning tasks of a trained network.
We use the available data, which may be an imbalanced subset of the original training dataset or a related domain dataset, to retrieve representative samples.
We demonstrate that data from a related domain can be leveraged to achieve state-of-the-art performance.
arXiv Detail & Related papers (2019-12-27T02:05:45Z)
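For the server-side step referenced above (in the abstract and in the "Don't Memorize; Mimic The Past" entry), here is a hedged sketch of one common data-free recipe for fitting the generator against the frozen global classifier. It follows a generic data-free knowledge-distillation style objective, so the paper's exact losses may differ; all names, signatures, and weights are placeholders.

```python
# Illustrative sketch of data-free generator training on the server; the exact
# objective in the paper may differ. No client data is used at any point.
import torch
import torch.nn.functional as F

def train_generator_data_free(generator, global_model, num_classes,
                              steps=2000, batch_size=64, lr=1e-3, device="cpu"):
    """Fit a conditional generator so the frozen global classifier assigns the
    requested label to each synthetic image."""
    global_model.eval()                       # classifier parameters stay fixed
    generator.train()
    opt = torch.optim.Adam(generator.parameters(), lr=lr)

    for _ in range(steps):
        z = torch.randn(batch_size, generator.latent_dim, device=device)
        y = torch.randint(0, num_classes, (batch_size,), device=device)
        x_syn = generator(z, y)
        logits = global_model(x_syn)

        # 1) Confidence: the classifier should predict the requested class.
        ce = F.cross_entropy(logits, y)

        # 2) Diversity: keep the batch-averaged prediction close to uniform so
        #    the generator does not collapse onto a few easy classes
        #    (KL between the mean prediction and the uniform distribution).
        mean_prob = F.softmax(logits, dim=1).mean(dim=0)
        div = (mean_prob * torch.log(mean_prob.clamp_min(1e-8) * num_classes)).sum()

        loss = ce + 0.1 * div                 # 0.1 is a placeholder weight
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator
```

At the end of each task, the server would run a routine like this against the newly aggregated global model and then broadcast the resulting generator for replay in the next task's local updates.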