Federated Continual Learning through distillation in pervasive computing
- URL: http://arxiv.org/abs/2207.08181v1
- Date: Sun, 17 Jul 2022 13:55:20 GMT
- Title: Federated Continual Learning through distillation in pervasive computing
- Authors: Anastasiia Usmanova, François Portet, Philippe Lalanda, German Vega
- Abstract summary: Federated Learning has been introduced as a new machine learning paradigm enhancing the use of local devices.
Current solutions rely on the availability of large amounts of stored data at the client side in order to fine-tune the models sent by the server.
This proposal has been evaluated in the Human Activity Recognition (HAR) domain and has been shown to effectively reduce the catastrophic forgetting effect.
- Score: 0.2519906683279153
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning has been introduced as a new machine learning paradigm
enhancing the use of local devices. At a server level, FL regularly aggregates
models learned locally on distributed clients to obtain a more general model.
Current solutions rely on the availability of large amounts of stored data at
the client side in order to fine-tune the models sent by the server. Such a
setting is not realistic in mobile pervasive computing, where data storage must
be kept low and data characteristics can change dramatically. To account for
this variability, a solution is to use the data regularly collected by the
client to progressively adapt the received model. But such a naive approach
exposes clients to the well-known problem of catastrophic forgetting. To
address this problem, we have defined a Federated Continual Learning approach
which is mainly based on distillation. Our approach allows a better use of
resources, eliminating the need to retrain from scratch when new data arrive
and reducing memory usage by limiting the amount of data to be stored.
This proposal has been evaluated in the Human Activity Recognition (HAR) domain
and has been shown to effectively reduce the catastrophic forgetting effect.
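The paper itself does not include code, so the following is only a minimal sketch of a distillation-regularized local update of the kind described above, written in PyTorch. The function and parameter names (local_update, kd_weight, temperature) and the choice of using a frozen copy of the model received from the server as the distillation teacher are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch (assumption-laden, not the authors' code) of a
# distillation-regularized local update for federated continual learning.
# The client keeps a frozen copy of the model received from the server and
# distils its predictions into the locally adapted model, so that learning
# on newly collected data does not erase previously acquired knowledge.
import copy
import torch
import torch.nn.functional as F


def local_update(model, server_model, loader, epochs=1, lr=1e-3,
                 kd_weight=1.0, temperature=2.0):
    """Adapt `model` on new local data while distilling from `server_model`."""
    teacher = copy.deepcopy(server_model).eval()   # frozen teacher (old knowledge)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)

    for _ in range(epochs):
        for x, y in loader:                        # newly collected samples only
            optimizer.zero_grad()
            logits = model(x)

            # Standard supervised loss on the new data.
            ce_loss = F.cross_entropy(logits, y)

            # Distillation loss: keep the student close to the teacher's
            # softened predictions to limit catastrophic forgetting.
            with torch.no_grad():
                teacher_logits = teacher(x)
            kd_loss = F.kl_div(
                F.log_softmax(logits / temperature, dim=1),
                F.softmax(teacher_logits / temperature, dim=1),
                reduction="batchmean",
            ) * temperature ** 2

            (ce_loss + kd_weight * kd_loss).backward()
            optimizer.step()
    return model
```

In a full FL round, the adapted model would then be returned to the server for the usual aggregation step (e.g., FedAvg); only the distillation term is specific to the continual-learning part of the approach.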
Related papers
- PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection [51.20479454379662]
Motivated by increasing privacy concerns, we propose a parameter-efficient Federated Anomaly Detection framework named PeFAD.
We conduct extensive evaluations on four real datasets, where PeFAD outperforms existing state-of-the-art baselines by up to 28.74%.
arXiv Detail & Related papers (2024-06-04T13:51:08Z) - Federated Continual Learning Goes Online: Uncertainty-Aware Memory Management for Vision Tasks and Beyond [13.867793835583463]
We propose an uncertainty-aware memory-based approach to solve catastrophic forgetting.
We retrieve samples with specific characteristics and, by retraining the model on them, demonstrate the potential of this approach.
arXiv Detail & Related papers (2024-05-29T09:29:39Z) - An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z) - Don't Memorize; Mimic The Past: Federated Class Incremental Learning
Without Episodic Memory [36.4406505365313]
This paper presents a framework for federated class incremental learning that utilizes a generative model to synthesize samples from past distributions instead of storing part of past data.
The generative model is trained on the server using data-free methods at the end of each task without requesting data from clients.
arXiv Detail & Related papers (2023-07-02T07:06:45Z) - Improving information retention in large scale online continual learning [99.73847522194549]
Online continual learning (OCL) aims to adapt efficiently to new data while retaining existing knowledge.
Recent work suggests that information retention remains a problem in large scale OCL even when the replay buffer is unlimited.
We propose using a moving average family of methods to improve optimization for non-stationary objectives.
arXiv Detail & Related papers (2022-10-12T16:59:43Z) - Federated Learning and catastrophic forgetting in pervasive computing:
demonstration in HAR domain [0.2519906683279153]
Federated learning has been introduced as a new machine learning paradigm enhancing the use of local devices.
Current solutions rely on the availability of large amounts of stored data at the client side in order to fine-tune the models sent by the server.
The purpose of this paper is to demonstrate this problem in the mobile human activity recognition context on smartphones.
arXiv Detail & Related papers (2022-07-17T13:53:28Z) - Acceleration of Federated Learning with Alleviated Forgetting in Local
Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z) - Local Learning Matters: Rethinking Data Heterogeneity in Federated
Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z) - Data Selection for Efficient Model Update in Federated Learning [0.07614628596146598]
We propose to reduce the amount of local data that is needed to train a global model.
We do this by splitting the model into a lower part for generic feature extraction and an upper part that is more sensitive to the characteristics of the local data (a generic sketch of this split is given after this list).
Our experiments show that less than 1% of the local data can transfer the characteristics of the client data to the global model.
arXiv Detail & Related papers (2021-11-05T14:07:06Z) - A Bayesian Federated Learning Framework with Online Laplace
Approximation [144.7345013348257]
Federated learning allows multiple clients to collaboratively learn a globally shared model.
We propose a novel FL framework that uses online Laplace approximation to approximate posteriors on both the client and server side.
We achieve state-of-the-art results on several benchmarks, clearly demonstrating the advantages of the proposed method.
arXiv Detail & Related papers (2021-02-03T08:36:58Z)