Federated Learning and catastrophic forgetting in pervasive computing:
demonstration in HAR domain
- URL: http://arxiv.org/abs/2207.08180v1
- Date: Sun, 17 Jul 2022 13:53:28 GMT
- Title: Federated Learning and catastrophic forgetting in pervasive computing:
demonstration in HAR domain
- Authors: Anastasiia Usmanova, François Portet, Philippe Lalanda, German Vega
- Abstract summary: Federated learning has been introduced as a new machine learning paradigm enhancing the use of local devices.
Current solutions rely on the availability of large amounts of stored data at the client side in order to fine-tune the models sent by the server.
The purpose of this paper is to demonstrate this problem in the mobile human activity recognition context on smartphones.
- Score: 0.2519906683279153
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning has been introduced as a new machine learning paradigm
enhancing the use of local devices. At a server level, FL regularly aggregates
models learned locally on distributed clients to obtain a more general model.
In this way, no private data is sent over the network, and the communication
cost is reduced. However, current solutions rely on the availability of large
amounts of stored data at the client side in order to fine-tune the models sent
by the server. Such a setting is not realistic in mobile pervasive computing,
where data storage must be kept low and data characteristics (distributions) can
change dramatically. To account for this variability, a solution is to use the
data regularly collected by the client to progressively adapt the received
model. But such a naive approach exposes clients to the well-known problem of
catastrophic forgetting. The purpose of this paper is to demonstrate this
problem in the mobile human activity recognition context on smartphones.
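The paper demonstrates the problem empirically rather than in code, but the setting is easy to sketch. Below is a minimal, hypothetical Python illustration (not the authors' implementation) of standard FedAvg aggregation together with the naive client-side adaptation the abstract warns about: fine-tuning on only the most recently collected activity data.

```python
# Minimal sketch (not the authors' code). Names and the sgd_step callback
# are illustrative placeholders.
from typing import Callable, Dict, List
import numpy as np

Weights = Dict[str, np.ndarray]

def fedavg(client_weights: List[Weights], client_sizes: List[int]) -> Weights:
    """Server step: average client models, weighted by local sample counts."""
    total = float(sum(client_sizes))
    return {
        k: sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in client_weights[0].keys()
    }

def naive_local_update(weights: Weights, recent_batch,
                       sgd_step: Callable) -> Weights:
    """Client step: fine-tune ONLY on the most recently collected data.
    Because old data is discarded (low storage), gradients reflect only the
    current activity distribution; classes absent from `recent_batch` are
    progressively overwritten -- catastrophic forgetting."""
    for _ in range(5):  # a few local epochs on the fresh stream
        weights = sgd_step(weights, recent_batch)
    return weights
```

Activities absent from the recent stream receive no gradient signal, so their decision boundaries are gradually overwritten, which is the failure mode the paper demonstrates.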
Related papers
- Modality Alignment Meets Federated Broadcasting [9.752555511824593]
Federated learning (FL) has emerged as a powerful approach to safeguard data privacy by training models across distributed edge devices without centralizing local data.
This paper introduces a novel FL framework leveraging modality alignment, where a text encoder resides on the server, and image encoders operate on local devices.
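The summary names the architecture (server-side text encoder, client-side image encoders) but not the training objective; a common choice for aligning the two modalities is a CLIP-style contrastive loss. The sketch below assumes that recipe and is not taken from the paper:

```python
import torch
import torch.nn.functional as F

def clip_style_alignment_loss(img_emb: torch.Tensor,
                              txt_emb: torch.Tensor,
                              temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss pulling matched image/text pairs together.
    img_emb: [B, D] from a client-side image encoder.
    txt_emb: [B, D] broadcast by the server-side text encoder."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature  # [B, B] pairwise similarities
    targets = torch.arange(img.size(0), device=img.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```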
arXiv Detail & Related papers (2024-11-24T13:30:03Z)
- Continual Adaptation of Vision Transformers for Federated Learning [26.64725725128934]
We focus on the important yet understudied problem of Continual Federated Learning (CFL), where a server communicates with a set of clients to incrementally learn new concepts.
Existing attempts at this problem tend to impose large overheads on clients and communication channels.
We propose a novel and lightweight generation and distillation scheme to consolidate client models at the server.
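The scheme is only named here; a generic form of server-side consolidation is to distill an ensemble of received client models into the global model on generated or proxy inputs. A hedged sketch under that assumption (the paper's actual generator and losses may differ):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ensemble_logits(client_models, x):
    """Average the client models' predictions on server-side inputs."""
    return torch.stack([m(x) for m in client_models]).mean(dim=0)

def distill_step(server_model, client_models, x_gen, optimizer, T=2.0):
    """One knowledge-distillation step: match the server model's softened
    predictions to the client ensemble on generated/proxy inputs x_gen,
    so no client data ever reaches the server."""
    teacher = F.log_softmax(ensemble_logits(client_models, x_gen) / T, dim=-1)
    student = F.log_softmax(server_model(x_gen) / T, dim=-1)
    loss = F.kl_div(student, teacher, log_target=True,
                    reduction="batchmean") * T * T
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```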
arXiv Detail & Related papers (2023-06-16T17:02:12Z)
- DYNAFED: Tackling Client Data Heterogeneity with Global Dynamics [60.60173139258481]
Local training on non-iid distributed data results in a deflected local optimum.
A natural solution is to gather all client data onto the server, such that the server has a global view of the entire data distribution.
In this paper, we put forth an idea to collect and leverage global knowledge on the server without hindering data privacy.
arXiv Detail & Related papers (2022-11-20T06:13:06Z)
- Federated Continual Learning through distillation in pervasive computing [0.2519906683279153]
Federated Learning has been introduced as a new machine learning paradigm enhancing the use of local devices.
Current solutions rely on the availability of large amounts of stored data at the client side in order to fine-tune the models sent by the server.
This proposal has been evaluated in the Human Activity Recognition (HAR) domain and has been shown to effectively reduce the catastrophic forgetting effect.
arXiv Detail & Related papers (2022-07-17T13:55:20Z)
- Applied Federated Learning: Architectural Design for Robust and Efficient Learning in Privacy Aware Settings [0.8454446648908585]
The classical machine learning paradigm requires the aggregation of user data in a central location.
Centralization of data poses risks, including a heightened risk of internal and external security incidents.
Federated learning with differential privacy is designed to avoid the server-side centralization pitfall.
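For the differential-privacy side, the standard DP-FedAvg recipe is to clip each client's update and add calibrated Gaussian noise at the server. A minimal sketch with hypothetical clip/noise parameters, assuming weights flattened to 1-D vectors:

```python
import numpy as np

def dp_fedavg(global_w: np.ndarray, client_ws: list,
              clip_norm: float = 1.0, noise_mult: float = 1.0,
              rng=None) -> np.ndarray:
    """DP-FedAvg-style aggregation: clip each client's model delta to
    `clip_norm`, average, then add Gaussian noise scaled by `noise_mult`."""
    rng = rng or np.random.default_rng()
    deltas = [cw - global_w for cw in client_ws]          # per-client updates
    clipped = [d * min(1.0, clip_norm / (np.linalg.norm(d) + 1e-12))
               for d in deltas]
    mean_delta = np.mean(clipped, axis=0)
    sigma = noise_mult * clip_norm / len(client_ws)       # noise stddev
    return global_w + mean_delta + rng.normal(0.0, sigma,
                                              size=mean_delta.shape)
```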
arXiv Detail & Related papers (2022-06-02T00:30:04Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
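FedReg's exact objective (which involves generated pseudo-data) is not reproduced here; the sketch below shows only the generic idea of alleviating forgetting by penalizing local drift away from the global model, i.e., a FedProx-style proximal term used as a stand-in:

```python
import torch

def local_loss_with_proximal(task_loss: torch.Tensor,
                             model: torch.nn.Module,
                             global_params: dict,
                             mu: float = 0.1) -> torch.Tensor:
    """Generic anti-forgetting regularizer (a FedProx-style stand-in, NOT
    the exact FedReg objective): penalize the squared distance to the
    server's global parameters so local steps cannot drift arbitrarily
    far from the shared knowledge."""
    prox = sum(((p - global_params[name].detach()) ** 2).sum()
               for name, p in model.named_parameters())
    return task_loss + 0.5 * mu * prox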
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Data Selection for Efficient Model Update in Federated Learning [0.07614628596146598]
We propose to reduce the amount of local data that is needed to train a global model.
We do this by splitting the model into a lower part for generic feature extraction and an upper part that is more sensitive to the characteristics of the local data.
Our experiments show that less than 1% of the local data can transfer the characteristics of the client data to the global model.
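A minimal sketch of that split, assuming a PyTorch model with hypothetical `features` (generic lower part) and `head` (data-sensitive upper part) attributes:

```python
import torch
import torch.nn.functional as F

def select_and_update(model: torch.nn.Module, local_loader,
                      budget: float = 0.01) -> None:
    """Train only the data-sensitive upper part on a small local subset.
    `model.features` = generic lower layers, `model.head` = upper layers
    (illustrative attribute names). `budget` mirrors the finding that
    under 1% of local data can carry the client's characteristics."""
    for p in model.features.parameters():  # keep the generic extractor fixed
        p.requires_grad = False
    opt = torch.optim.SGD(model.head.parameters(), lr=0.01)
    n_batches = max(1, int(budget * len(local_loader)))
    for i, (x, y) in enumerate(local_loader):
        if i >= n_batches:
            break
        loss = F.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
```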
arXiv Detail & Related papers (2021-11-05T14:07:06Z)
- Federated Multi-Target Domain Adaptation [99.93375364579484]
Federated learning methods enable us to train machine learning models on distributed user data while preserving its privacy.
We consider a more practical scenario where the distributed client data is unlabeled, and a centralized labeled dataset is available on the server.
We propose an effective DualAdapt method to address the new challenges.
arXiv Detail & Related papers (2021-08-17T17:53:05Z)
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
- A Bayesian Federated Learning Framework with Online Laplace Approximation [144.7345013348257]
Federated learning allows multiple clients to collaboratively learn a globally shared model.
We propose a novel FL framework that uses online Laplace approximation to approximate posteriors on both the client and server side.
We achieve state-of-the-art results on several benchmarks, clearly demonstrating the advantages of the proposed method.
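For Gaussian (Laplace) approximations, a natural server-side aggregate is the product of client posteriors: precisions add, and the mean is precision-weighted. A sketch under the simplifying assumptions of diagonal precisions and flattened parameters (not necessarily the paper's exact update):

```python
import numpy as np

def aggregate_gaussian_posteriors(means, precisions):
    """Combine per-client Laplace (Gaussian) posteriors N(mu_i, P_i^-1)
    by taking their product: P = sum_i P_i, mu = P^-1 * sum_i P_i mu_i.
    Assumes diagonal precisions and parameters flattened to 1-D arrays."""
    P = np.sum(precisions, axis=0)                              # total precision
    mu = np.sum([p * m for p, m in zip(precisions, means)], axis=0) / P
    return mu, P
```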
arXiv Detail & Related papers (2021-02-03T08:36:58Z)
- Think Locally, Act Globally: Federated Learning with Local and Global Representations [92.68484710504666]
Federated learning is a method of training models on private data distributed over multiple devices.
We propose a new federated learning algorithm that jointly learns compact local representations on each device and a global model across all devices.
We also evaluate on the task of personalized mood prediction from real-world mobile data where privacy is key.
arXiv Detail & Related papers (2020-01-06T12:40:21Z)
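A minimal sketch of the local/global split described in that last entry: each device keeps a private representation module and only the shared head is uploaded for aggregation; all names are illustrative.

```python
import copy
import torch

class SplitModel(torch.nn.Module):
    """Per-device model: `local_rep` stays on the device (never uploaded),
    while `global_head` is the part federated averaging aggregates."""
    def __init__(self, local_rep: torch.nn.Module,
                 global_head: torch.nn.Module):
        super().__init__()
        self.local_rep = local_rep      # compact, device-specific encoder
        self.global_head = global_head  # shared part, synced with the server

    def forward(self, x):
        return self.global_head(self.local_rep(x))

def upload_payload(model: SplitModel) -> dict:
    """Only the global head's weights leave the device."""
    return copy.deepcopy(model.global_head.state_dict())
```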
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.