Concept drift detection and adaptation for federated and continual
learning
- URL: http://arxiv.org/abs/2105.13309v1
- Date: Thu, 27 May 2021 17:01:58 GMT
- Title: Concept drift detection and adaptation for federated and continual
learning
- Authors: Fernando E. Casado, Dylan Lema, Marcos F. Criado, Roberto Iglesias,
Carlos V. Regueiro, Senén Barro
- Abstract summary: Smart devices can collect vast amounts of data from their environment.
This data is suitable for training machine learning models, which can significantly improve their behavior.
In this work, we present a new method, called Concept-Drift-Aware Federated Averaging.
- Score: 55.41644538483948
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Smart devices, such as smartphones, wearables, robots, and others, can
collect vast amounts of data from their environment. This data is suitable for
training machine learning models, which can significantly improve their
behavior, and therefore, the user experience. Federated learning is a young and
popular framework that allows multiple distributed devices to train deep
learning models collaboratively while preserving data privacy. Nevertheless,
this approach may not be optimal for scenarios where data distribution is
non-identical among the participants or changes over time, causing what is
known as concept drift. Little research has yet been done in this field, but
this kind of situation is quite frequent in real life and poses new challenges
to both continual and federated learning. Therefore, in this work, we present a
new method, called Concept-Drift-Aware Federated Averaging (CDA-FedAvg). Our
proposal is an extension of the most popular federated algorithm, Federated
Averaging (FedAvg), enhancing it for continual adaptation under concept drift.
We empirically demonstrate the weaknesses of regular FedAvg and prove that
CDA-FedAvg outperforms it in this type of scenario.
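For readers unfamiliar with the baseline, the sketch below shows a generic FedAvg round extended with a simple per-client, loss-based drift check. It is a minimal illustration only: the Client class, the detector, its thresholds, and the extra adaptation step are hypothetical placeholders, not the actual CDA-FedAvg procedure, whose details are not given in the abstract.

# Illustrative sketch only: FedAvg plus a hypothetical per-client drift check.
import random
from dataclasses import dataclass, field
from typing import List


@dataclass
class Client:
    weights: List[float]
    recent_losses: List[float] = field(default_factory=list)

    def local_train(self, global_weights: List[float]) -> List[float]:
        # Stand-in for real local training: start from the global weights,
        # apply a small noisy update, and record a dummy loss value.
        self.weights = [w + random.uniform(-0.01, 0.01) for w in global_weights]
        self.recent_losses.append(random.uniform(0.1, 1.0))
        return self.weights

    def drift_suspected(self, window: int = 5, factor: float = 1.5) -> bool:
        # Hypothetical detector: flag drift when the latest loss rises well
        # above the average of the preceding window.
        if len(self.recent_losses) <= window:
            return False
        baseline = sum(self.recent_losses[-window - 1:-1]) / window
        return self.recent_losses[-1] > factor * baseline


def fedavg_round(global_weights: List[float], clients: List[Client]) -> List[float]:
    updates = []
    for client in clients:
        w = client.local_train(global_weights)
        if client.drift_suspected():
            # On suspected drift, let the client adapt further before uploading.
            w = client.local_train(w)
        updates.append(w)
    # FedAvg aggregation: element-wise mean of the client weights.
    return [sum(ws) / len(ws) for ws in zip(*updates)]


if __name__ == "__main__":
    global_w = [0.0] * 4
    clients = [Client(weights=list(global_w)) for _ in range(3)]
    for _ in range(10):
        global_w = fedavg_round(global_w, clients)
    print(global_w)

In a real deployment each client would train a neural network on its own data, and the server would weight the average by each client's local dataset size, as standard FedAvg does.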
Related papers
- Enhancing Multiple Reliability Measures via Nuisance-extended
Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many predictive signals in the data can arise from biases in data acquisition rather than from the task itself.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z) - Combating Exacerbated Heterogeneity for Robust Models in Federated
Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z) - Evaluation and comparison of federated learning algorithms for Human
Activity Recognition on smartphones [0.5039813366558306]
Federated Learning (FL) has been introduced as a new machine learning paradigm enhancing the use of local devices.
In this paper, we propose a new FL algorithm, termed FedDist, which can modify models during training by identifying dissimilarities between neurons among the clients.
Results have shown the ability of FedDist to adapt to heterogeneous data and the capability of FL to deal with asynchronous situations.
arXiv Detail & Related papers (2022-10-30T18:47:23Z) - Non-IID data and Continual Learning processes in Federated Learning: A
long road ahead [58.720142291102135]
Federated Learning is a novel framework that allows multiple devices or institutions to train a machine learning model collaboratively while keeping their data private.
In this work, we formally classify data statistical heterogeneity and review the most remarkable learning strategies that are able to face it.
At the same time, we introduce approaches from other machine learning frameworks, such as Continual Learning, that also deal with data heterogeneity and could be easily adapted to the Federated Learning settings.
arXiv Detail & Related papers (2021-11-26T09:57:11Z) - Asynchronous Federated Learning for Sensor Data with Concept Drift [17.390098048134195]
Federated learning (FL) involves multiple distributed devices jointly training a shared model.
Most previous FL approaches assume that data on devices are fixed and stationary during the training process.
Concept drift makes the learning process complicated because of the inconsistency between existing and upcoming data.
We propose a novel approach, FedConD, to detect and deal with the concept drift on local devices.
arXiv Detail & Related papers (2021-09-01T02:06:42Z) - SCEI: A Smart-Contract Driven Edge Intelligence Framework for IoT
Systems [15.796325306292134]
Federated learning (FL) enables collaborative training of a shared model on edge devices while maintaining data privacy.
Various personalized approaches have been proposed, but such approaches fail to handle underlying shifts in data distribution.
This paper presents a dynamically optimized personal deep learning scheme based on blockchain and federated learning.
arXiv Detail & Related papers (2021-03-12T02:57:05Z) - WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z) - Federated and continual learning for classification tasks in a society
of devices [59.45414406974091]
Light Federated and Continual Consensus (LFedCon2) is a new federated and continual architecture that uses light, traditional learners.
Our method allows powerless devices (such as smartphones or robots) to learn in real time, locally, continuously, autonomously and from users.
In order to test our proposal, we have applied it in a heterogeneous community of smartphone users to solve the problem of walking recognition.
arXiv Detail & Related papers (2020-06-12T12:37:03Z) - Faster On-Device Training Using New Federated Momentum Algorithm [47.187934818456604]
Mobile crowdsensing has gained significant attention in recent years and has become a critical paradigm for emerging Internet of Things applications.
To utilize these data to train machine learning models while not compromising user privacy, federated learning has become a promising solution.
arXiv Detail & Related papers (2020-02-06T04:12:43Z)