FedOS: using open-set learning to stabilize training in federated
learning
- URL: http://arxiv.org/abs/2208.11512v1
- Date: Mon, 22 Aug 2022 19:53:39 GMT
- Title: FedOS: using open-set learning to stabilize training in federated
learning
- Authors: Mohamad Mohamad, Julian Neubert, Juan Segundo Ayardo
- Abstract summary: Federated Learning is a new approach to train statistical models on distributed datasets without violating privacy constraints.
This report explores this new research area and performs several experiments to deepen our understanding of what these challenges are.
We present a novel approach to one of these challenges and compare it to other methods found in the literature.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning is a recent approach to train statistical models on
distributed datasets without violating privacy constraints. The data locality
principle is preserved by sharing the model instead of the data between clients
and the server. This brings many advantages but also poses new challenges. In
this report, we explore this new research area and perform several experiments
to deepen our understanding of what these challenges are and how different
problem settings affect the performance of the final model. Finally, we present
a novel approach to one of these challenges and compare it to other methods
found in the literature.
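
The data-locality principle described in the abstract can be made concrete in a few lines of code. The following is a minimal sketch of a generic FedAvg-style round, not the paper's FedOS method: each client trains a private copy of the model and uploads only its weights, which the server averages. All names are illustrative, and the unweighted mean assumes roughly equal-sized client datasets.

```python
import copy
import torch

def local_update(model, loader, epochs=1, lr=0.01):
    """Train a private copy of the global model on one client's local data."""
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()  # only the weights leave the client

def federated_round(global_model, client_loaders):
    """One FedAvg-style round: average client weights, never client data."""
    states = [local_update(global_model, dl) for dl in client_loaders]
    avg = {k: torch.stack([s[k].float() for s in states]).mean(dim=0)
           for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model
```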
Related papers
- Accelerated Stochastic ExtraGradient: Mixing Hessian and Gradient Similarity to Reduce Communication in Distributed and Federated Learning [50.382793324572845]
Distributed computing involves communication between devices, which requires solving two key problems: efficiency and privacy.
In this paper, we analyze a new method that incorporates the ideas of using data similarity and clients sampling.
To address privacy concerns, we apply the technique of additional noise and analyze its impact on the convergence of the proposed method.
arXiv Detail & Related papers (2024-09-22T00:49:10Z)
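
The additive-noise technique mentioned in the entry above is commonly realized by clipping each client update and perturbing it with Gaussian noise before it is shared. The sketch below shows that generic pattern, not the paper's exact mechanism; `clip_norm` and `noise_std` are illustrative parameters, and `update` is assumed to be a dict of parameter deltas.

```python
import torch

def privatize_update(update, clip_norm=1.0, noise_std=0.01):
    """Clip a client update to a maximum L2 norm, then add Gaussian
    noise so the server never observes the exact local update."""
    flat = torch.cat([v.flatten() for v in update.values()])
    scale = min(1.0, clip_norm / (float(flat.norm()) + 1e-12))
    return {k: v * scale + noise_std * torch.randn_like(v)
            for k, v in update.items()}
```

Client sampling, the other ingredient the entry mentions, amounts to drawing a random subset of clients each round, e.g. `random.sample(clients, k)`.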
- Federated Continual Learning Goes Online: Uncertainty-Aware Memory Management for Vision Tasks and Beyond [13.867793835583463]
We propose an uncertainty-aware memory-based approach to mitigate catastrophic forgetting.
We retrieve samples with specific characteristics and, by retraining the model on them, demonstrate the potential of this approach.
arXiv Detail & Related papers (2024-05-29T09:29:39Z)
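
One plausible reading of "samples with specific characteristics" is uncertainty-based selection: keep the examples the model is least sure about. The sketch below scores candidates by predictive entropy; it is our illustration, not the paper's exact retrieval rule.

```python
import torch

def select_for_memory(model, candidates, k=32):
    """Keep the k (x, y) pairs with the highest predictive entropy,
    i.e. those the model is least certain about, for the replay memory."""
    model.eval()
    xs = torch.stack([x for x, _ in candidates])
    with torch.no_grad():
        probs = torch.softmax(model(xs), dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    top = entropy.topk(min(k, len(candidates))).indices
    return [candidates[i] for i in top]
```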
- Variational Bayes for Federated Continual Learning [38.11883572425234]
We introduce Federated Bayesian Neural Network (FedBNN), a versatile and effective framework employing a variational Bayesian neural network across all clients.
Our method continually integrates knowledge from local and historical data distributions into a single model, adeptly learning from new data distributions while retaining performance on historical distributions.
arXiv Detail & Related papers (2024-05-23T08:09:21Z)
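
A variational Bayesian neural network of the kind FedBNN builds on replaces point-estimate weights with a distribution, typically trained via the reparameterization trick. Below is a minimal Gaussian mean-field layer as one standard building block; it is a textbook construction, not FedBNN's specific architecture.

```python
import torch
import torch.nn as nn

class VariationalLinear(nn.Module):
    """Linear layer with a diagonal-Gaussian posterior over its weights,
    sampled via the reparameterization trick."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = nn.Parameter(0.01 * torch.randn(d_out, d_in))
        self.log_sigma = nn.Parameter(torch.full((d_out, d_in), -5.0))

    def forward(self, x):
        sigma = self.log_sigma.exp()
        w = self.mu + sigma * torch.randn_like(sigma)  # sample weights
        return x @ w.t()

    def kl(self):
        # KL(q(w) || N(0, I)) for the diagonal Gaussian posterior
        s2 = (2.0 * self.log_sigma).exp()
        return 0.5 * (s2 + self.mu**2 - 1.0 - 2.0 * self.log_sigma).sum()
```

Training minimizes the usual data loss plus a (possibly down-weighted) sum of the layers' `kl()` terms.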
- A review on different techniques used to combat the non-IID and heterogeneous nature of data in FL [0.0]
Federated Learning (FL) is a machine-learning approach enabling collaborative model training across multiple edge devices.
The significance of FL is particularly pronounced in industries such as healthcare and finance, where data privacy holds paramount importance.
This report delves into the issues arising from non-IID and heterogeneous data and explores current algorithms designed to address these challenges.
arXiv Detail & Related papers (2024-01-01T16:34:00Z)
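
Surveys like this one typically benchmark methods on synthetically non-IID splits. A standard way to create such splits, shown below as an illustration rather than anything from the paper, is to draw each client's label proportions from a Dirichlet distribution; smaller `alpha` yields more heterogeneous clients.

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha=0.5, seed=0):
    """Split sample indices across clients so that each class is divided
    according to Dirichlet(alpha) proportions (non-IID simulation)."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    clients = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in zip(clients, np.split(idx, cuts)):
            client.extend(part.tolist())
    return clients
```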
- A Data-Free Approach to Mitigate Catastrophic Forgetting in Federated Class Incremental Learning for Vision Tasks [34.971800168823215]
This paper presents a framework for federated class incremental learning that utilizes a generative model to synthesize samples from past distributions.
To preserve privacy, the generative model is trained on the server using data-free methods at the end of each task without requesting data from clients.
arXiv Detail & Related papers (2023-11-13T22:21:27Z)
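
The generative-replay mechanism this entry describes can be sketched as follows: a generator synthesizes stand-ins for past-task data, the previous model pseudo-labels them, and new-task batches are mixed with them during training. The code is our simplified rendering of that loop; the generator itself, which the paper trains on the server with data-free methods, is assumed given.

```python
import torch

def replay_batch(generator, old_model, n=64, z_dim=100):
    """Synthesize past-distribution samples and pseudo-label them
    with the model from the previous task."""
    z = torch.randn(n, z_dim)
    with torch.no_grad():
        x_fake = generator(z)
        y_fake = old_model(x_fake).argmax(dim=1)
    return x_fake, y_fake

def incremental_step(model, generator, old_model, loader, opt, loss_fn):
    """Train on new-task data mixed with generated past-task samples."""
    for x, y in loader:
        x_r, y_r = replay_batch(generator, old_model, n=len(x))
        opt.zero_grad()
        loss = loss_fn(model(torch.cat([x, x_r])), torch.cat([y, y_r]))
        loss.backward()
        opt.step()
```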
- A Comprehensive Study on Model Initialization Techniques Ensuring Efficient Federated Learning [0.0]
Federated learning (FL) has emerged as a promising paradigm for training machine learning models in a distributed and privacy-preserving manner.
The choice of model initialization method plays a crucial role in the performance, convergence speed, communication efficiency, and privacy guarantees of federated learning systems.
Our research meticulously compares, categorizes, and delineates the merits and demerits of each technique, examining their applicability across diverse FL scenarios.
arXiv Detail & Related papers (2023-10-31T23:26:58Z)
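
One initialization fact that work in this area consistently stresses, going back to the original FedAvg experiments, is that weight averaging only behaves well when all clients start from a common initialization. A minimal sketch of that convention:

```python
def broadcast_init(global_model, client_models):
    """Start every client from identical weights; averaging models trained
    from independent random initializations is known to degrade accuracy."""
    state = global_model.state_dict()
    for m in client_models:
        m.load_state_dict(state)
```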
- Towards Federated Long-Tailed Learning [76.50892783088702]
Data privacy and class imbalance are the norm rather than the exception in many machine learning tasks.
Recent efforts address, on the one hand, the problem of learning from pervasive private data and, on the other, learning from long-tailed data.
This paper focuses on learning with long-tailed (LT) data distributions under the context of the popular privacy-preserved federated learning (FL) framework.
arXiv Detail & Related papers (2022-06-30T02:34:22Z)
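
For intuition on the long-tailed side of the problem, a standard centralized remedy is to reweight the loss by the effective number of samples per class (the class-balanced loss of Cui et al., 2019). The sketch below shows that baseline, not the paper's federated method.

```python
import torch

def class_balanced_weights(counts, beta=0.999):
    """Per-class loss weights proportional to the inverse effective
    number of samples, (1 - beta**n) / (1 - beta), normalized to mean 1."""
    counts = torch.as_tensor(counts, dtype=torch.float)
    effective = (1.0 - beta ** counts) / (1.0 - beta)
    w = 1.0 / effective
    return w * len(counts) / w.sum()

# e.g. torch.nn.CrossEntropyLoss(weight=class_balanced_weights([5000, 500, 50]))
```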
- Non-IID data and Continual Learning processes in Federated Learning: A long road ahead [58.720142291102135]
Federated Learning is a novel framework that allows multiple devices or institutions to train a machine learning model collaboratively while keeping their data private.
In this work, we formally classify data statistical heterogeneity and review the most remarkable learning strategies that are able to face it.
At the same time, we introduce approaches from other machine learning frameworks, such as Continual Learning, that also deal with data heterogeneity and could be easily adapted to the Federated Learning setting.
arXiv Detail & Related papers (2021-11-26T09:57:11Z)
- On Covariate Shift of Latent Confounders in Imitation and Reinforcement Learning [69.48387059607387]
We consider the problem of using expert data with unobserved confounders for imitation and reinforcement learning.
We analyze the limitations of learning from confounded expert data with and without external reward.
We validate our claims empirically on challenging assistive healthcare and recommender system simulation tasks.
arXiv Detail & Related papers (2021-10-13T07:31:31Z)
- Concept drift detection and adaptation for federated and continual learning [55.41644538483948]
Smart devices can collect vast amounts of data from their environment.
This data is suitable for training machine learning models, which can significantly improve their behavior.
In this work, we present a new method, called Concept-Drift-Aware Federated Averaging.
arXiv Detail & Related papers (2021-05-27T17:01:58Z)
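
A simple way to make drift detection concrete, shown here as our illustration rather than the rule used by Concept-Drift-Aware Federated Averaging, is to flag a client whose recent loss rises well above its historical average; drifted clients can then be treated specially during aggregation.

```python
from collections import deque

class DriftDetector:
    """Flag concept drift when the current loss exceeds a multiple of
    the recent average loss (a simple threshold heuristic)."""
    def __init__(self, window=50, factor=1.5):
        self.history = deque(maxlen=window)
        self.factor = factor

    def update(self, loss):
        full = len(self.history) == self.history.maxlen
        baseline = sum(self.history) / len(self.history) if self.history else 0.0
        drifted = full and loss > self.factor * baseline
        self.history.append(loss)
        return drifted
```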
- Quasi-Global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data [77.88594632644347]
Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks.
In realistic learning scenarios, the presence of heterogeneity across different clients' local datasets poses an optimization challenge.
We propose a novel momentum-based method to mitigate this decentralized training difficulty.
arXiv Detail & Related papers (2021-02-09T11:27:14Z)
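
The core idea of quasi-global momentum, as we read the abstract, is that each worker can approximate a globally shared optimization direction by tracking its own successive model differences, without extra communication. The sketch below is a strongly simplified rendering of that idea, not the paper's exact update rule; `beta` and `lr` are illustrative.

```python
import torch

class QuasiGlobalMomentum:
    """Track an exponential moving average of successive model differences
    and push parameters further along the averaged direction."""
    def __init__(self, model, beta=0.9):
        self.beta = beta
        self.prev = [p.detach().clone() for p in model.parameters()]
        self.buf = [torch.zeros_like(p) for p in self.prev]

    def step(self, model, lr=0.1):
        with torch.no_grad():
            for p, prev, m in zip(model.parameters(), self.prev, self.buf):
                # prev - p recovers the (scaled) direction of the last descent step
                m.mul_(self.beta).add_(prev - p, alpha=1.0 - self.beta)
                p.sub_(m, alpha=lr)  # continue along the averaged direction
                prev.copy_(p)
```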
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including this list) and is not responsible for any consequences of its use.