FedMix: Approximation of Mixup under Mean Augmented Federated Learning
- URL: http://arxiv.org/abs/2107.00233v1
- Date: Thu, 1 Jul 2021 06:14:51 GMT
- Title: FedMix: Approximation of Mixup under Mean Augmented Federated Learning
- Authors: Tehrim Yoon, Sumin Shin, Sung Ju Hwang, Eunho Yang
- Abstract summary: Federated learning (FL) allows edge devices to collectively learn a model without directly sharing data within each device.
Current state-of-the-art algorithms suffer from performance degradation as the heterogeneity of local data across clients increases.
We propose a new augmentation algorithm, named FedMix, which is inspired by a phenomenal yet simple data augmentation method, Mixup.
- Score: 60.503258658382
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) allows edge devices to collectively learn a model
without directly sharing data within each device, thus preserving privacy and
eliminating the need to store data globally. While there are promising results
under the assumption of independent and identically distributed (iid) local
data, current state-of-the-art algorithms suffer from performance degradation
as the heterogeneity of local data across clients increases. To resolve this
issue, we propose a simple framework, Mean Augmented Federated Learning (MAFL),
where clients send and receive averaged local data, subject to the privacy
requirements of target applications. Under our framework, we propose a new
augmentation algorithm, named FedMix, which is inspired by a phenomenal yet
simple data augmentation method, Mixup, but does not require local raw data to
be directly shared among devices. Our method shows greatly improved performance
in the standard benchmark datasets of FL, under highly non-iid federated
settings, compared to conventional algorithms.
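The abstract only sketches the mechanism, so below is a minimal illustrative sketch of the MAFL exchange plus a naive Mixup-style combination of local data with a received mean batch. The helper names (mean_batch, naive_mix) and the toy sizes are assumptions, not the authors' code; per the title, FedMix itself goes further and uses an approximation of the Mixup loss built from these exchanged means rather than this naive mix.

```python
# Illustrative sketch only (assumed helper names, not the paper's reference code):
# each client shares only the *mean* of a local batch; local training then mixes
# raw local examples with a received mean, in the spirit of Mixup.
import torch
import torch.nn.functional as F

def mean_batch(x: torch.Tensor, y_onehot: torch.Tensor):
    """Average a local batch of inputs and one-hot labels; only these means
    are exchanged, never the raw examples."""
    return x.mean(dim=0), y_onehot.mean(dim=0)

def naive_mix(x_i, y_i, x_bar, y_bar, lam: float = 0.1):
    """Mixup-style combination of one local example with a received mean batch:
    x' = (1 - lam) * x_i + lam * x_bar (and likewise for the labels)."""
    return (1 - lam) * x_i + lam * x_bar, (1 - lam) * y_i + lam * y_bar

# Toy usage: a client with 8 examples of dimension 32 and 10 classes.
x_k = torch.randn(8, 32)
y_k = F.one_hot(torch.randint(0, 10, (8,)), num_classes=10).float()
x_bar, y_bar = mean_batch(x_k, y_k)            # uploaded, then broadcast by the server
x_aug, y_aug = naive_mix(x_k[0], y_k[0], x_bar, y_bar)  # used in the local update
```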
Related papers
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- Fake It Till Make It: Federated Learning with Consensus-Oriented Generation [52.82176415223988]
We propose federated learning with consensus-oriented generation (FedCOG).
FedCOG consists of two key components at the client side: complementary data generation and knowledge-distillation-based model training.
Experiments on classical and real-world FL datasets show that FedCOG consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-12-10T18:49:59Z)
- Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose a new algorithm, FedCSD, which uses class prototype similarity distillation in a federated framework to align the local and global models.
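The blurb only states that local and global logits drift apart and are re-aligned by distillation, so here is a generic KL-based logit-distillation term one could add to a local objective; it is not FedCSD's exact loss, and the class-prototype-similarity weighting from the paper is omitted.

```python
# Generic sketch of a logit-alignment (distillation) term, not FedCSD's exact loss.
import torch
import torch.nn.functional as F

def logit_distillation(local_logits: torch.Tensor,
                       global_logits: torch.Tensor,
                       temperature: float = 2.0) -> torch.Tensor:
    """KL divergence pulling the local model's logits toward the (frozen)
    global model's logits, softened by a temperature."""
    p_global = F.softmax(global_logits.detach() / temperature, dim=-1)
    log_p_local = F.log_softmax(local_logits / temperature, dim=-1)
    return F.kl_div(log_p_local, p_global, reduction="batchmean") * temperature ** 2
```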
arXiv Detail & Related papers (2023-08-20T04:41:01Z)
- Benchmarking FedAvg and FedCurv for Image Classification Tasks [1.376408511310322]
This paper focuses on the problem of statistical heterogeneity of the data in the same federated network.
Several Federated Learning algorithms, such as FedAvg, FedProx, and Federated Curvature (FedCurv), have already been proposed.
As a side product of this work, we release non-IID versions of the datasets we used, to facilitate further comparisons within the FL community.
arXiv Detail & Related papers (2023-03-31T10:13:01Z)
- FedFOR: Stateless Heterogeneous Federated Learning with First-Order Regularization [24.32029125031383]
Federated Learning (FL) seeks to distribute model training across local clients without collecting data in a centralized data-center.
We propose incorporating a first-order approximation of the global data distribution into the local objectives, which intuitively penalizes updates in the opposite direction of the global update.
Our approach does not impose unrealistic limits on the client size, enabling learning from a large number of clients as is typical in most FL applications.
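As a rough illustration of "penalize updates in the opposite direction of the global update", the sketch below adds a dot-product penalty between the local parameter delta and the previous global update. This is a generic first-order regularizer under assumed names (mu, global_update), not necessarily the exact FedFOR objective.

```python
# Generic first-order regularizer sketch (assumed form, not the exact FedFOR loss):
# discourage local parameter updates that point against the last global update.
import torch

def first_order_penalty(local_params, global_params, global_update, mu: float = 0.1):
    """Penalty ~ -mu * <theta_local - theta_global, delta_global>, which grows
    when the local delta opposes the global update direction."""
    penalty = 0.0
    for p_loc, p_glob, g_upd in zip(local_params, global_params, global_update):
        penalty = penalty - mu * torch.sum((p_loc - p_glob.detach()) * g_upd.detach())
    return penalty  # add this to the local task loss before backprop
```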
arXiv Detail & Related papers (2022-09-21T17:57:20Z)
- Federated Learning in Non-IID Settings Aided by Differentially Private Synthetic Data [20.757477553095637]
Federated learning (FL) is a privacy-promoting framework that enables clients to collaboratively train machine learning models.
A major challenge in federated learning arises when the local data is heterogeneous.
We propose FedDPMS, an FL algorithm in which clients deploy variational auto-encoders to augment local datasets with data synthesized using differentially private means of latent data representations.
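A minimal sketch of the "differentially private means of latent representations" step is given below, assuming Gaussian-mechanism-style noise added to a clipped mean of latent codes before sharing; the noise scale, clipping bound, and the VAE encoder/decoder are illustrative placeholders, not FedDPMS's calibrated parameters.

```python
# Sketch only: a noised (Gaussian-mechanism style) mean of latent codes that a
# client could share; sigma and clip_norm are illustrative placeholders.
import torch

def private_latent_mean(latents: torch.Tensor, clip_norm: float = 1.0,
                        sigma: float = 0.5) -> torch.Tensor:
    """Clip each latent vector, average them, and add Gaussian noise so only a
    noisy mean leaves the client (raw latents and raw data stay local)."""
    norms = latents.norm(dim=1, keepdim=True).clamp(min=1e-12)
    clipped = latents * torch.clamp(clip_norm / norms, max=1.0)
    mean = clipped.mean(dim=0)
    noise = torch.randn_like(mean) * (sigma * clip_norm / latents.shape[0])
    return mean + noise  # a decoder can later synthesize augmentation samples from such means
```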
arXiv Detail & Related papers (2022-06-01T18:00:48Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Semi-Supervised Federated Learning with non-IID Data: Algorithm and System Design [42.63120623012093]
Federated Learning (FL) allows edge devices (or clients) to keep data locally while simultaneously training a shared global model.
The clients' local training data are not independent and identically distributed (non-IID).
We present a robust semi-supervised FL system design that aims to address the problems of data availability and non-IID data in FL.
arXiv Detail & Related papers (2021-10-26T03:41:48Z)
- Mime: Mimicking Centralized Stochastic Algorithms in Federated Learning [102.26119328920547]
Federated learning (FL) is a challenging setting for optimization due to the heterogeneity of the data across different clients.
We propose a general algorithmic framework, Mime, which mitigates client drift and adapts arbitrary centralized optimization algorithms.
arXiv Detail & Related papers (2020-08-08T21:55:07Z)