FedFMC: Sequential Efficient Federated Learning on Non-iid Data
- URL: http://arxiv.org/abs/2006.10937v1
- Date: Fri, 19 Jun 2020 02:36:17 GMT
- Title: FedFMC: Sequential Efficient Federated Learning on Non-iid Data
- Authors: Kavya Kopparapu, Eric Lin
- Abstract summary: FedFMC (Fork-Merge-Consolidate) is a method that forks devices into updating different global models, then merges and consolidates the separate models into one.
We show that FedFMC substantially improves upon earlier approaches to non-iid data in the federated learning context without using a globally shared subset of data or increasing communication costs.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a mechanism for devices to update a global model without sharing data,
federated learning bridges the tension between the need for data and respect
for privacy. However, classic FL methods like Federated Averaging struggle with
non-iid data, a prevalent situation in the real world. Previous solutions are
sub-optimal as they either employ a small shared global subset of data or
a greater number of models with increased communication costs. We propose FedFMC
(Fork-Merge-Consolidate), a method that dynamically forks devices into updating
different global models, then merges and consolidates separate models into one.
We first show the soundness of FedFMC on simple datasets, then run several
experiments comparing against baseline approaches. These experiments show that
FedFMC substantially improves upon earlier approaches to non-iid data in the
federated learning context without using a globally shared subset of data or
increasing communication costs.
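For context, the Federated Averaging baseline mentioned in the abstract aggregates locally trained models by a data-size-weighted average of their parameters. The sketch below shows that standard aggregation step, plus a purely illustrative fork-then-merge loop that averages within client groups before consolidating them into one model. It is a minimal sketch assuming models are flat lists of parameter arrays; the grouping criterion, function names, and overall structure are assumptions for illustration, not FedFMC's actual algorithm.

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """Standard FedAvg step: data-size-weighted average of client parameters.
    client_weights: list of models, each a list of numpy arrays (one per layer)."""
    total = sum(client_sizes)
    agg = [np.zeros_like(layer) for layer in client_weights[0]]
    for weights, size in zip(client_weights, client_sizes):
        for i, layer in enumerate(weights):
            agg[i] += (size / total) * layer
    return agg

def fork_then_merge(client_weights, client_sizes, groups):
    """Illustrative only: average within each fork (group of clients) to get
    per-fork models, then consolidate the fork models into a single model.
    How FedFMC actually decides the forks and merges is not reproduced here."""
    fork_models, fork_sizes = [], []
    for group in groups:                       # e.g. groups = [[0, 2], [1, 3]]
        fork_models.append(fedavg_aggregate(
            [client_weights[i] for i in group],
            [client_sizes[i] for i in group]))
        fork_sizes.append(sum(client_sizes[i] for i in group))
    return fedavg_aggregate(fork_models, fork_sizes)
```

After local training, a call such as fedavg_aggregate([w1, w2], [n1, n2]) would produce the next global model in plain FedAvg; the grouped variant only shows how maintaining several interim models before a final consolidation could be wired up.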
Related papers
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- Exploiting Label Skews in Federated Learning with Model Concatenation [39.38427550571378]
Federated Learning (FL) has emerged as a promising solution to perform deep learning on different data owners without exchanging raw data.
Among different non-IID types, label skews have been challenging and common in image classification and other tasks.
We propose FedConcat, a simple and effective approach that concatenates these local models as the base of the global model.
arXiv Detail & Related papers (2023-12-11T10:44:52Z)
- Fake It Till Make It: Federated Learning with Consensus-Oriented Generation [52.82176415223988]
We propose federated learning with consensus-oriented generation (FedCOG)
FedCOG consists of two key components at the client side: complementary data generation and knowledge-distillation-based model training.
Experiments on classical and real-world FL datasets show that FedCOG consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-12-10T18:49:59Z) - An Efficient Virtual Data Generation Method for Reducing Communication
in Federated Learning [34.39250699866746]
A few classical schemes assume the server can extract the auxiliary information about training data of the participants from the local models to construct a central dummy dataset.
The server uses the dummy dataset to finetune aggregated global model to achieve the target test accuracy in fewer communication rounds.
In this paper, we summarize the above solutions into a data-based communication-efficient FL framework.
arXiv Detail & Related papers (2023-06-21T08:07:07Z)
- FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning [87.08902493524556]
Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape from original data.
arXiv Detail & Related papers (2022-07-20T04:55:18Z)
- Heterogeneous Ensemble Knowledge Transfer for Training Large Models in Federated Learning [22.310090483499035]
Federated learning (FL) enables edge-devices to collaboratively learn a model without disclosing their private data to a central aggregating server.
Most existing FL algorithms require models of identical architecture to be deployed across the clients and server.
We propose a novel ensemble knowledge transfer method named Fed-ET in which small models are trained on clients, and used to train a larger model at the server.
arXiv Detail & Related papers (2022-04-27T05:18:32Z)
- Multi-Center Federated Learning [62.32725938999433]
Federated learning (FL) can protect data privacy in distributed learning.
It merely collects local gradients from users without access to their data.
We propose a novel multi-center aggregation mechanism.
arXiv Detail & Related papers (2021-08-19T12:20:31Z)
- FedMix: Approximation of Mixup under Mean Augmented Federated Learning [60.503258658382]
Federated learning (FL) allows edge devices to collectively learn a model without directly sharing data within each device.
Current state-of-the-art algorithms suffer from performance degradation as the heterogeneity of local data across clients increases.
We propose a new augmentation algorithm, named FedMix, which is inspired by a phenomenal yet simple data augmentation method, Mixup.
arXiv Detail & Related papers (2021-07-01T06:14:51Z)
- Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
arXiv Detail & Related papers (2020-05-03T09:14:31Z)
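The two Multi-Center Federated Learning entries above describe learning several global models and matching users to centers. A k-means-style loop over flattened client parameters, shown below, is one minimal way to picture that matching-plus-per-center-averaging pattern; it is only a hedged illustration (the names, distance metric, and update rule are assumptions), not the optimization used in those papers.

```python
import numpy as np

def multi_center_aggregate(client_vecs, num_centers=2, iters=10, seed=0):
    """k-means-style sketch: assign each client model to its nearest center,
    then re-estimate each center as the mean of its assigned clients.
    client_vecs: (num_clients, dim) array of flattened client parameters."""
    rng = np.random.default_rng(seed)
    centers = client_vecs[rng.choice(len(client_vecs), num_centers, replace=False)]
    for _ in range(iters):
        # User-to-center matching by parameter-space distance.
        dists = np.linalg.norm(client_vecs[:, None, :] - centers[None, :, :], axis=-1)
        assign = dists.argmin(axis=1)
        # Each center becomes the average of the clients matched to it.
        for k in range(num_centers):
            members = client_vecs[assign == k]
            if len(members) > 0:
                centers[k] = members.mean(axis=0)
    return centers, assign
```

Each returned center would then play the role of one of the multiple global models, sent back only to the clients matched to it.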