FedDM: Iterative Distribution Matching for Communication-Efficient
Federated Learning
- URL: http://arxiv.org/abs/2207.09653v1
- Date: Wed, 20 Jul 2022 04:55:18 GMT
- Title: FedDM: Iterative Distribution Matching for Communication-Efficient
Federated Learning
- Authors: Yuanhao Xiong, Ruochen Wang, Minhao Cheng, Felix Yu, Cho-Jui Hsieh
- Abstract summary: Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape of the original data.
- Score: 87.08902493524556
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) has recently attracted increasing attention from
academia and industry, with the ultimate goal of achieving collaborative
training under privacy and communication constraints. Existing iterative
model-averaging-based FL algorithms require a large number of communication
rounds to obtain a well-performing model due to extremely unbalanced and
non-i.i.d. data partitioning among different clients. Thus, we propose FedDM
to build the global training objective from multiple local surrogate
functions, which enables the server to gain a more global view of the loss
landscape. In detail, we construct synthetic sets of data on each client to
locally match the loss landscape of the original data through distribution
matching. FedDM reduces communication rounds and improves model quality by
transmitting synthesized data that is smaller and more informative than
unwieldy model weights. We conduct extensive experiments on three image
classification datasets, and the results show that our method outperforms
other FL counterparts in terms of efficiency and model performance. Moreover,
we demonstrate that FedDM can be adapted to preserve differential privacy
with the Gaussian mechanism and to train a better model under the same
privacy budget.
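To make the mechanism concrete, below is a minimal, hypothetical PyTorch sketch of one FedDM-style round: each client optimizes a small labeled synthetic set so that its per-class embedding means match those of its local data (the paper matches the local loss landscape through distribution matching; the embedding-mean objective here is the standard dataset-condensation surrogate), and the server then trains the global model on the union of the received synthetic sets. All function names, shapes, and hyperparameters are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

def synthesize(real_x, real_y, num_classes, ipc=10, steps=200, lr=0.1,
               make_embedder=None):
    """Learn `ipc` synthetic images per class whose per-class embedding
    means match the client's real data (a distribution-matching surrogate)."""
    syn_x = torch.randn(num_classes * ipc, *real_x.shape[1:], requires_grad=True)
    syn_y = torch.arange(num_classes).repeat_interleave(ipc)
    opt = torch.optim.SGD([syn_x], lr=lr)
    for _ in range(steps):
        embed = make_embedder()          # freshly sampled random embedding net
        loss = syn_x.new_zeros(())
        for c in range(num_classes):
            if (real_y == c).any():      # skip classes absent on this client
                diff = (embed(real_x[real_y == c]).mean(0)
                        - embed(syn_x[syn_y == c]).mean(0))
                loss = loss + diff.pow(2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return syn_x.detach(), syn_y

def server_update(global_model, synthetic_sets, epochs=5, lr=0.01):
    """Train the global model on the union of all clients' synthetic sets,
    giving the server a more global view than weight averaging would."""
    xs = torch.cat([x for x, _ in synthetic_sets])
    ys = torch.cat([y for _, y in synthetic_sets])
    opt = torch.optim.SGD(global_model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(global_model(xs), ys).backward()
        opt.step()
```

For the differentially private variant mentioned in the abstract, one would clip each update to `syn_x` and add Gaussian noise calibrated to the privacy budget (the Gaussian mechanism); since only synthetic data ever leaves the client, the noise plausibly belongs in the synthesis step rather than in model weights. This placement is an assumption based on the abstract alone.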
Related papers
- FedMAP: Unlocking Potential in Personalized Federated Learning through Bi-Level MAP Optimization [11.040916982022978]
Federated Learning (FL) enables collaborative training of machine learning models on decentralized data.
Data across clients often differs significantly due to class imbalance, feature distribution skew, sample size imbalance, and other phenomena.
We propose a novel Bayesian PFL framework using bi-level optimization to tackle these data heterogeneity challenges.
arXiv Detail & Related papers (2024-05-29T11:28:06Z)
- Balancing Similarity and Complementarity for Federated Learning [91.65503655796603]
Federated Learning (FL) is increasingly important in mobile and IoT systems.
One key challenge in FL is managing statistical heterogeneity, such as non-i.i.d. data.
We introduce a novel framework, FedSaC, which balances similarity and complementarity in FL cooperation.
arXiv Detail & Related papers (2024-05-16T08:16:19Z)
- Multi-level Personalized Federated Learning on Heterogeneous and Long-Tailed Data [10.64629029156029]
We introduce an innovative personalized Federated Learning framework, Multi-level Personalized Federated Learning (MuPFL).
MuPFL integrates three pivotal modules: Biased Activation Value Dropout (BAVD), Adaptive Cluster-based Model Update (ACMU), and Prior Knowledge-assisted Classifier Fine-tuning (PKCF).
Experiments on diverse real-world datasets show that MuPFL consistently outperforms state-of-the-art baselines, even under extreme non-i.i.d. and long-tail conditions.
arXiv Detail & Related papers (2024-05-10T11:52:53Z)
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) depends on effectively leveraging knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round (a minimal sketch of this loop appears after this list).
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling this data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Optimizing the Collaboration Structure in Cross-Silo Federated Learning [43.388911479025225]
In federated learning (FL), multiple clients collaborate to train machine learning models together.
We propose FedCollab, a novel FL framework that alleviates negative transfer by clustering clients into non-overlapping coalitions.
Our results demonstrate that FedCollab effectively mitigates negative transfer across a wide range of FL algorithms and consistently outperforms other clustered FL algorithms.
arXiv Detail & Related papers (2023-06-10T18:59:50Z)
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraints.
We propose FedFTG, a data-free knowledge distillation method to fine-tune the global model on the server.
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
- Federated Multi-Task Learning under a Mixture of Distributions [10.00087964926414]
Federated Learning (FL) is a framework for on-device collaborative training of machine learning models.
Early efforts in FL focused on learning a single global model with good average performance across clients, but the global model may be arbitrarily bad for a given client.
We study federated MTL under the flexible assumption that each local data distribution is a mixture of unknown underlying distributions.
arXiv Detail & Related papers (2021-08-23T15:47:53Z)
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
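As a point of contrast with aggregation-free approaches such as FedAF above (and with FedDM's synthetic-data uploads), here is a minimal sketch of the classical aggregate-then-adapt round: the server broadcasts the global weights, each client adapts them locally, and the server aggregates by data-weighted averaging, as in FedAvg. The sketch assumes a float-only model state and a cross-entropy task; names are illustrative.

```python
import copy
import torch
import torch.nn.functional as F

def aggregate_then_adapt_round(global_model, client_loaders,
                               local_steps=5, lr=0.01):
    """One FedAvg-style round: broadcast, adapt locally, average weights."""
    states, weights = [], []
    for loader in client_loaders:
        local = copy.deepcopy(global_model)   # client adapts the global model
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        n = 0
        for _, (x, y) in zip(range(local_steps), loader):
            opt.zero_grad()
            F.cross_entropy(local(x), y).backward()
            opt.step()
            n += len(x)
        states.append(local.state_dict())
        weights.append(n)
    total = sum(weights)
    # Weighted parameter average (assumes every state entry is a float tensor).
    avg = {k: sum((w / total) * s[k] for s, w in zip(states, weights))
           for k in states[0]}
    global_model.load_state_dict(avg)
```

Every such round ships full model weights both ways; FedDM's claim is that shipping a small synthetic set instead is both cheaper and more informative per round.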