FedH2L: Federated Learning with Model and Statistical Heterogeneity
- URL: http://arxiv.org/abs/2101.11296v1
- Date: Wed, 27 Jan 2021 10:10:18 GMT
- Title: FedH2L: Federated Learning with Model and Statistical Heterogeneity
- Authors: Yiying Li, Wei Zhou, Huaimin Wang, Haibo Mi, Timothy M. Hospedales
- Abstract summary: Federated learning (FL) enables distributed participants to collectively learn a strong global model without sacrificing their individual data privacy.
We introduce FedH2L, which is both agnostic to the model architecture and robust to different data distributions across participants.
In contrast to approaches sharing parameters or gradients, FedH2L relies on mutual distillation, exchanging only posteriors on a shared seed set between participants in a decentralized manner.
- Score: 75.61234545520611
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Federated learning (FL) enables distributed participants to collectively
learn a strong global model without sacrificing their individual data privacy.
Mainstream FL approaches require each participant to share a common network
architecture and further assume that data are sampled IID across
participants. However, in real-world deployments participants may require
heterogeneous network architectures; and the data distribution is almost
certainly non-uniform across participants. To address these issues we introduce
FedH2L, which is both agnostic to the model architecture and robust to
different data distributions across participants. In contrast to approaches
sharing parameters or gradients, FedH2L relies on mutual distillation,
exchanging only posteriors on a shared seed set between participants in a
decentralized manner. This makes it extremely bandwidth efficient, model
agnostic, and crucially produces models capable of performing well on the whole
data distribution when learning from heterogeneous silos.
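As a rough illustration of the mutual-distillation exchange described above, the sketch below assumes a PyTorch classifier; the loss weighting (`alpha`) and temperature (`T`) are our own placeholders, not values from the paper.

```python
import torch
import torch.nn.functional as F

def local_round(model, optimizer, local_loader, seed_x, peer_posteriors,
                alpha=0.5, T=2.0):
    """One hypothetical FedH2L-style round: supervised loss on private data
    plus mutual distillation toward peers' posteriors on the shared seeds."""
    model.train()
    for x, y in local_loader:
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y)            # local supervised loss
        log_p = F.log_softmax(model(seed_x) / T, dim=1)
        for q in peer_posteriors:                      # q: a peer's softmax outputs on seed_x
            loss = loss + alpha * T * T * F.kl_div(log_p, q, reduction="batchmean")
        loss.backward()
        optimizer.step()
    with torch.no_grad():                              # what this participant sends to peers next round
        return F.softmax(model(seed_x) / T, dim=1)
```

Because only posteriors on `seed_x` cross the network, per-round traffic is a small probability matrix rather than a full parameter vector, which is what makes the scheme bandwidth efficient and architecture agnostic.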
Related papers
- FedPAE: Peer-Adaptive Ensemble Learning for Asynchronous and Model-Heterogeneous Federated Learning [9.084674176224109]
Federated learning (FL) enables multiple clients with distributed data sources to collaboratively train a shared model without compromising data privacy.
We introduce Federated Peer-Adaptive Ensemble Learning (FedPAE), a fully decentralized pFL algorithm that supports model heterogeneity and asynchronous learning.
Our approach utilizes a peer-to-peer model sharing mechanism and ensemble selection to achieve a more refined balance between local and global information.
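A minimal sketch of what ensemble selection could look like, assuming each peer model's class-probability outputs on a local validation set are already available; the greedy rule and all names are assumptions, not the paper's algorithm.

```python
import torch

def select_peer_ensemble(candidate_probs, y_val):
    """Greedy ensemble selection: keep adding a peer model's (pre-computed)
    validation probabilities while the averaged ensemble's accuracy on the
    local validation labels improves."""
    chosen, best_acc = [], 0.0
    remaining = list(range(len(candidate_probs)))
    while remaining:
        scored = []
        for i in remaining:
            avg = torch.stack([candidate_probs[j] for j in chosen + [i]]).mean(0)
            acc = (avg.argmax(1) == y_val).float().mean().item()
            scored.append((acc, i))
        acc, i = max(scored)
        if acc <= best_acc:
            break                      # no remaining peer improves the ensemble
        chosen.append(i); remaining.remove(i); best_acc = acc
    return chosen
```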
arXiv Detail & Related papers (2024-10-17T22:47:19Z)
- Overcoming Data and Model Heterogeneities in Decentralized Federated Learning via Synthetic Anchors [21.931436901703634]
Conventional Federated Learning (FL) involves collaborative training of a global model while maintaining user data privacy.
One of its branches, decentralized FL, is a serverless network that allows clients to own and optimize different local models separately.
We propose a novel Decentralized FL technique by introducing Synthetic Anchors, dubbed as DeSA.
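Purely as an illustration of how shared synthetic anchors might enter local training (the anchor-generation procedure is the paper's contribution and is not shown), assuming labeled anchors:

```python
import torch

def make_client_batch(local_x, local_y, anchor_x, anchor_y, anchor_frac=0.25):
    """Blend a client's private batch with globally shared synthetic anchor
    samples so every client sees a common reference distribution."""
    k = max(1, int(anchor_frac * len(local_x)))
    idx = torch.randperm(len(anchor_x))[:k]       # sample a few anchors
    x = torch.cat([local_x, anchor_x[idx]])
    y = torch.cat([local_y, anchor_y[idx]])
    perm = torch.randperm(len(x))                 # shuffle real and anchor samples together
    return x[perm], y[perm]
```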
arXiv Detail & Related papers (2024-05-19T11:36:45Z)
- Fed-CO2: Cooperation of Online and Offline Models for Severe Data Heterogeneity in Federated Learning [14.914477928398133]
Federated Learning (FL) has emerged as a promising distributed learning paradigm.
The effectiveness of FL is highly dependent on the quality of the data that is being used for training.
We propose Fed-CO$_2$, a universal FL framework that handles both label distribution skew and feature skew.
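A minimal sketch of online/offline cooperation at inference time, assuming two PyTorch classifiers and a fixed mixing weight `w`; the paper's actual cooperation mechanism may differ.

```python
import torch

@torch.no_grad()
def cooperative_predict(online_model, offline_model, x, w=0.5):
    """Combine an online model (synchronized with the server each round)
    with an offline model (kept purely local)."""
    p_on = torch.softmax(online_model(x), dim=1)    # tracks global knowledge
    p_off = torch.softmax(offline_model(x), dim=1)  # specializes to local data
    return w * p_on + (1.0 - w) * p_off
```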
arXiv Detail & Related papers (2023-12-21T15:12:12Z)
- Fake It Till Make It: Federated Learning with Consensus-Oriented Generation [52.82176415223988]
We propose federated learning with consensus-oriented generation (FedCOG).
FedCOG consists of two key components at the client side: complementary data generation and knowledge-distillation-based model training.
Experiments on classical and real-world FL datasets show that FedCOG consistently outperforms state-of-the-art methods.
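The two client-side components could be combined roughly as below, assuming the generated "complementary" batch `gen_x` already exists and noting that the loss weighting (`beta`, `T`) is an assumption:

```python
import torch
import torch.nn.functional as F

def fedcog_client_step(local_model, global_model, optimizer,
                       real_x, real_y, gen_x, beta=1.0, T=2.0):
    """Sketch of the two client-side ingredients: supervised training on real
    data plus knowledge distillation from the global model on generated data."""
    optimizer.zero_grad()
    sup = F.cross_entropy(local_model(real_x), real_y)
    with torch.no_grad():                              # frozen global model as teacher
        teacher = F.softmax(global_model(gen_x) / T, dim=1)
    student = F.log_softmax(local_model(gen_x) / T, dim=1)
    kd = F.kl_div(student, teacher, reduction="batchmean") * T * T
    (sup + beta * kd).backward()
    optimizer.step()
```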
arXiv Detail & Related papers (2023-12-10T18:49:59Z)
- VFedMH: Vertical Federated Learning for Training Multiple Heterogeneous Models [53.30484242706966]
This paper proposes a novel approach called Vertical federated learning for training multiple Heterogeneous models (VFedMH).
To protect the participants' local embedding values, we propose an embedding protection method based on lightweight blinding factors.
Experiments are conducted to demonstrate that VFedMH can simultaneously train multiple heterogeneous models with heterogeneous optimization and outperform some recent methods in model performance.
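The cancellation idea behind blinding factors can be shown in a toy form; in the sketch below the masks are generated in one place purely for illustration, whereas a real deployment would derive them pairwise so that no single party sees them all.

```python
import torch

def blind_embeddings(embeddings, seed=0):
    """Toy blinding scheme (not the paper's exact protocol): each party adds
    a random mask, and the masks sum to zero, so the aggregated embedding is
    exact while no individual raw embedding is revealed. Assumes all
    embeddings share one shape."""
    g = torch.Generator().manual_seed(seed)
    masks = [torch.randn(e.shape, generator=g) for e in embeddings[:-1]]
    masks.append(-torch.stack(masks).sum(0))   # final mask cancels the rest
    blinded = [e + m for e, m in zip(embeddings, masks)]
    assert torch.allclose(sum(blinded), sum(embeddings), atol=1e-5)
    return blinded
```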
arXiv Detail & Related papers (2023-10-20T09:22:51Z)
- FedSiam-DA: Dual-aggregated Federated Learning via Siamese Network under Non-IID Data [21.95009868875851]
Although federated learning can address the data-island problem, it remains challenging to train with heterogeneous data in real applications.
We propose FedSiam-DA, a novel dual-aggregated contrastive federated learning approach.
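A model-contrastive term in this spirit (assumed, not taken from the paper) compares representations produced by the current local, global, and previous local networks:

```python
import torch
import torch.nn.functional as F

def siamese_contrastive_loss(z_local, z_global, z_prev, tau=0.5):
    """Pull the current local representation toward the global model's
    representation and away from the previous local one."""
    pos = F.cosine_similarity(z_local, z_global, dim=1) / tau
    neg = F.cosine_similarity(z_local, z_prev, dim=1) / tau
    return -torch.log(pos.exp() / (pos.exp() + neg.exp())).mean()
```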
arXiv Detail & Related papers (2022-11-17T09:05:25Z)
- Heterogeneous Federated Learning via Grouped Sequential-to-Parallel Training [60.892342868936865]
Federated learning (FL) is a rapidly growing privacy-preserving collaborative machine learning paradigm.
We propose a data heterogeneous-robust FL approach, FedGSP, to address this challenge.
We show that FedGSP improves the accuracy by 3.7% on average compared with seven state-of-the-art approaches.
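The sequential-to-parallel idea can be sketched schematically, assuming hypothetical `train_fn` / `aggregate_fn` callables:

```python
import copy

def grouped_round(groups, server_model, train_fn, aggregate_fn):
    """Schematic grouped round: clients inside a group train sequentially,
    passing the model along like a relay, while the groups themselves run
    in parallel and are then aggregated."""
    group_models = []
    for group in groups:            # conceptually parallel across groups
        model = copy.deepcopy(server_model)
        for client in group:        # sequential within a group
            model = train_fn(client, model)
        group_models.append(model)
    return aggregate_fn(group_models)  # e.g. parameter averaging
```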
arXiv Detail & Related papers (2022-01-31T03:15:28Z)
- FedRAD: Federated Robust Adaptive Distillation [7.775374800382709]
Collaborative learning frameworks that aggregate model updates are vulnerable to model poisoning attacks from adversarial clients.
We propose a novel robust aggregation method, Federated Robust Adaptive Distillation (FedRAD), to detect adversaries and robustly aggregate local models.
The results show that FedRAD outperforms all other aggregators in the presence of adversaries, as well as in heterogeneous data distributions.
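One way such adversary scoring could work (the exact rule is assumed, not quoted from the paper) is to down-weight clients whose predictions on a public set sit far from the element-wise median of all clients:

```python
import torch

@torch.no_grad()
def median_based_weights(client_logits):
    """Score each client by distance to the element-wise median of all
    clients' logits on a public set; far-off clients get less weight."""
    stacked = torch.stack(client_logits)               # [n_clients, n_samples, n_classes]
    median = stacked.median(dim=0).values
    dist = (stacked - median).flatten(1).norm(dim=1)   # per-client distance
    score = 1.0 / (1.0 + dist)
    return score / score.sum()                         # normalized aggregation weights
```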
arXiv Detail & Related papers (2021-12-02T16:50:57Z)
- Multi-Center Federated Learning [62.32725938999433]
Federated learning (FL) can protect data privacy in distributed learning.
It merely collects local gradients from users without access to their data.
We propose a novel multi-center aggregation mechanism.
arXiv Detail & Related papers (2021-08-19T12:20:31Z)
- Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
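The user-to-center matching can be pictured as a k-means-style step over flattened client updates; the distance measure and re-fitting rule below are assumptions for illustration, not the paper's exact objective.

```python
import torch

def assign_to_centers(client_vecs, center_vecs):
    """Assign each client to the nearest 'center' (global model) by parameter
    distance, then re-fit each center as the mean of its assigned clients."""
    assignments = []
    for v in client_vecs:
        d = torch.stack([(v - c).norm() for c in center_vecs])
        assignments.append(int(d.argmin()))
    new_centers = [torch.stack([v for v, a in zip(client_vecs, assignments)
                                if a == k]).mean(0) if k in assignments
                   else center_vecs[k]              # keep empty centers unchanged
                   for k in range(len(center_vecs))]
    return assignments, new_centers
```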
arXiv Detail & Related papers (2020-05-03T09:14:31Z)