Robustness and Personalization in Federated Learning: A Unified Approach
via Regularization
- URL: http://arxiv.org/abs/2009.06303v3
- Date: Tue, 12 Jul 2022 13:19:17 GMT
- Title: Robustness and Personalization in Federated Learning: A Unified Approach
via Regularization
- Authors: Achintya Kundu, Pengqian Yu, Laura Wynter, Shiau Hong Lim
- Abstract summary: We present a class of methods for robust, personalized federated learning, called Fed+.
The principal advantage of Fed+ is that it better accommodates the real-world characteristics found in federated training.
We demonstrate the benefits of Fed+ through extensive experiments on benchmark datasets.
- Score: 4.7234844467506605
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a class of methods for robust, personalized federated learning,
called Fed+, that unifies many federated learning algorithms. The principal
advantage of this class of methods is to better accommodate the real-world
characteristics found in federated training, such as the lack of IID data
across parties, the need for robustness to outliers or stragglers, and the
requirement to perform well on party-specific datasets. We achieve this through
a problem formulation that allows the central server to employ robust ways of
aggregating the local models while keeping the structure of local computation
intact. Without making any statistical assumption on the degree of
heterogeneity of local data across parties, we provide convergence guarantees
for Fed+ for convex and non-convex loss functions under different (robust)
aggregation methods. The Fed+ theory is also equipped to handle heterogeneous
computing environments including stragglers without additional assumptions;
specifically, the convergence results cover the general setting where the
number of local update steps across parties can vary. We demonstrate the
benefits of Fed+ through extensive experiments across standard benchmark
datasets.
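The abstract does not spell out Fed+'s exact update rule, so the following is only a minimal sketch of the kind of scheme it unifies: each party takes a few local steps on its own loss plus a regularization term pulling toward a server aggregate, and the server may aggregate with a robust statistic such as the coordinate-wise median rather than the mean. All names and hyperparameters here (local_update, mu, lr, steps_per_party) are illustrative assumptions, not the paper's API.

```python
import numpy as np

def local_update(w, w_agg, grad_fn, mu=0.1, lr=0.01, steps=5):
    """A few SGD steps on an illustrative regularized local objective:
    f_k(w) + (mu / 2) * ||w - w_agg||^2.
    The party keeps updating its own model w, so the structure of
    local computation stays intact."""
    for _ in range(steps):
        w = w - lr * (grad_fn(w) + mu * (w - w_agg))
    return w

def robust_aggregate(models, method="median"):
    """Aggregate party models; the coordinate-wise median is robust to
    outlier parties, while "mean" recovers FedAvg-style averaging."""
    stacked = np.stack(models)
    if method == "median":
        return np.median(stacked, axis=0)
    return stacked.mean(axis=0)

def fedplus_round(models, grad_fns, w_agg, steps_per_party):
    """One communication round over K parties; step counts may differ
    per party, matching the varying-local-steps setting in the abstract."""
    new_models = [
        local_update(w, w_agg, g, steps=s)
        for w, g, s in zip(models, grad_fns, steps_per_party)
    ]
    return new_models, robust_aggregate(new_models)
```

Note the design choice this sketch tries to reflect: personalization comes from each party continuing from its own model rather than overwriting it with the aggregate, while robustness comes entirely from the server-side aggregator.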
Related papers
- Federated Learning under Partially Class-Disjoint Data via Manifold Reshaping [64.58402571292723]
We propose a manifold reshaping approach called FedMR to calibrate the feature space of local training.
We conduct extensive experiments on a range of datasets to demonstrate that our FedMR achieves much higher accuracy and better communication efficiency.
arXiv Detail & Related papers (2024-05-29T10:56:13Z)
- Factor-Assisted Federated Learning for Personalized Optimization with Heterogeneous Data [6.024145412139383]
Federated learning is an emerging distributed machine learning framework aimed at protecting data privacy.
Data in different clients contain both common knowledge and personalized knowledge.
We develop a novel personalized federated learning framework for heterogeneous data, which we refer to as FedSplit.
arXiv Detail & Related papers (2023-12-07T13:05:47Z)
- Generalizable Heterogeneous Federated Cross-Correlation and Instance Similarity Learning [60.058083574671834]
This paper presents FCCL+, a novel federated correlation and similarity learning method with non-target distillation.
For the heterogeneity issue, it leverages irrelevant unlabeled public data for communication.
For catastrophic forgetting in the local updating stage, FCCL+ introduces Federated Non-Target Distillation.
arXiv Detail & Related papers (2023-09-28T09:32:27Z)
- Fed-MIWAE: Federated Imputation of Incomplete Data via Deep Generative Models [5.373862368597948]
Federated learning allows for the training of machine learning models on multiple local datasets without requiring explicit data exchange.
Data pre-processing, including strategies for handling missing data, remains a major bottleneck in real-world federated learning deployment.
We propose Fed-MIWAE, a deep latent variable model for missing data imputation based on variational autoencoders.
arXiv Detail & Related papers (2023-04-17T08:14:08Z)
- Benchmarking FedAvg and FedCurv for Image Classification Tasks [1.376408511310322]
This paper focuses on the problem of statistical heterogeneity of the data in the same federated network.
Several federated learning algorithms, such as FedAvg, FedProx, and Federated Curvature (FedCurv), have already been proposed.
As a side product of this work, we release the non-IID versions of the datasets we used to facilitate further comparisons from the FL community.
arXiv Detail & Related papers (2023-03-31T10:13:01Z)
- Federated Learning as Variational Inference: A Scalable Expectation Propagation Approach [66.9033666087719]
This paper extends the inference view and describes a variational inference formulation of federated learning.
We apply FedEP on standard federated learning benchmarks and find that it outperforms strong baselines in terms of both convergence speed and accuracy.
arXiv Detail & Related papers (2023-02-08T17:58:11Z)
- FedSkip: Combatting Statistical Heterogeneity with Federated Skip Aggregation [95.85026305874824]
We introduce a data-driven approach called FedSkip to improve the client optima by periodically skipping federated averaging and scattering local models across devices (a rough sketch of this skip-and-scatter schedule appears after this list).
We conduct extensive experiments on a range of datasets to demonstrate that FedSkip achieves much higher accuracy, better aggregation efficiency, and competitive communication efficiency.
arXiv Detail & Related papers (2022-12-14T13:57:01Z)
- Heterogeneous Federated Learning via Grouped Sequential-to-Parallel Training [60.892342868936865]
Federated learning (FL) is a rapidly growing privacy-preserving collaborative machine learning paradigm.
We propose a data heterogeneity-robust FL approach, FedGSP, to address this challenge.
We show that FedGSP improves the accuracy by 3.7% on average compared with seven state-of-the-art approaches.
arXiv Detail & Related papers (2022-01-31T03:15:28Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Weight Divergence Driven Divide-and-Conquer Approach for Optimal Federated Learning from non-IID Data [0.0]
Federated Learning allows models to be trained on data stored in distributed devices without centralizing the training data.
We propose a novel Divide-and-Conquer training methodology that enables the use of the popular FedAvg aggregation algorithm.
arXiv Detail & Related papers (2021-06-28T09:34:20Z)
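As referenced from the FedSkip entry above, its one-line summary only hints at the mechanism, so the following is a rough, assumption-laden sketch: on every skip_period-th round the server scatters (here, randomly permutes) the local models among clients instead of averaging them. The skip_period parameter and the permutation scatter rule are guesses for illustration, not the paper's algorithm.

```python
import random
import numpy as np

def fedskip_server_round(client_models, t, skip_period=3):
    """Illustrative FedSkip-style round: on every skip_period-th round,
    skip federated averaging and scatter local models across clients."""
    if t % skip_period == 0:
        # Scatter: each client continues training from a peer's model
        # rather than from the average, exposing it to other local optima.
        shuffled = client_models[:]
        random.shuffle(shuffled)
        return shuffled
    # Otherwise perform ordinary federated averaging (FedAvg-style mean).
    avg = np.mean(np.stack(client_models), axis=0)
    return [avg.copy() for _ in client_models]
```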
This list is automatically generated from the titles and abstracts of the papers on this site.