Tackling the Objective Inconsistency Problem in Heterogeneous Federated
Optimization
- URL: http://arxiv.org/abs/2007.07481v1
- Date: Wed, 15 Jul 2020 05:01:23 GMT
- Title: Tackling the Objective Inconsistency Problem in Heterogeneous Federated
Optimization
- Authors: Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, H. Vincent Poor
- Abstract summary: This paper provides a framework to analyze the convergence of federated heterogeneous optimization algorithms.
We propose FedNova, a normalized averaging method that eliminates objective inconsistency while preserving fast error convergence.
- Score: 93.78811018928583
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In federated optimization, heterogeneity in the clients' local datasets and
computation speeds results in large variations in the number of local updates
performed by each client in each communication round. Naive weighted
aggregation of such models causes objective inconsistency, that is, the global
model converges to a stationary point of a mismatched objective function which
can be arbitrarily different from the true objective. This paper provides a
general framework to analyze the convergence of federated heterogeneous
optimization algorithms. It subsumes previously proposed methods such as FedAvg
and FedProx and provides the first principled understanding of the solution
bias and the convergence slowdown due to objective inconsistency. Using
insights from this analysis, we propose FedNova, a normalized averaging method
that eliminates objective inconsistency while preserving fast error
convergence.
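To make the inconsistency concrete: with client weights $p_i$ and $\tau_i$ local steps per round, naive weighted aggregation of plain local-SGD updates effectively optimizes a surrogate objective $\tilde{F}(x) = \sum_i \frac{p_i \tau_i}{\sum_j p_j \tau_j} F_i(x)$ rather than the true objective $F(x) = \sum_i p_i F_i(x)$, so clients that run more local steps are implicitly over-weighted. The sketch below contrasts naive aggregation with FedNova-style normalized averaging on a toy two-client quadratic problem; it assumes plain local SGD and one particular effective step count ($\tau_{\mathrm{eff}} = \sum_i p_i \tau_i$), and the function names and demo setup are illustrative rather than the paper's exact general form (which normalizes by the accumulated magnitude of each client's local solver updates).

```python
import numpy as np

def local_sgd(x_global, grad_fn, num_steps, lr):
    """Run plain local SGD from the global model and return the
    cumulative change Delta = x_global - x_local."""
    x = x_global.copy()
    for _ in range(num_steps):
        x = x - lr * grad_fn(x)
    return x_global - x

def naive_weighted_aggregate(x_global, deltas, weights):
    """FedAvg-style aggregation: clients that ran more local steps send
    larger cumulative changes, which implicitly over-weights them."""
    return x_global - sum(w * d for w, d in zip(weights, deltas))

def normalized_aggregate(x_global, deltas, weights, local_steps):
    """FedNova-style normalized averaging (sketch): divide each client's
    cumulative change by its number of local steps before averaging, then
    rescale by an effective step count (here tau_eff = sum_i p_i * tau_i)."""
    normalized = [d / tau for d, tau in zip(deltas, local_steps)]
    tau_eff = sum(w * tau for w, tau in zip(weights, local_steps))
    return x_global - tau_eff * sum(w * d for w, d in zip(weights, normalized))

if __name__ == "__main__":
    # Two clients with quadratic objectives F_i(x) = 0.5 * ||x - c_i||^2 and
    # equal data weights; the true optimum of 0.5*F_1 + 0.5*F_2 is the origin.
    centers = [np.array([1.0, 1.0]), np.array([-1.0, -1.0])]
    grads = [lambda x, c=c: x - c for c in centers]
    weights, local_steps, lr = [0.5, 0.5], [20, 1], 0.01

    x_naive, x_nova = np.zeros(2), np.zeros(2)
    for _ in range(300):  # communication rounds
        d_naive = [local_sgd(x_naive, g, t, lr) for g, t in zip(grads, local_steps)]
        x_naive = naive_weighted_aggregate(x_naive, d_naive, weights)
        d_nova = [local_sgd(x_nova, g, t, lr) for g, t in zip(grads, local_steps)]
        x_nova = normalized_aggregate(x_nova, d_nova, weights, local_steps)

    print("naive weighted aggregation:", x_naive)  # pulled toward the 20-step client
    print("normalized averaging      :", x_nova)   # close to the true optimum (origin)
```

In this toy run, the naive rule settles near the 20-step client's optimum while normalized averaging lands close to the true optimum at the origin (a small residual bias remains from client drift at a finite learning rate).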
Related papers
- Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum [56.37522020675243]
We provide the first proof of convergence for normalized error feedback algorithms across a wide range of machine learning problems.
We show that due to their larger allowable stepsizes, our new normalized error feedback algorithms outperform their non-normalized counterparts on various tasks.
arXiv Detail & Related papers (2024-10-22T10:19:27Z) - Federated Communication-Efficient Multi-Objective Optimization [27.492821176616815]
We propose FedCMOO, a novel communication-efficient federated multi-objective optimization (FMOO) algorithm that improves the error convergence performance of the model compared to existing approaches.
In addition, we introduce a variant of FedCMOO that allows users to specify a preference over the objectives in terms of a desired ratio of the final objective values.
arXiv Detail & Related papers (2024-10-21T18:09:22Z) - Asynchronous Federated Stochastic Optimization for Heterogeneous Objectives Under Arbitrary Delays [0.0]
Federated learning (FL) was recently proposed to securely train models with data held over multiple locations ("clients").
Two major challenges hindering the performance of FL algorithms are long training times caused by straggling clients, and a decline in model accuracy under non-iid local data distributions ("client drift").
We propose and analyze Asynchronous Exact Averaging (AREA), a new (sub)gradient algorithm that utilizes communication to speed up convergence and enhance scalability, and employs client memory to correct the client drift caused by variations in client update frequencies.
arXiv Detail & Related papers (2024-05-16T14:22:49Z) - Dynamic Regularized Sharpness Aware Minimization in Federated Learning: Approaching Global Consistency and Smooth Landscape [59.841889495864386]
In federated learning (FL), a cluster of local clients is coordinated by a global server.
Clients are prone to overfitting to their own local optima, which can deviate substantially from the global objective.
FedSMOO adopts a dynamic regularizer to steer the local optima towards the global objective.
Our theoretical analysis indicates that FedSMOO achieves a fast $\mathcal{O}(1/T)$ convergence rate with a low generalization bound.
arXiv Detail & Related papers (2023-05-19T10:47:44Z) - Adaptive Federated Learning via New Entropy Approach [14.595709494370372]
Federated Learning (FL) has emerged as a prominent distributed machine learning framework.
In this paper, we propose an adaptive FEDerated learning algorithm based on ENTropy theory (FedEnt) to alleviate the parameter deviation among heterogeneous clients.
arXiv Detail & Related papers (2023-03-27T07:57:04Z) - Federated Covariate Shift Adaptation for Missing Target Output Values [1.1374487003189466]
In this paper, we extend the most recent multi-source covariate shift algorithm to the framework of federated learning.
We construct a weighted model for the target task and propose a federated covariate shift adaptation algorithm that works well in our setting.
arXiv Detail & Related papers (2023-02-28T09:15:41Z) - Federated Learning as Variational Inference: A Scalable Expectation
Propagation Approach [66.9033666087719]
This paper extends the inference view and describes a variational inference formulation of federated learning.
We apply FedEP on standard federated learning benchmarks and find that it outperforms strong baselines in terms of both convergence speed and accuracy.
arXiv Detail & Related papers (2023-02-08T17:58:11Z) - FedSkip: Combatting Statistical Heterogeneity with Federated Skip
Aggregation [95.85026305874824]
We introduce a data-driven approach called FedSkip to improve the client optima by periodically skipping federated averaging and scattering local models across devices.
We conduct extensive experiments on a range of datasets to demonstrate that FedSkip achieves much higher accuracy, better aggregation efficiency, and competitive communication efficiency.
arXiv Detail & Related papers (2022-12-14T13:57:01Z) - DRFLM: Distributionally Robust Federated Learning with Inter-client
Noise via Local Mixup [58.894901088797376]
Federated learning has emerged as a promising approach for training a global model using data from multiple organizations without leaking their raw data.
We propose a general framework to solve the above two challenges simultaneously.
We provide comprehensive theoretical analysis including robustness analysis, convergence analysis, and generalization ability.
arXiv Detail & Related papers (2022-04-16T08:08:29Z) - Federated Learning via Posterior Averaging: A New Perspective and
Practical Algorithms [21.11885845002748]
We present an alternative perspective and formulate federated learning as a posterior inference problem.
The goal is to infer a global posterior distribution by having client devices each infer the posterior of their local data.
While exact inference is often intractable, this perspective provides a principled way to search for global optima in federated settings.
arXiv Detail & Related papers (2020-10-11T15:55:45Z)
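For the posterior-averaging entry above, a toy Gaussian illustration of the underlying identity may help: under a flat prior, the global posterior is proportional to the product of the clients' local posteriors, and a product of Gaussians combines by adding precisions and precision-weighted means. This is a sketch of that identity only, not of the paper's inference algorithm; all values are illustrative.

```python
import numpy as np

# Toy illustration (assumption: flat prior, Gaussian local posteriors):
# the global posterior is proportional to the product of local posteriors,
# and a product of Gaussians combines by adding precisions and
# precision-weighted means.
local_means = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([2.0, 1.0])]
local_precisions = [np.eye(2) * p for p in (4.0, 1.0, 2.0)]  # inverse covariances

global_precision = sum(local_precisions)
global_mean = np.linalg.solve(
    global_precision,
    sum(P @ m for P, m in zip(local_precisions, local_means)),
)
print("global posterior mean:", global_mean)
print("global posterior covariance:\n", np.linalg.inv(global_precision))
```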
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.