Federated Learning via Inexact ADMM
- URL: http://arxiv.org/abs/2204.10607v4
- Date: Sun, 24 Sep 2023 09:19:34 GMT
- Title: Federated Learning via Inexact ADMM
- Authors: Shenglong Zhou and Geoffrey Ye Li
- Abstract summary: In this paper, we develop an inexact alternating direction method of multipliers (ADMM) for federated learning.
It is both computation- and communication-efficient, capable of combating the stragglers' effect, and convergent under mild conditions.
It exhibits high numerical performance compared with several state-of-the-art algorithms for federated learning.
- Score: 46.99210047518554
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the crucial issues in federated learning is how to develop efficient
optimization algorithms. Most of the current ones require full device
participation and/or impose strong assumptions for convergence. Different from
the widely-used gradient descent-based algorithms, in this paper, we develop an
inexact alternating direction method of multipliers (ADMM), which is both
computation- and communication-efficient, capable of combating the stragglers'
effect, and convergent under mild conditions. Furthermore, it exhibits high
numerical performance compared with several state-of-the-art algorithms for
federated learning.
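For intuition, the following is a minimal sketch of the kind of consensus-ADMM loop the abstract describes. It is a hedged reconstruction, not the paper's exact algorithm: the inexact local solver (a few gradient steps), the step sizes, and the participate hook for partial device participation are all illustrative assumptions.

```python
import numpy as np

def local_update(x, z, pi, grad_f, sigma, lr=0.1, steps=5):
    """Inexactly solve the local subproblem
    min_x f_i(x) + <pi, x - z> + (sigma/2)||x - z||^2
    with a few gradient steps instead of an exact minimization."""
    for _ in range(steps):
        x = x - lr * (grad_f(x) + pi + sigma * (x - z))
    return x

def inexact_admm_fl(grad_fs, dim, sigma=1.0, rounds=50, participate=None):
    """Consensus ADMM over m clients; participate maps a round index to
    the list of active clients, so stragglers can simply sit a round out."""
    m = len(grad_fs)
    z = np.zeros(dim)                        # global (consensus) model
    xs = [np.zeros(dim) for _ in range(m)]   # local models
    pis = [np.zeros(dim) for _ in range(m)]  # dual variables
    for k in range(rounds):
        active = participate(k) if participate else range(m)
        for i in active:                     # local primal and dual updates
            xs[i] = local_update(xs[i], z, pis[i], grad_fs[i], sigma)
            pis[i] = pis[i] + sigma * (xs[i] - z)
        # the server aggregates primal and scaled dual information
        z = np.mean([xs[i] + pis[i] / sigma for i in range(m)], axis=0)
    return z
```

A convex quadratic per client, e.g. grad_fs[i] = lambda x: A[i] @ x - b[i], is enough to watch the consensus iterate z converge.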
Related papers
- Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization arises in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes algorithms for conditional stochastic optimization in the distributed federated learning setting.
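For reference, the standard conditional stochastic optimization objective (a textbook form assumed here, not quoted from the paper) nests a conditional inner expectation inside the outer loss, which is what makes unbiased gradient estimation nontrivial:

```latex
% Standard conditional stochastic optimization objective; the inner
% expectation over \eta is conditional on the outer sample \xi.
\min_{x \in \mathbb{R}^d} F(x)
  = \mathbb{E}_{\xi}\Big[ f_{\xi}\big( \mathbb{E}_{\eta \mid \xi}[\, g_{\eta}(x, \xi) \,] \big) \Big]
```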
arXiv Detail & Related papers (2023-10-04T01:47:37Z)
- Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce Stochastic UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
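As a rough picture of algorithm unrolling in general (a generic toy, not the SURF architecture), a fixed number of optimizer iterations are treated as layers whose hyperparameters become learnable:

```python
import numpy as np

class UnrolledSGD:
    """Toy unrolled optimizer: T gradient steps act as T 'layers',
    each with its own learnable step size (illustrative only)."""
    def __init__(self, T=5):
        self.step_sizes = np.full(T, 0.1)  # per-layer learnable parameters

    def forward(self, grad_f, x0):
        x = x0
        for lr in self.step_sizes:         # one 'layer' per unrolled step
            x = x - lr * grad_f(x)
        return x
```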
arXiv Detail & Related papers (2023-05-24T17:26:22Z)
- Federated Compositional Deep AUC Maximization [58.25078060952361]
We develop a novel federated learning method for imbalanced data by directly optimizing the area under curve (AUC) score.
To the best of our knowledge, this is the first work to achieve such favorable theoretical results.
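As background on what directly optimizing the AUC can mean, here is a common pairwise surrogate (a generic illustration; the paper itself optimizes a compositional deep AUC objective with federated guarantees):

```python
import numpy as np

def pairwise_auc_surrogate(scores_pos, scores_neg, margin=1.0):
    """Squared-hinge surrogate for 1 - AUC: penalize every
    positive/negative score pair separated by less than margin."""
    diffs = scores_pos[:, None] - scores_neg[None, :]  # all pos/neg pairs
    return np.mean(np.maximum(0.0, margin - diffs) ** 2)
```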
arXiv Detail & Related papers (2023-04-20T05:49:41Z)
- Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance-reduction technique in the cross-silo FL setting.
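Momentum-based variance reduction is typically of the STORM form (a standard estimator assumed here for illustration, not quoted from the paper):

```latex
% STORM-style estimator: reuse the previous estimate m_{k-1}, corrected
% by a gradient difference evaluated on the fresh sample \xi_k.
m_k = \nabla f(x_k; \xi_k) + (1 - a_k)\big( m_{k-1} - \nabla f(x_{k-1}; \xi_k) \big)
```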
arXiv Detail & Related papers (2022-12-02T05:07:50Z)
- Faster Adaptive Momentum-Based Federated Methods for Distributed Composition Optimization [14.579475552088692]
We propose a class of faster federated composition optimization algorithms (i.e., MFCGD and AdaMFCGD) to solve nonconvex distributed composition problems.
In particular, our adaptive algorithm (i.e., AdaMFCGD) uses a unified adaptive matrix to flexibly incorporate various adaptive learning rates.
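The unified adaptive matrix idea can be stated generically (notation assumed here, not taken from the paper): a positive-definite matrix A_k preconditions a momentum-based gradient estimate m_k, and different choices of A_k recover different adaptive learning rates.

```latex
% Generic adaptive-matrix update; choosing, e.g.,
% A_k = \mathrm{diag}(\sqrt{v_k}) + \rho I recovers an Adam-style rate.
x_{k+1} = x_k - \gamma\, A_k^{-1} m_k
```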
arXiv Detail & Related papers (2022-11-03T15:17:04Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics based on the population loss that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Exact Penalty Method for Federated Learning [34.70820239954457]
Federated learning has burgeoned recently in machine learning, giving rise to a variety of research topics.
In this paper, we deploy an exact penalty method for federated learning and propose an algorithm, FedEPM, that tackles four critical issues in federated learning.
It is proven to be convergent and is empirically shown to achieve high numerical performance.
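For orientation, an exact penalty method typically replaces the consensus constraint with a nonsmooth penalty term (a generic reformulation assumed here, not necessarily the paper's exact objective); for sufficiently large lambda, minimizers of the penalized problem satisfy the original constraint:

```latex
% Consensus-constrained FL problem and its exact-penalty counterpart:
\min_{x_1,\dots,x_m,\,z} \sum_{i=1}^{m} f_i(x_i)
  \ \ \text{s.t.}\ \ x_i = z
\quad\Longrightarrow\quad
\min_{x_1,\dots,x_m,\,z} \sum_{i=1}^{m} f_i(x_i)
  + \lambda \sum_{i=1}^{m} \| x_i - z \|
```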
arXiv Detail & Related papers (2022-08-23T23:33:38Z)
- FedGiA: An Efficient Hybrid Algorithm for Federated Learning [39.777714678378736]
Federated learning has shown its advances recently but is still facing many challenges.
We propose a hybrid federated learning algorithm (FedGiA) that combines gradient descent and the inexact alternating direction method of multipliers.
arXiv Detail & Related papers (2022-05-03T11:56:33Z)
- Communication-Efficient Stochastic Zeroth-Order Optimization for Federated Learning [28.65635956111857]
Federated learning (FL) enables edge devices to collaboratively train a global model without sharing their private data.
To enhance the training efficiency of FL, various algorithms have been proposed, ranging from first-order to zeroth-order methods.
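For context, zeroth-order means the gradient is estimated from function values alone; the classic two-point estimator (a textbook form, used here only for illustration) looks like this:

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, num_dirs=10, seed=0):
    """Two-point zeroth-order gradient estimator: query f at x +/- mu*u
    along random Gaussian directions u, so no gradients are computed."""
    rng = np.random.default_rng(seed)
    g = np.zeros_like(x, dtype=float)
    for _ in range(num_dirs):
        u = rng.standard_normal(x.size)
        g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return g / num_dirs
```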
arXiv Detail & Related papers (2022-01-24T08:56:06Z)
- Fast Convergence Algorithm for Analog Federated Learning [30.399830943617772]
We propose an AirComp-based FedSplit algorithm for efficient analog federated learning over wireless channels.
We prove that the proposed algorithm linearly converges to the optimal solutions under the assumption that the objective function is strongly convex and smooth.
Our algorithm is theoretically and experimentally verified to be much more robust to ill-conditioned problems, with faster convergence compared with other benchmark FL algorithms.
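Linear convergence here means geometric decay of the distance to the optimizer (a generic statement of the property, with notation assumed):

```latex
% Linear (geometric) convergence under strong convexity and smoothness:
\| x^{k} - x^{\star} \| \le \rho^{k} \, \| x^{0} - x^{\star} \|,
\qquad \rho \in (0, 1)
```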
arXiv Detail & Related papers (2020-10-30T10:59:49Z)