Faster Adaptive Momentum-Based Federated Methods for Distributed
Composition Optimization
- URL: http://arxiv.org/abs/2211.01883v2
- Date: Thu, 30 Mar 2023 03:39:28 GMT
- Title: Faster Adaptive Momentum-Based Federated Methods for Distributed
Composition Optimization
- Authors: Feihu Huang
- Abstract summary: We propose a class of faster federated composition optimization algorithms (i.e., MFCGD and AdaMFCGD) to solve the nonconvex distributed composition problems.
In particular, our adaptive algorithm (i.e., AdaMFCGD) uses a unified adaptive matrix to flexibly incorporate various adaptive learning rates.
- Score: 14.579475552088692
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Federated Learning is a popular distributed learning paradigm in machine
learning. Meanwhile, composition optimization is an effective hierarchical
learning model, which appears in many machine learning applications such as
meta learning and robust learning. More recently, although a few federated
composition optimization algorithms have been proposed, they still suffer from
high sample and communication complexities. Thus, in this paper, we propose a
class of faster federated compositional optimization algorithms (i.e., MFCGD
and AdaMFCGD) to solve the nonconvex distributed composition problems, which
build on momentum-based variance-reduced and local-SGD techniques. In
particular, our adaptive algorithm (i.e., AdaMFCGD) uses a unified adaptive
matrix to flexibly incorporate various adaptive learning rates. Moreover, we
provide a solid theoretical analysis for our algorithms under the non-i.i.d.
setting, and prove that our algorithms simultaneously obtain lower sample and
communication complexities than the existing federated compositional
algorithms. Specifically, our algorithms obtain a lower sample complexity of
$\tilde{O}(\epsilon^{-3})$ with a lower communication complexity of
$\tilde{O}(\epsilon^{-2})$ in finding an $\epsilon$-stationary solution. We
conduct numerical experiments on robust federated learning and distributed
meta-learning tasks to demonstrate the efficiency of our algorithms.
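As an illustration of the ingredients named in the abstract, below is a minimal single-client sketch (not the authors' reference implementation of MFCGD/AdaMFCGD): a STORM-style momentum-based variance-reduced estimator for the composition gradient, scaled by a diagonal adaptive matrix. The toy problem, function names, and all hyper-parameter values are assumptions made only for illustration; in the federated setting, the local variable would additionally be averaged across clients every few local steps.
```python
# Illustrative sketch only -- not the paper's implementation of MFCGD/AdaMFCGD.
# Single client, composition problem  min_x f(g(x))  with a toy linear inner
# map and quadratic outer loss; all names and hyper-parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, p = 5, 3                        # dimension of x and of the inner map g(x)
W = rng.normal(size=(p, d))        # toy inner map:  g(x; xi) = W x + xi
b = rng.normal(size=p)             # toy outer loss: f(u) = 0.5 * ||u - b||^2

def g_sample(x, xi):               # stochastic inner function g(x; xi)
    return W @ x + xi

def g_jac_sample(x, zeta):         # stochastic Jacobian of g (constant W here)
    return W + zeta

def f_grad(u):                     # gradient of the outer function f
    return u - b

alpha, beta, eps = 0.1, 0.1, 1e-6  # momentum and adaptive-matrix parameters
x = np.zeros(d)
x_prev = x.copy()
u = g_sample(x, 0.01 * rng.normal(size=p))   # running estimate of g(x)
v = g_jac_sample(x, 0.0).T @ f_grad(u)       # running composite-gradient estimate
s = np.zeros(d)                              # second-moment tracker

for t in range(300):
    xi = 0.01 * rng.normal(size=p)           # the same sample is reused at x and
    zeta = 0.01 * rng.normal(size=(p, d))    # x_prev, as the correction requires
    # Momentum-based variance-reduced estimates of g(x) and of the full gradient
    u_new = g_sample(x, xi) + (1 - beta) * (u - g_sample(x_prev, xi))
    J, J_prev = g_jac_sample(x, zeta), g_jac_sample(x_prev, zeta)
    v_new = J.T @ f_grad(u_new) + (1 - alpha) * (v - J_prev.T @ f_grad(u))

    # Diagonal "adaptive matrix" A_t: an Adam-like second-moment scaling is one
    # of several adaptive learning rates such a unified matrix can encode.
    s = 0.9 * s + 0.1 * v_new ** 2
    A = np.sqrt(s) + eps
    eta = 0.1 / np.sqrt(t + 1)               # decaying step size (assumed)

    x_prev, u, v = x.copy(), u_new, v_new
    x = x - eta * v_new / A                  # local update; a federated run would
                                             # periodically average x across clients

print("final objective:", 0.5 * np.linalg.norm(W @ x - b) ** 2)
```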
Related papers
- Faster Adaptive Decentralized Learning Algorithms [24.379734053137597]
We propose a class of faster adaptive decentralized algorithms (i.e., AdaMDOS and AdaMDOF) for distributed nonconvex stochastic and finite-sum optimization.
Some experimental results demonstrate the efficiency of our algorithms.
arXiv Detail & Related papers (2024-08-19T08:05:33Z) - Nonconvex Federated Learning on Compact Smooth Submanifolds With Heterogeneous Data [23.661713049508375]
We propose an algorithm for federated learning over a compact smooth submanifold with data distributed across clients.
We show that our proposed algorithm converges sub-linearly to a neighborhood of a first-order optimal solution by using a novel analysis.
arXiv Detail & Related papers (2024-06-12T17:53:28Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes algorithms for federated conditional stochastic optimization.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce Stochastic UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z) - Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable parts separately, linearizing only the smooth parts (an illustrative sketch of this step is given after the related-papers list).
arXiv Detail & Related papers (2023-02-24T18:41:48Z) - Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on the momentum-based variance-reduction technique in the cross-silo FL setting.
arXiv Detail & Related papers (2022-12-02T05:07:50Z) - Adaptive Federated Minimax Optimization with Lower Complexities [82.51223883622552]
We propose an efficient adaptive minimax optimization algorithm (i.e., AdaFGDA) to solve these minimax problems.
It builds on momentum-based variance-reduced and local-SGD techniques, and it flexibly incorporates various adaptive learning rates.
arXiv Detail & Related papers (2022-11-14T12:32:18Z) - Fast Adaptive Federated Bilevel Optimization [14.579475552088692]
We propose a novel adaptive federated bilevel optimization algorithm (i.e., AdaFBiO) to solve the distributed bilevel optimization problems.
AdaFBiO uses the unified adaptive matrices to flexibly incorporate various adaptive learning rates to update variables in both UL and LL problems.
We provide a convergence analysis framework for our AdaFBiO algorithm, and prove it needs a sample complexity of $\tilde{O}(\epsilon^{-3})$ with a communication complexity of $\tilde{O}(\epsilon^{-2})$ to obtain an $\epsilon$-stationary solution.
arXiv Detail & Related papers (2022-11-02T13:55:47Z) - Iterative Algorithm Induced Deep-Unfolding Neural Networks: Precoding
Design for Multiuser MIMO Systems [59.804810122136345]
We propose a framework for deep-unfolding, where a general form of iterative algorithm induced deep-unfolding neural network (IAIDNN) is developed.
An efficient IAIDNN based on the structure of the classic weighted minimum mean-square error (WMMSE) iterative algorithm is developed.
We show that the proposed IAIDNN efficiently achieves the performance of the iterative WMMSE algorithm with reduced computational complexity.
arXiv Detail & Related papers (2020-06-15T02:57:57Z)
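As referenced in the "Linearization Algorithms for Fully Composite Optimization" entry above, the following is a minimal generalized conditional-gradient sketch of the linearization idea: only the smooth part of the objective is linearized, while the non-differentiable part is kept exactly in the subproblem over the compact convex set. The toy objective, the box constraint, and the step rule are assumptions for illustration, not that paper's algorithm.
```python
# Illustrative sketch only: generalized conditional-gradient step for a fully
# composite problem  min_{x in C} f(x) + g(x)  with smooth f and non-smooth g.
# Here C = [-1, 1]^d (box), f(x) = 0.5*||A x - b||^2, g(x) = lam*||x||_1;
# all of these choices are assumptions made for illustration.
import numpy as np

rng = np.random.default_rng(1)
d, n, lam = 10, 20, 0.1
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def f_grad(x):                       # gradient of the smooth part f
    return A.T @ (A @ x - b)

def linearized_oracle(c):
    # Solve  min_{y in [-1,1]^d}  <c, y> + lam*||y||_1  coordinate-wise:
    # the objective is piecewise linear, so each minimizer lies in {-1, 0, 1}.
    candidates = np.array([-1.0, 0.0, 1.0])
    vals = c[:, None] * candidates[None, :] + lam * np.abs(candidates)[None, :]
    return candidates[np.argmin(vals, axis=1)]

x = np.zeros(d)
for t in range(200):
    y = linearized_oracle(f_grad(x))         # linearize f only; keep g exact
    gamma = 2.0 / (t + 2)                    # standard conditional-gradient step
    x = (1 - gamma) * x + gamma * y          # stay inside the convex compact set

print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum())
```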