Federated Multi-Sequence Stochastic Approximation with Local
Hypergradient Estimation
- URL: http://arxiv.org/abs/2306.01648v1
- Date: Fri, 2 Jun 2023 16:17:43 GMT
- Title: Federated Multi-Sequence Stochastic Approximation with Local
Hypergradient Estimation
- Authors: Davoud Ataee Tarzanagh, Mingchen Li, Pranay Sharma, Samet Oymak
- Abstract summary: We develop FedMSA, the first federated approximation algorithm for multiple coupled sequences (MSA)
FedMSA enables the provable estimation of hypergradients in BLO and MCO via local client updates.
We provide experiments that support our theory and demonstrate the empirical benefits of FedMSA.
- Score: 28.83712379658548
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Stochastic approximation with multiple coupled sequences (MSA) has found
broad applications in machine learning as it encompasses a rich class of
problems including bilevel optimization (BLO), multi-level compositional
optimization (MCO), and reinforcement learning (specifically, actor-critic
methods). However, designing provably-efficient federated algorithms for MSA
has been an elusive question even for the special case of double sequence
approximation (DSA). Towards this goal, we develop FedMSA which is the first
federated algorithm for MSA, and establish its near-optimal communication
complexity. As core novelties, (i) FedMSA enables the provable estimation of
hypergradients in BLO and MCO via local client updates, which has been a
notable bottleneck in prior theory, and (ii) our convergence guarantees are
sensitive to the heterogeneity-level of the problem. We also incorporate
momentum and variance reduction techniques to achieve further acceleration
leading to near-optimal rates. Finally, we provide experiments that support our
theory and demonstrate the empirical benefits of FedMSA. As an example, FedMSA
enables order-of-magnitude savings in communication rounds compared to prior
federated BLO schemes.
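To make the mechanics concrete, below is a minimal sketch (not the authors' FedMSA implementation) of federated double-sequence stochastic approximation for bilevel optimization: each client runs local updates on the inner (fast) sequence, forms a local hypergradient estimate for the outer (slow) sequence, and the server averages the results. The quadratic client objectives, step sizes, and closed-form response Jacobian used here are illustrative assumptions only.

```python
# Hedged sketch of federated double-sequence stochastic approximation (DSA)
# with local inner updates and local hypergradient estimation.
# Client objectives are hypothetical quadratics chosen so the hypergradient
# has a simple closed form; FedMSA itself is more general.
import numpy as np

rng = np.random.default_rng(0)
d = 5
num_clients, local_steps, rounds = 4, 10, 50
alpha, beta = 0.05, 0.05  # inner / outer step sizes

# Hypothetical heterogeneous client problems:
#   inner: g_i(x, y) = 0.5 * ||y - A_i x||^2   (lower level)
#   outer: f_i(x, y) = 0.5 * ||y - b_i||^2     (upper level)
A = [rng.normal(size=(d, d)) / d for _ in range(num_clients)]
b = [rng.normal(size=d) for _ in range(num_clients)]

x, y = np.zeros(d), np.zeros(d)
for _ in range(rounds):
    x_updates, y_locals = [], []
    for i in range(num_clients):
        yi = y.copy()
        for _ in range(local_steps):      # local inner (fast) sequence
            grad_y_g = yi - A[i] @ x      # noiseless gradient for clarity
            yi -= alpha * grad_y_g
        # Local hypergradient estimate for the outer (slow) sequence:
        # here y*(x) = A_i x, so dy*/dx = A_i and grad_x f = 0, giving
        # hypergrad = (dy*/dx)^T grad_y f evaluated at the local iterate yi.
        grad_y_f = yi - b[i]
        x_updates.append(A[i].T @ grad_y_f)
        y_locals.append(yi)
    # Server round: average local hypergradients and inner iterates.
    x -= beta * np.mean(x_updates, axis=0)
    y = np.mean(y_locals, axis=0)

print("outer variable x:", np.round(x, 3))
```

The point of the sketch is the communication pattern the paper targets: clients estimate hypergradients from purely local updates, and only the aggregated outer and inner iterates cross the network each round.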
Related papers
- Adaptive Swarm Mesh Refinement using Deep Reinforcement Learning with Local Rewards [12.455977048107671]
Adaptive Mesh Refinement (AMR) improves the Finite Element Method (FEM) by locally refining the mesh where higher resolution is needed.
We formulate AMR as a system of collaborating, homogeneous agents that iteratively split into multiple new agents.
Our approach, Adaptive Swarm Mesh Refinement (ASMR), offers efficient, stable optimization and generates highly adaptive meshes at user-defined resolution during inference.
arXiv Detail & Related papers (2024-06-12T17:26:54Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization arises in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and meta-learning (MAML).
This paper proposes federated algorithms for conditional stochastic optimization in the distributed setting.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - Federated Compositional Deep AUC Maximization [58.25078060952361]
We develop a novel federated learning method for imbalanced data by directly optimizing the area under the curve (AUC) score.
To the best of our knowledge, this is the first work to achieve such favorable theoretical results.
arXiv Detail & Related papers (2023-04-20T05:49:41Z) - Communication-Efficient Federated Bilevel Optimization with Local and
Global Lower Level Problems [118.00379425831566]
We propose a communication-efficient algorithm, named FedBiOAcc.
We prove that FedBiOAcc-Local converges at the same rate for this class of problems.
Empirical results show superior performance of our algorithms.
arXiv Detail & Related papers (2023-02-13T21:28:53Z) - Federated Expectation Maximization with heterogeneity mitigation and
variance reduction [0.0]
This paper introduces FedEM, the first federated extension of the Expectation Maximization (EM) algorithm for latent variable models.
To reduce communication complexity, FedEM appropriately defines complete-data sufficient statistics.
The results presented support our theoretical findings, as does an application to missing-value imputation for biodiversity monitoring.
arXiv Detail & Related papers (2021-11-03T09:14:34Z) - Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for mean-field control (MFC).
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z) - Tight Mutual Information Estimation With Contrastive Fenchel-Legendre
Optimization [69.07420650261649]
We introduce FLO, a novel, simple, and powerful contrastive MI estimator.
Empirically, our FLO estimator overcomes the limitations of its predecessors and learns more efficiently.
The utility of FLO is verified using an extensive set of benchmarks, which also reveals the trade-offs in practical MI estimation.
arXiv Detail & Related papers (2021-07-02T15:20:41Z) - Permutation Invariant Policy Optimization for Mean-Field Multi-Agent
Reinforcement Learning: A Principled Approach [128.62787284435007]
We propose the mean-field proximal policy optimization (MF-PPO) algorithm, at the core of which is a permutation-invariant actor-critic neural architecture.
We prove that MF-PPO attains the globally optimal policy at a sublinear rate of convergence.
In particular, we show that the inductive bias introduced by the permutation-invariant neural architecture enables MF-PPO to outperform existing competitors.
arXiv Detail & Related papers (2021-05-18T04:35:41Z)