PRECISION: Decentralized Constrained Min-Max Learning with Low
Communication and Sample Complexities
- URL: http://arxiv.org/abs/2303.02532v1
- Date: Sun, 5 Mar 2023 00:26:10 GMT
- Title: PRECISION: Decentralized Constrained Min-Max Learning with Low
Communication and Sample Complexities
- Authors: Zhuqing Liu, Xin Zhang, Songtao Lu, and Jia Liu
- Abstract summary: We study decentralized multi-agent learning for min-max optimization problems with domain constraints.
We propose an algorithm called PRECISION, and an adaptive-batch-size variant PRECISION$^+$, that enjoy an $O(1/T)$ convergence rate.
- Score: 25.153506493249854
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, min-max optimization problems have received increasing attention
due to their wide range of applications in machine learning (ML). However, most
existing min-max solution techniques are either single-machine or distributed
algorithms coordinated by a central server. In this paper, we focus on the
decentralized min-max optimization for learning with domain constraints, where
multiple agents collectively solve a nonconvex-strongly-concave min-max saddle
point problem without coordination from any server. Decentralized min-max
optimization problems with domain constraints underpin many important ML
applications, including multi-agent ML fairness assurance and policy
evaluations in multi-agent reinforcement learning. We propose an algorithm
called PRECISION (proximal gradient-tracking and stochastic recursive variance
reduction) that enjoys a convergence rate of $O(1/T)$, where $T$ is the maximum
number of iterations. To further reduce sample complexity, we propose
PRECISION$^+$ with an adaptive batch size technique. We show that the fast
$O(1/T)$ convergence of PRECISION and PRECISION$^+$ to an $\epsilon$-stationary
point implies $O(\epsilon^{-2})$ communication complexity and
$O(m\sqrt{n}\epsilon^{-2})$ sample complexity, where $m$ is the number of
agents and $n$ is the size of the dataset at each agent. To our knowledge, this is
the first work that achieves $O(\epsilon^{-2})$ in both sample and
communication complexities in decentralized min-max learning with domain
constraints. Our experiments also corroborate the theoretical results.
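The abstract describes PRECISION as combining three ingredients: proximal updates to handle the domain constraints, gradient tracking to estimate the network-wide gradient, and stochastic recursive variance reduction to cut sample complexity. As a rough illustration only (not the authors' exact algorithm), the Python sketch below shows how these ingredients typically fit together in one agent's local iteration; the projection operator, uniform mixing weights, state layout, and step sizes are all illustrative assumptions.

```python
import numpy as np

def proj_ball(v, radius=1.0):
    """Euclidean projection onto an l2-ball, standing in for the proximal
    operator of the domain constraint (an illustrative choice)."""
    nrm = np.linalg.norm(v)
    return v if nrm <= radius else v * (radius / nrm)

def local_update(state, nbr_states, stoch_grad, batch, eta_x=0.01, eta_y=0.02):
    """One agent's iteration: recursive variance-reduced local gradients,
    gradient tracking over neighbors, consensus mixing, and proximal
    descent/ascent steps.

    state      : dict with keys x, y, x_prev, y_prev, v_x, v_y, d_x, d_y
    nbr_states : list of neighbors' dicts (their x, y, d_x, d_y are used)
    stoch_grad : callable(x, y, batch) -> (grad_x, grad_y) on a minibatch
    All names, weights, and step sizes are illustrative assumptions.
    """
    x, y = state["x"], state["y"]

    # SARAH-style recursive variance reduction: evaluate the same minibatch
    # at the current and previous iterates and add the difference to the
    # running estimate. (In practice the estimate is periodically restarted
    # with a large batch; PRECISION$^+$ adapts that batch size.)
    gx, gy = stoch_grad(x, y, batch)
    gx_prev, gy_prev = stoch_grad(state["x_prev"], state["y_prev"], batch)
    v_x = state["v_x"] + (gx - gx_prev)
    v_y = state["v_y"] + (gy - gy_prev)

    # Gradient tracking: average the trackers received from neighbors
    # (uniform weights for simplicity) and add the local innovation.
    d_x = np.mean([state["d_x"]] + [s["d_x"] for s in nbr_states], axis=0) + (v_x - state["v_x"])
    d_y = np.mean([state["d_y"]] + [s["d_y"] for s in nbr_states], axis=0) + (v_y - state["v_y"])

    # Consensus on the iterates, then a proximal descent step on x
    # (the min player) and a proximal ascent step on y (the max player).
    x_mix = np.mean([x] + [s["x"] for s in nbr_states], axis=0)
    y_mix = np.mean([y] + [s["y"] for s in nbr_states], axis=0)
    x_new = proj_ball(x_mix - eta_x * d_x)
    y_new = proj_ball(y_mix + eta_y * d_y)

    return {"x": x_new, "y": y_new, "x_prev": x, "y_prev": y,
            "v_x": v_x, "v_y": v_y, "d_x": d_x, "d_y": d_y}
```

Under this template, each iteration uses one round of neighbor communication, so an $O(1/T)$ convergence rate that reaches an $\epsilon$-stationary point after $T = O(\epsilon^{-2})$ iterations translates directly into the $O(\epsilon^{-2})$ communication complexity stated in the abstract, while the variance-reduction and adaptive-batch machinery keeps the per-agent sample cost at $O(\sqrt{n}\epsilon^{-2})$.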
Related papers
- Near-Optimal Online Learning for Multi-Agent Submodular Coordination: Tight Approximation and Communication Efficiency [52.60557300927007]
We present an $\textbf{MA-OSMA}$ algorithm that transfers the discrete submodular problem into a continuous optimization problem.
We also introduce a projection-free $\textbf{MA-OSEA}$ algorithm, which effectively utilizes the KL divergence by mixing in a uniform distribution.
Our algorithms significantly improve the $(\frac{1}{1+c})$-approximation provided by the state-of-the-art OSG algorithm.
arXiv Detail & Related papers (2025-02-07T15:57:56Z) - Optimal Decentralized Smoothed Online Convex Optimization [9.449153668916098]
We study the multi-agent Smoothed Online Convex Optimization (SOCO) problem, where $N$ agents interact through a communication graph.
We propose the first truly decentralized algorithm, ACORD, for multi-agent SOCO and prove its optimality.
Our results hold even when the communication graph changes arbitrarily and adaptively over time.
arXiv Detail & Related papers (2024-11-13T05:59:04Z) - Solving a Class of Non-Convex Minimax Optimization in Federated Learning [84.98927714326908]
Minimax problems arise throughout machine learning applications, ranging from model training to large-scale learning.
We propose a class of algorithms for nonconvex minimax problems that reduces the complexity to $O(\varepsilon^{-6})$.
We prove that FedSGDA-M achieves the best-known sample complexity and the best-known communication complexity among existing methods.
arXiv Detail & Related papers (2023-10-05T15:48:41Z) - Variance-reduced accelerated methods for decentralized stochastic
double-regularized nonconvex strongly-concave minimax problems [7.5573375809946395]
We consider a network of $m$ computing agents that collaborate via peer-to-peer communications.
Our algorithmic framework introduces a Lagrangian multiplier to eliminate the consensus constraint on the dual variable.
To the best of our knowledge, this is the first work that provides convergence guarantees for NCSC minimax problems with general nonsmooth regularizers applied to both the primal and dual variables.
arXiv Detail & Related papers (2023-07-14T01:32:16Z) - Cooperative Multi-Agent Reinforcement Learning: Asynchronous
Communication and Linear Function Approximation [77.09836892653176]
We study multi-agent reinforcement learning in the setting of episodic Markov decision processes.
We propose a provably efficient algorithm based on value iteration that enables asynchronous communication.
We show that a minimal $\Omega(dM)$ communication complexity is required to improve the performance through collaboration.
arXiv Detail & Related papers (2023-05-10T20:29:29Z) - Decentralized Riemannian Algorithm for Nonconvex Minimax Problems [82.50374560598493]
Minimax algorithms for neural networks have been developed to solve many problems.
In this paper, we propose two types of decentralized Riemannian minimax algorithms.
For the stochastic setting, we propose DRSGDA and establish its gradient complexity.
arXiv Detail & Related papers (2023-02-08T01:42:45Z) - INTERACT: Achieving Low Sample and Communication Complexities in
Decentralized Bilevel Learning over Networks [24.02913189682224]
Decentralized bilevel optimization problems have received increasing attention in the networking and machine learning communities.
Low sample and communication complexities are two fundamental challenges that remain under-explored.
Our work is the first that achieves both low sample and communication complexities for solving decentralized bilevel optimization problems over networks.
arXiv Detail & Related papers (2022-07-27T04:19:28Z) - Multi-Agent Off-Policy TD Learning: Finite-Time Analysis with
Near-Optimal Sample Complexity and Communication Complexity [13.100926925535578]
We develop two decentralized TD with correction (TDC) algorithms for multi-agent off-policy TD learning.
Our algorithms preserve full privacy of the actions, policies and rewards of the agents, and adopt mini-batch sampling to reduce the sampling variance and communication frequency.
arXiv Detail & Related papers (2021-03-24T12:48:08Z) - The Min-Max Complexity of Distributed Stochastic Convex Optimization
with Intermittent Communication [61.60069865891783]
We resolve the min-max complexity of distributed stochastic convex optimization (up to a log factor) in the intermittent communication setting.
We present a novel lower bound with a matching upper bound that establishes an optimal algorithm.
arXiv Detail & Related papers (2021-02-02T16:18:29Z) - Nonconvex Zeroth-Order Stochastic ADMM Methods with Lower Function Query
Complexity [109.54166127479093]
Zeroth-order (a.k.a. derivative-free) methods are a class of effective optimization methods for solving machine learning problems.
In this paper, we propose a class of faster zeroth-order stochastic alternating direction method of multipliers (ADMM) methods to solve nonconvex finite-sum problems.
We show that the proposed methods can achieve a lower function query complexity for finding an $\epsilon$-stationary point.
At the same time, we propose a class of faster zeroth-order online ADMM methods.
arXiv Detail & Related papers (2019-07-30T02:21:43Z)
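The last entry above relies on zeroth-order (derivative-free) optimization, where gradients are replaced by estimates built purely from function evaluations. A minimal sketch of the standard two-point estimator that such methods build on is given below; the function handle, smoothing radius, and Gaussian direction are illustrative assumptions, not the specific estimator of that paper.

```python
import numpy as np

def two_point_grad_estimate(f, x, mu=1e-4, rng=None):
    """Two-point zeroth-order gradient estimator: approximate grad f(x)
    along a random Gaussian direction u using only function values.
    Zeroth-order ADMM-type methods typically average several such
    estimates per update to control the variance."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u

# Example: for f(z) = 0.5*||z||^2 the true gradient at x0 is x0 itself,
# so averaging many estimates recovers approximately x0.
f = lambda z: 0.5 * float(np.dot(z, z))
x0 = np.ones(5)
est = np.mean([two_point_grad_estimate(f, x0) for _ in range(2000)], axis=0)
print(est)  # close to [1, 1, 1, 1, 1]
```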