A Single-Loop Smoothed Gradient Descent-Ascent Algorithm for
  Nonconvex-Concave Min-Max Problems
        - URL: http://arxiv.org/abs/2010.15768v2
 - Date: Mon, 4 Jul 2022 23:17:33 GMT
 - Title: A Single-Loop Smoothed Gradient Descent-Ascent Algorithm for
  Nonconvex-Concave Min-Max Problems
 - Authors: Jiawei Zhang, Peijun Xiao, Ruoyu Sun and Zhi-Quan Luo
 - Abstract summary: Nonconvex-concave min-max problems arise in many applications, including minimizing a pointwise maximum of a set of nonconvex functions and robust adversarial training.
A smoothed GDA algorithm can achieve an $O(1/\epsilon^2)$ iteration complexity for minimizing the pointwise maximum of a finite collection of nonconvex functions.
 - Score: 33.83671897619922
 - License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 - Abstract:   Nonconvex-concave min-max problem arises in many machine learning
applications including minimizing a pointwise maximum of a set of nonconvex
functions and robust adversarial training of neural networks. A popular
approach to solve this problem is the gradient descent-ascent (GDA) algorithm
which unfortunately can exhibit oscillation in case of nonconvexity. In this
paper, we introduce a "smoothing" scheme which can be combined with GDA to
stabilize the oscillation and ensure convergence to a stationary solution. We
prove that the stabilized GDA algorithm can achieve an $O(1/\epsilon^2)$
iteration complexity for minimizing the pointwise maximum of a finite
collection of nonconvex functions. Moreover, the smoothed GDA algorithm
achieves an $O(1/\epsilon^4)$ iteration complexity for general
nonconvex-concave problems. Extensions of this stabilized GDA algorithm to
multi-block cases are presented. To the best of our knowledge, this is the
first algorithm to achieve $O(1/\epsilon^2)$ for a class of nonconvex-concave
problem. We illustrate the practical efficiency of the stabilized GDA algorithm
on robust training.
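
For intuition, here is a minimal sketch of a smoothed GDA loop in the spirit of the abstract: the primal step is taken against the smoothed objective f(x, y) + (p/2)||x - z||^2, where z is an auxiliary anchor that is slowly dragged toward the current iterate. The toy objective (a pointwise maximum of two nonconvex functions written as a max over the probability simplex), the step sizes, and the helper names are illustrative assumptions, not the paper's exact algorithm or constants.

    import numpy as np

    def smoothed_gda(grad_x, grad_y, proj_y, x0, y0,
                     c=0.05, alpha=0.05, beta=0.05, p=1.0, iters=2000):
        """Sketch of smoothed GDA: descent on x for f(x, y) + (p/2)||x - z||^2,
        projected ascent on y, and an exponentially averaged anchor z."""
        x, y, z = x0.copy(), y0.copy(), x0.copy()
        for _ in range(iters):
            x = x - c * (grad_x(x, y) + p * (x - z))   # primal step on the smoothed objective
            y = proj_y(y + alpha * grad_y(x, y))       # dual ascent step, projected onto Y
            z = z + beta * (x - z)                     # anchor slowly tracks x
        return x, y

    def proj_simplex(v):
        """Euclidean projection onto the probability simplex (sorting-based)."""
        u = np.sort(v)[::-1]
        css = np.cumsum(u)
        rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
        return np.maximum(v - (css[rho] - 1) / (rho + 1.0), 0.0)

    # Toy instance of min_x max_i f_i(x) with f_1(x) = (x^2 - 1)^2 and f_2(x) = sin(3x),
    # rewritten as min_x max_{y in simplex} y_1 f_1(x) + y_2 f_2(x).
    gx = lambda x, y: np.array([y[0] * 4 * x[0] * (x[0] ** 2 - 1) + y[1] * 3 * np.cos(3 * x[0])])
    gy = lambda x, y: np.array([(x[0] ** 2 - 1) ** 2, np.sin(3 * x[0])])
    x, y = smoothed_gda(gx, gy, proj_simplex, np.array([2.0]), np.array([0.5, 0.5]))
    print("approximate stationary x:", x, "dual weights y:", y)

The anchor term makes the primal subproblem strongly convex around z, which is what damps the oscillation plain GDA can exhibit on nonconvex-concave objectives; the paper's analysis chooses p, c, alpha and beta far more carefully than this sketch does.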
 
       
      
        Related papers
        - Two-Timescale Gradient Descent Ascent Algorithms for Nonconvex Minimax   Optimization [77.3396841985172]
We provide a unified analysis of two-timescale gradient descent ascent (TTGDA) algorithms for solving structured nonconvex minimax optimization problems.
Our contribution is to design TTGDA algorithms that are effective beyond the convex-concave setting.
arXiv  Detail & Related papers  (2024-08-21T20:14:54Z) - Zeroth-Order primal-dual Alternating Projection Gradient Algorithms for
  Nonconvex Minimax Problems with Coupled linear Constraints [3.898056433020865]
We propose two zeroth-order algorithms for nonconvex minimax problems with coupled linear constraints.
To the best of our knowledge, they are the first two zeroth-order algorithms with iteration-complexity guarantees for this class of nonconvex minimax problems.
arXiv  Detail & Related papers  (2024-01-26T11:22:13Z) - An accelerated first-order regularized momentum descent ascent algorithm   for stochastic nonconvex-concave minimax problems [0.4910937238451484]
We propose an accelerated first-order regularized momentum descent ascent algorithm (FORMDA) for solving stochastic nonconvex-concave minimax problems.
It achieves the best known complexity bound for single-loop algorithms solving such problems under a stationarity measure of the objective function.
arXiv  Detail & Related papers  (2023-10-24T01:45:11Z) - Decentralized Riemannian Algorithm for Nonconvex Minimax Problems [82.50374560598493]
Minimax optimization algorithms have been developed to solve many problems arising in training neural networks.
In this paper, we propose two types of decentralized Riemannian minimax algorithms.
For the stochastic setting, we propose DRSGDA and prove that it achieves a gradient-complexity guarantee.
arXiv  Detail & Related papers  (2023-02-08T01:42:45Z) - Adaptive Federated Minimax Optimization with Lower Complexities [82.51223883622552]
We propose an efficient adaptive federated minimax optimization algorithm (i.e., AdaFGDA) to solve these distributed minimax problems.
It builds on momentum-based variance-reduced and local-SGD techniques, and it flexibly incorporates various adaptive learning rates.
arXiv  Detail & Related papers  (2022-11-14T12:32:18Z) - A Single-Loop Gradient Descent and Perturbed Ascent Algorithm for
  Nonconvex Functional Constrained Optimization [27.07082875330508]
Nonconvex inequality-constrained problems can be used to model a number of machine learning problems, such as multi-class Neyman-Pearson classification.
Under such a mild regularity condition, it is difficult to balance reducing the objective loss against reducing the constraint violation.
In this paper, we propose a novel primal-dual algorithm for such inequality-constrained problems.
arXiv  Detail & Related papers  (2022-07-12T16:30:34Z) - Accelerated Proximal Alternating Gradient-Descent-Ascent for Nonconvex
  Minimax Machine Learning [12.069630105460766]
Alternating gradient descent-ascent (AltGDA) is an optimization algorithm that has been widely used for model training in various machine learning applications.
In this paper, we develop a single-loop and fast AltGDA-type algorithm to solve nonconvex minimax optimization problems.
arXiv  Detail & Related papers  (2021-12-22T04:33:27Z) - Derivative-free Alternating Projection Algorithms for General
  Nonconvex-Concave Minimax Problems [9.173866646584031]
In this paper, we propose a zeroth-order (derivative-free) algorithm for nonsmooth nonconvex-concave minimax problems.
We show that it can be applied to attack problems formulated as nonconvex-concave minimax optimization.
arXiv  Detail & Related papers  (2021-08-01T15:23:49Z) - Randomized Stochastic Variance-Reduced Methods for Stochastic Bilevel
  Optimization [62.87181271021217]
We consider nonconvex stochastic bilevel optimization (SBO) problems, which have many applications in machine learning.
This paper proposes fast randomized algorithms for nonconvex SBO problems.
arXiv  Detail & Related papers  (2021-05-05T18:28:42Z) - A Momentum-Assisted Single-Timescale Stochastic Approximation Algorithm
  for Bilevel Optimization [112.59170319105971]
We propose a new algorithm -- the Momentum-assisted Single-Timescale Stochastic Approximation (MSTSA) algorithm -- for tackling bilevel optimization problems.
MSTSA allows us to control the error in the iterations caused by an inaccurate solution of the lower-level subproblem.
arXiv  Detail & Related papers  (2021-02-15T07:10:33Z) - Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth
  Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale, single-loop algorithms that require only one data point per step.
Our results are expressed in the form of simultaneous primal- and dual-side convergence.
arXiv  Detail & Related papers  (2020-08-23T20:36:49Z) - Gradient Free Minimax Optimization: Variance Reduction and Faster
  Convergence [120.9336529957224]
In this paper, we consider the nonconvex strongly-concave setting of gradient-free minimax optimization problems.
We show that a novel zeroth-order variance-reduced descent algorithm achieves the best known query complexity.
arXiv  Detail & Related papers  (2020-06-16T17:55:46Z) - A Unified Single-loop Alternating Gradient Projection Algorithm for
  Nonconvex-Concave and Convex-Nonconcave Minimax Problems [8.797831153231664]
We develop an efficient single-loop algorithm for solving minimax problems with theoretical guarantees for general nonconvex-concave and convex-nonconcave objectives.
We show that the proposed algorithm can be used to solve both nonconvex-(strongly) concave and (strongly) convex-nonconcave minimax problems.
arXiv  Detail & Related papers  (2020-06-03T04:00:52Z) 
        This list is automatically generated from the titles and abstracts of the papers on this site.
       
     