Solving Non-Convex Non-Differentiable Min-Max Games using Proximal
Gradient Method
- URL: http://arxiv.org/abs/2003.08093v1
- Date: Wed, 18 Mar 2020 08:38:34 GMT
- Title: Solving Non-Convex Non-Differentiable Min-Max Games using Proximal
Gradient Method
- Authors: Babak Barazandeh and Meisam Razaviyayn
- Abstract summary: Min-max saddle point games appear in a wide range of applications in machine learning and signal processing.
We show that a simple multi-step proximal gradient descent-ascent algorithm converges to an $\epsilon$-first-order Nash equilibrium under a notion of stationarity stronger than existing ones in the literature.
- Score: 10.819408603463426
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Min-max saddle point games appear in a wide range of applications in
machine learning and signal processing. Despite their wide applicability, theoretical
studies are mostly limited to the special convex-concave structure. While some
recent works generalized these results to special smooth non-convex cases, our
understanding of non-smooth scenarios is still limited. In this work, we study a
special form of non-smooth min-max games in which the objective function is
(strongly) convex with respect to one player's decision variable. We
show that a simple multi-step proximal gradient descent-ascent algorithm
converges to an $\epsilon$-first-order Nash equilibrium of the min-max game with
the number of gradient evaluations being polynomial in $1/\epsilon$. We will
also show that our notion of stationarity is stronger than existing ones in the
literature. Finally, we evaluate the performance of the proposed algorithm
through an adversarial attack on a LASSO estimator.
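As a concrete illustration of the multi-step scheme described in the abstract, the sketch below runs multi-step proximal gradient descent-ascent on a toy bilinear problem with an $\ell_1$ (LASSO-style) term. The objective, step sizes, and function names are illustrative assumptions, not the paper's exact setup (in particular, the inner problem here is linear rather than strongly concave in $y$):

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def project_ball(v, radius=1.0):
    # proximal operator of the indicator of the Euclidean ball
    n = np.linalg.norm(v)
    return v if n <= radius else v * (radius / n)

def multistep_prox_gda(A, lam=0.1, eta_x=0.05, eta_y=0.1,
                       outer=500, inner=10, seed=0):
    # Toy problem: min_x max_{||y|| <= 1}  x^T A y + lam * ||x||_1
    # (nonsmooth in both players; concave -- here linear -- in y)
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    y = rng.standard_normal(A.shape[1])
    for _ in range(outer):
        # multiple proximal ascent steps on y (smooth gradient: A^T x)
        for _ in range(inner):
            y = project_ball(y + eta_y * (A.T @ x))
        # single proximal descent step on x (smooth gradient: A y)
        x = soft_threshold(x - eta_x * (A @ y), eta_x * lam)
    return x, y
```

For this toy objective the minimizing $x$ is the zero vector, so the descent iterates should shrink toward it while $y$ stays inside the unit ball.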
Related papers
- Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity [59.75300530380427]
We consider the problem of optimizing second-order smooth and strongly convex functions when the algorithm only has access to noisy evaluations of the objective function at the points it queries.
We provide the first tight characterization for the rate of the minimax simple regret by developing matching upper and lower bounds.
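The value-only access model above is commonly handled with randomized two-point gradient estimators. The sketch below shows one standard such estimator (the direction distribution, scaling, and names are illustrative conventions, not this paper's specific algorithm):

```python
import numpy as np

def two_point_gradient_estimate(f, x, mu=1e-4, rng=None):
    # Query f at x + mu*u and x - mu*u for a random unit direction u,
    # then rescale the finite difference by the dimension d so the
    # estimator is unbiased for the gradient in expectation.
    rng = rng or np.random.default_rng()
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    return d * (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
```

Averaging many such single-direction estimates recovers the true gradient up to the smoothing bias controlled by `mu`.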
arXiv Detail & Related papers (2024-06-28T02:56:22Z)
- Bridging Discrete and Backpropagation: Straight-Through and Beyond [62.46558842476455]
We propose a novel approach to approximate the gradient of parameters involved in generating discrete latent variables.
We propose ReinMax, which achieves second-order accuracy by integrating Heun's method, a second-order numerical method for solving ODEs.
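A minimal sketch of the straight-through idea this line of work builds on: the forward pass uses the hard one-hot choice, while the backward pass differentiates through the softmax instead. ReinMax's Heun-style second-order correction is not reproduced here, and all names are illustrative:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def straight_through_grad(theta, loss_grad_fn):
    # Forward: discrete one-hot argmax of softmax(theta).
    # Backward: pretend d(one-hot)/d(theta) equals d(softmax)/d(theta).
    p = softmax(theta)
    d = np.zeros_like(p)
    d[np.argmax(p)] = 1.0            # hard (discrete) forward choice
    g = loss_grad_fn(d)              # dL/dd evaluated at the hard choice
    J = np.diag(p) - np.outer(p, p)  # softmax Jacobian at theta
    return J @ g                     # straight-through gradient w.r.t. theta
```

Because the softmax Jacobian has rows summing to zero, the resulting gradient leaves the logits' mean unchanged, only shifting probability mass between categories.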
arXiv Detail & Related papers (2023-04-17T20:59:49Z)
- Stochastic Inexact Augmented Lagrangian Method for Nonconvex Expectation Constrained Optimization [88.0031283949404]
Many real-world problems have complicated nonconvex functional constraints and use a large number of data points.
Our proposed method outperforms an existing method with the previously best-known result.
arXiv Detail & Related papers (2022-12-19T14:48:54Z)
- Alternating Mirror Descent for Constrained Min-Max Games [44.46086335474311]
We study two-player bilinear zero-sum games with constrained strategy spaces.
We analyze the alternating mirror descent algorithm, in which each player takes turns to take action following the mirror descent algorithm for constrained optimization.
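For strategies constrained to the probability simplex, mirror descent with the entropy regularizer reduces to the multiplicative weights update. The toy alternating version below is an illustrative sketch of that setting, not the paper's exact scheme or analysis:

```python
import numpy as np

def mwu_step(p, grad, eta):
    # entropic mirror descent step on the simplex (multiplicative weights)
    w = p * np.exp(-eta * grad)
    return w / w.sum()

def alternating_mirror_descent(A, eta=0.1, T=2000):
    # Bilinear zero-sum game min_x max_y x^T A y on the simplex;
    # players alternate: the max player reacts to the min player's new iterate.
    m, n = A.shape
    x, y = np.ones(m) / m, np.ones(n) / n
    avg_x, avg_y = np.zeros(m), np.zeros(n)
    for _ in range(T):
        x = mwu_step(x, A @ y, eta)        # min player descends on A y
        y = mwu_step(y, -(A.T @ x), eta)   # max player ascends on A^T x
        avg_x += x
        avg_y += y
    return avg_x / T, avg_y / T
```

On matching pennies the individual iterates cycle around the uniform equilibrium, but the time-averaged strategies approach it.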
arXiv Detail & Related papers (2022-06-08T20:48:16Z)
- Convex-Concave Min-Max Stackelberg Games [0.0]
We introduce two first-order methods that solve a class of convex-concave min-max Stackelberg games.
We show that our methods converge in polynomial time.
We observe that the computation of competitive equilibria in Fisher markets also comprises a min-max Stackelberg game.
arXiv Detail & Related papers (2021-10-05T06:09:45Z)
- Minimax Optimization with Smooth Algorithmic Adversaries [59.47122537182611]
We propose a new algorithm for the min-player against smooth algorithms deployed by an adversary.
Our algorithm is guaranteed to make monotonic progress (having no limit cycles) and to find an appropriate stationary point in a polynomial number of gradient evaluations.
arXiv Detail & Related papers (2021-06-02T22:03:36Z)
- Direct-Search for a Class of Stochastic Min-Max Problems [0.0]
We investigate the use of derivative-free direct-search methods that access the objective only through an oracle.
We prove convergence of this technique under mild assumptions.
Our analysis is the first to address the convergence of a direct-search method for min-max objectives in a stochastic setting.
arXiv Detail & Related papers (2021-02-22T22:23:58Z)
- The Complexity of Constrained Min-Max Optimization [29.57458485068705]
We show that an approximate local min-max point of large enough approximation is guaranteed to exist.
More importantly, we show that finding an approximate fixed point of the Gradient Descent/Ascent dynamics is computationally hard (PPAD-complete).
Our result is the first to show an exponential separation between these two fundamental optimization problems.
arXiv Detail & Related papers (2020-09-21T05:54:12Z)
- Complexity of Finding Stationary Points of Nonsmooth Nonconvex Functions [84.49087114959872]
We provide the first non-asymptotic analysis for finding stationary points of nonsmooth, nonconvex functions.
In particular, we study Hadamard semi-differentiable functions, perhaps the largest class of nonsmooth functions.
arXiv Detail & Related papers (2020-02-10T23:23:04Z) - Accelerating Smooth Games by Manipulating Spectral Shapes [51.366219027745174]
We use matrix iteration theory to characterize acceleration in smooth games.
We describe gradient-based methods, such as extragradient, as transformations on the spectral shape.
We propose an optimal algorithm for bilinear games.
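The extragradient transformation mentioned above can be sketched on an unconstrained bilinear game, where plain simultaneous gradient descent-ascent is known to spiral outward while the extragradient lookahead makes the iterates contract to the saddle point at the origin (step size and setup are illustrative):

```python
import numpy as np

def extragradient_bilinear(A, eta=0.1, T=1000, seed=1):
    # min_x max_y x^T A y: the unique saddle point is (0, 0).
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    y = rng.standard_normal(A.shape[1])
    for _ in range(T):
        # extrapolation (lookahead) step
        x_half = x - eta * (A @ y)
        y_half = y + eta * (A.T @ x)
        # actual update uses gradients evaluated at the lookahead point
        x, y = x - eta * (A @ y_half), y + eta * (A.T @ x_half)
    return x, y
```

For step sizes below the reciprocal of the largest singular value of `A`, each extragradient step shrinks the distance to the saddle point by a constant factor.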
arXiv Detail & Related papers (2020-01-02T19:21:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.