Higher-order methods for convex-concave min-max optimization and
monotone variational inequalities
- URL: http://arxiv.org/abs/2007.04528v1
- Date: Thu, 9 Jul 2020 03:12:33 GMT
- Title: Higher-order methods for convex-concave min-max optimization and
monotone variational inequalities
- Authors: Brian Bullins and Kevin A. Lai
- Abstract summary: We provide improved convergence rates for constrained convex-concave min-max problems and monotone variational inequalities with higher-order smoothness.
For $p>2$, our results improve upon the iteration complexity of the first-order Mirror Prox method of Nemirovski.
We further instantiate our entire algorithm in the unconstrained $p=2$ case.
- Score: 7.645449711892907
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We provide improved convergence rates for constrained convex-concave min-max
problems and monotone variational inequalities with higher-order smoothness. In
min-max settings where the $p^{th}$-order derivatives are Lipschitz continuous,
we give an algorithm HigherOrderMirrorProx that achieves an iteration
complexity of $O(1/T^{\frac{p+1}{2}})$ when given access to an oracle for
finding a fixed point of a $p^{th}$-order equation. We give analogous rates for
the weak monotone variational inequality problem. For $p>2$, our results
improve upon the iteration complexity of the first-order Mirror Prox method of
Nemirovski [2004] and the second-order method of Monteiro and Svaiter [2012].
We further instantiate our entire algorithm in the unconstrained $p=2$ case.
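To make the unconstrained $p=2$ case concrete, the sketch below shows a second-order mirror-prox-style iteration in the Euclidean setting: each half step solves the linearized (Newton-type) fixed-point equation for the operator, and the full step is an extragradient update at the half point. This is an illustrative sketch under simplifying assumptions, not the paper's HigherOrderMirrorProx: the operator F and Jacobian JF are user-supplied, the step size gamma is a fixed constant (the paper selects step sizes adaptively), and constraints and Bregman divergences are omitted.

```python
import numpy as np

def second_order_mirror_prox(F, JF, x0, gamma=0.5, iters=200):
    """Illustrative second-order mirror-prox-style loop (Euclidean, unconstrained).

    F     : monotone operator, F(x) -> R^d, e.g. (grad_x f, -grad_y f) for convex-concave f
    JF    : Jacobian of F, JF(x) -> R^{d x d}
    gamma : fixed step size (a simplifying assumption; adaptive choices are used in practice)
    """
    x = np.asarray(x0, dtype=float)
    avg, weight = np.zeros_like(x), 0.0
    d = x.size
    for _ in range(iters):
        # Half step: fixed point of the linearized equation
        #   x_half = x - gamma * (F(x) + JF(x) @ (x_half - x)),
        # which reduces to a single linear solve.
        delta = np.linalg.solve(np.eye(d) + gamma * JF(x), -gamma * F(x))
        x_half = x + delta
        # Extragradient-style correction using the operator at the half point.
        x = x - gamma * F(x_half)
        avg += gamma * x_half
        weight += gamma
    return avg / weight  # averaged iterate, as in mirror-prox-type analyses

# Example: the bilinear saddle point min_x max_y x*y, with operator F(x, y) = (y, -x);
# the averaged iterate approaches the saddle point (0, 0).
if __name__ == "__main__":
    F = lambda z: np.array([z[1], -z[0]])
    JF = lambda z: np.array([[0.0, 1.0], [-1.0, 0.0]])
    print(second_order_mirror_prox(F, JF, np.array([1.0, 1.0])))
```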
Related papers
- First-Order Methods for Linearly Constrained Bilevel Optimization [38.19659447295665]
We present first-order methods for linearly constrained bilevel optimization that avoid Hessian computations.
For linear inequality constraints, we attain $(\delta,\epsilon)$-Goldstein stationarity in $\widetilde{O}(d\delta^{-1}\epsilon^{-3})$ gradient oracle calls.
arXiv Detail & Related papers (2024-06-18T16:41:21Z) - Revisiting Inexact Fixed-Point Iterations for Min-Max Problems: Stochasticity and Structured Nonconvexity [18.427215139020632]
We show that inexact fixed-point iterations can improve several state-of-the-art guarantees, including stochastic settings handled via multilevel Monte Carlo estimation.
We provide an analysis of inexact Halpern iteration in which the error criteria need only hold with respect to a solution.
arXiv Detail & Related papers (2024-02-07T18:22:41Z) - A Newton-CG based barrier-augmented Lagrangian method for general nonconvex conic optimization [53.044526424637866]
In this paper we consider finding an approximate second-order stationary point (SOSP) of general nonconvex conic optimization, which minimizes a twice differentiable function subject to conic constraints.
In particular, we propose a Newton-CG based barrier-augmented Lagrangian method for finding an approximate SOSP.
arXiv Detail & Related papers (2023-01-10T20:43:29Z) - Explicit Second-Order Min-Max Optimization Methods with Optimal Convergence Guarantee [86.05440220344755]
We propose and analyze inexact regularized Newton-type methods for finding a global saddle point of convex-concave unconstrained min-max optimization problems.
We show that the proposed methods generate iterates that remain within a bounded set and that the iterates converge to an $\epsilon$-saddle point within $O(\epsilon^{-2/3})$ iterations in terms of a restricted gap function.
arXiv Detail & Related papers (2022-10-23T21:24:37Z) - Optimal Extragradient-Based Bilinearly-Coupled Saddle-Point Optimization [116.89941263390769]
We consider the smooth convex-concave bilinearly-coupled saddle-point problem, $\min_{\mathbf{x}}\max_{\mathbf{y}}\, F(\mathbf{x}) + H(\mathbf{x},\mathbf{y}) - G(\mathbf{y})$, where one has access to first-order oracles for $F$, $G$ as well as the bilinear coupling function $H$.
We present an accelerated gradient-extragradient (AG-EG) descent-ascent algorithm that combines extragradient steps with Nesterov's accelerated gradient method.
arXiv Detail & Related papers (2022-06-17T06:10:20Z) - Perseus: A Simple and Optimal High-Order Method for Variational
Inequalities [81.32967242727152]
A VI involves finding $x^\star \in \mathcal{X}$ such that $\langle F(x), x - x^\star\rangle \geq 0$ for all $x \in \mathcal{X}$ (the strong and weak solution notions used here are restated in a short display after this list).
We propose a $p^{th}$-order method that does not require any line-search procedure and provably converges to a weak solution at a rate of $O(\epsilon^{-2/(p+1)})$.
arXiv Detail & Related papers (2022-05-06T13:29:14Z) - Generalized Optimistic Methods for Convex-Concave Saddle Point Problems [24.5327016306566]
The optimistic method has seen increasing popularity for solving convex-concave saddle point problems.
We develop a backtracking line search scheme to select the step sizes without knowledge of the smoothness coefficients.
arXiv Detail & Related papers (2022-02-19T20:31:05Z) - A first-order primal-dual method with adaptivity to local smoothness [64.62056765216386]
We consider the problem of finding a saddle point for the convex-concave objective $\min_x \max_y f(x) + \langle Ax, y\rangle - g^*(y)$, where $f$ is a convex function with locally Lipschitz gradient and $g$ is convex and possibly non-smooth.
We propose an adaptive version of the Condat-Vu algorithm, which alternates between primal gradient steps and dual proximal steps.
arXiv Detail & Related papers (2021-10-28T14:19:30Z) - Second-Order Information in Non-Convex Stochastic Optimization: Power
and Limitations [54.42518331209581]
We exhibit an algorithm that finds an $\epsilon$-approximate stationary point (with $\|\nabla F(x)\| \le \epsilon$) using stochastic gradient and Hessian-vector product queries, together with guarantees for finding $(\epsilon,\gamma)$-approximate second-order stationary points.
Our lower bounds here are novel even in the noiseless case.
arXiv Detail & Related papers (2020-06-24T04:41:43Z)
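For reference, the two standard solution notions behind the "weak" guarantees mentioned in the abstract and in the Perseus entry above can be written as follows; the notation here is ours and is not quoted from any single paper in this list.

```latex
\begin{align*}
\text{strong (Stampacchia) solution:}\quad
  & \langle F(x^\star),\, x - x^\star\rangle \ge 0 \quad \forall\, x \in \mathcal{X},\\
\text{weak (Minty) solution:}\quad
  & \langle F(x),\, x - x^\star\rangle \ge 0 \quad \forall\, x \in \mathcal{X}.
\end{align*}
% For a continuous monotone operator F on a convex set X, the two notions coincide.
```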
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides (including all content above) and is not responsible for any consequences of its use.