An Augmented Lagrangian Approach to Conically Constrained Non-monotone Variational Inequality Problems
- URL: http://arxiv.org/abs/2306.01214v1
- Date: Fri, 2 Jun 2023 00:33:15 GMT
- Title: An Augmented Lagrangian Approach to Conically Constrained Non-monotone Variational Inequality Problems
- Authors: Lei Zhao, Daoli Zhu, Shuzhong Zhang
- Abstract summary: We introduce an augmented Lagrangian primal-dual method, to be called ALAVI, for solving a general constrained VI model.
We show that under a metric subregularity condition, even if the VI model may be non-monotone, the local convergence rate of ALAVI improves to linear.
- Score: 8.609626012634559
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we consider a non-monotone (mixed) variational inequality model
with (nonlinear) convex conic constraints. Through developing an equivalent
Lagrangian function-like primal-dual saddle-point system for the VI model in
question, we introduce an augmented Lagrangian primal-dual method, to be called
ALAVI in the current paper, for solving a general constrained VI model. Under
an assumption, to be called the primal-dual variational coherence condition in
the paper, we prove the convergence of ALAVI. Next, we show that many existing
generalized monotonicity properties are sufficient -- though by no means
necessary -- to imply the above-mentioned coherence condition, and thus are
sufficient to ensure convergence of ALAVI. Under that assumption, we further
show that ALAVI has in fact an $o(1/\sqrt{k})$ global rate of convergence where
$k$ is the iteration count. By introducing a new gap function, this rate
further improves to $O(1/k)$ if the mapping is monotone. Finally, we show
that under a metric subregularity condition, even if the VI model may be
non-monotone, the local convergence rate of ALAVI improves to linear.
Numerical experiments on randomly generated, highly nonlinear and
non-monotone VI problems show the practical efficacy of the newly proposed
method.
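The abstract does not spell out ALAVI's update formulas, so the following is only a minimal augmented-Lagrangian primal-dual sketch in the same spirit, on an assumed toy VI with linear inequality constraints. All data, step sizes, and the penalty parameter are illustrative choices, not the authors' scheme:

```python
import numpy as np

# Toy constrained VI: find x* with F(x*)^T (x - x*) >= 0 for all x with G x <= h,
# where F(x) = A x + b and A is non-symmetric, so F is not a gradient field.
rng = np.random.default_rng(0)
n, m = 5, 3
A = 0.5 * rng.standard_normal((n, n)) + 2.0 * np.eye(n)  # strongly monotone operator
b = rng.standard_normal(n)
G = rng.standard_normal((m, n)) / np.sqrt(n)
h = np.ones(m)

tau, sigma, rho = 0.05, 0.05, 1.0   # primal/dual step sizes and penalty (assumed values)
x, lam = np.zeros(n), np.zeros(m)   # primal iterate and multipliers for G x <= h

for _ in range(5000):
    # Primal step: forward step on F plus the gradient of the augmented penalty
    # (rho/2) * ||max(0, G x - h + lam/rho)||^2, which enforces G x <= h.
    penalty_grad = rho * G.T @ np.maximum(G @ x - h + lam / rho, 0.0)
    x = x - tau * (A @ x + b + penalty_grad)
    # Dual step: ascent on the multipliers, projected onto the dual cone (here R^m_+).
    lam = np.maximum(lam + sigma * (G @ x - h), 0.0)

print("constraint violation:", float(np.maximum(G @ x - h, 0.0).max()))
print("KKT residual:", float(np.linalg.norm(A @ x + b + G.T @ lam)))
```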
Related papers
- On the Hypomonotone Class of Variational Inequalities [4.204990010424083]
We study the behavior of the extragradient algorithm when applied to hypomonotone operators.
We provide a characterization theorem that identifies the conditions under which the extragradient algorithm fails to converge.
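For reference, a bare-bones extragradient sketch on an assumed toy operator (not the paper's construction). On monotone operators like the rotation below it converges; for sufficiently hypomonotone operators the paper characterizes when iterations of this form fail:

```python
import numpy as np

def extragradient(F, x0, step, iters=1000):
    """Extrapolate with F, then update using F evaluated at the lookahead point."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x_half = x - step * F(x)   # lookahead (extrapolation) step
        x = x - step * F(x_half)   # actual update uses the lookahead operator value
    return x

# Monotone (skew) rotation field: F(x, y) = (y, -x), unique zero at the origin.
F_rot = lambda z: np.array([z[1], -z[0]])
print(extragradient(F_rot, [1.0, 1.0], step=0.1))  # approaches [0, 0]
```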
arXiv Detail & Related papers (2024-10-11T18:35:48Z)
- A Unified Analysis for the Subgradient Methods Minimizing Composite Nonconvex, Nonsmooth and Non-Lipschitz Functions [8.960341489080609]
We present a novel convergence analysis in the context of non-Lipschitz and nonsmooth optimization problems.
Under any of the subgradient upper bounding conditions to be introduced in the paper, we show that an $O(1/\sqrt{T})$ rate holds in terms of the squared gradient of the envelope function, which further improves to $O(1/T)$ if, in addition, the uniform KL condition with exponent $1/2$ holds.
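As a bare-bones illustration of the $O(1/\sqrt{T})$ step-size regime, here is the plain subgradient method on a simple convex nonsmooth function; the paper's composite nonconvex, non-Lipschitz setting and its envelope-function analysis are far more general:

```python
import numpy as np

def subgradient_method(subgrad, x0, T):
    """Subgradient method with the classic 1/sqrt(t) step size."""
    x = np.asarray(x0, dtype=float)
    for t in range(1, T + 1):
        x = x - subgrad(x) / np.sqrt(t)
    return x

# f(x) = ||x||_1 is convex and nonsmooth with minimizer 0; sign(x) is a subgradient.
x_T = subgradient_method(np.sign, 3.0 * np.ones(4), T=100_000)
print(np.abs(x_T).max())  # oscillates within roughly one final step size of 0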
arXiv Detail & Related papers (2023-08-30T23:34:11Z)
- High-Probability Bounds for Stochastic Optimization and Variational Inequalities: the Case of Unbounded Variance [59.211456992422136]
We propose algorithms with high-probability convergence results under less restrictive assumptions.
These results justify the usage of the considered methods for solving problems that do not fit standard functional classes in optimization.
arXiv Detail & Related papers (2023-02-02T10:37:23Z)
- A Primal-Dual Approach to Solving Variational Inequalities with General Constraints [54.62996442406718]
Yang et al. (2023) recently showed how to use first-order gradient methods to solve general variational inequalities.
We prove the convergence of this method and show that the gap function of the last iterate of the method decreases at a rate of $O(1/\sqrt{K})$ when the operator is $L$-Lipschitz and monotone.
arXiv Detail & Related papers (2022-10-27T17:59:09Z)
- Solving Constrained Variational Inequalities via an Interior Point Method [88.39091990656107]
We develop an interior-point approach to solve constrained variational inequality (cVI) problems.
We provide convergence guarantees for ACVI in two general classes of problems.
Unlike previous work in this setting, ACVI provides a means to solve cVIs when the constraints are nontrivial.
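ACVI's actual updates are not given in this summary; purely to illustrate the interior-point idea for constrained problems, here is a generic log-barrier loop on an assumed one-dimensional toy (not the ACVI algorithm):

```python
# Generic barrier idea (NOT ACVI itself): fold the constraint x >= 0 into the
# operator via the log-barrier gradient -mu/x and drive mu -> 0 along the path.
mu, x = 1.0, 1.0
for _ in range(40):
    for _ in range(200):                  # inner loop: solve the perturbed problem
        x -= 0.05 * (x - 2.0 - mu / x)    # toy operator F(x) = x - 2 plus barrier term
    mu *= 0.7                             # shrink the barrier parameter
print(x)  # tends to 2.0, the solution of F(x) = 0 subject to x >= 0
```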
arXiv Detail & Related papers (2022-06-21T17:55:13Z)
- Clipped Stochastic Methods for Variational Inequalities with Heavy-Tailed Noise [64.85879194013407]
We prove the first high-probability results with logarithmic dependence on the confidence level for methods solving monotone and structured non-monotone VIPs.
Our results match the best-known ones in the light-tails case and are novel for structured non-monotone problems.
In addition, we numerically validate that the gradient noise of many practical formulations is heavy-tailed and show that clipping improves the performance of SEG/SGDA.
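Clipping here means capping the magnitude of the stochastic gradient estimate before the step is taken. A minimal clipped SGDA sketch on an assumed strongly-convex-strongly-concave toy with heavy-tailed (Cauchy) noise:

```python
import numpy as np

def clip(g, lam=1.0):
    """Cap a scalar gradient estimate to the interval [-lam, lam]."""
    return max(-lam, min(lam, g))

rng = np.random.default_rng(1)
x, y = 2.0, -2.0   # saddle problem: min_x max_y 0.5*x**2 + x*y - 0.5*y**2
step = 0.05        # constant step size (assumed value)
for _ in range(20_000):
    gx = x + y + rng.standard_cauchy()   # heavy-tailed gradient noise
    gy = x - y + rng.standard_cauchy()
    x -= step * clip(gx)                 # clipped descent step on x
    y += step * clip(gy)                 # clipped ascent step on y
print(round(x, 2), round(y, 2))          # stays in a noise ball around the saddle (0, 0)
```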
arXiv Detail & Related papers (2022-06-02T15:21:55Z)
- Tight Last-Iterate Convergence of the Extragradient Method for Constrained Monotone Variational Inequalities [4.6193503399184275]
We show the last-iterate convergence rate of the extragradient method for monotone and Lipschitz variational inequalities with constraints.
We develop a new approach that combines the power of the sum-of-squares programming with the low dimensionality of the update rule of the extragradient method.
arXiv Detail & Related papers (2022-04-20T05:12:11Z)
- Stochastic Gradient Descent-Ascent and Consensus Optimization for Smooth Games: Convergence Analysis under Expected Co-coercivity [49.66890309455787]
We introduce the expected co-coercivity condition, explain its benefits, and provide the first last-iterate convergence guarantees of SGDA and SCO.
We prove linear convergence of both methods to a neighborhood of the solution when they use constant step-size.
Our convergence guarantees hold under the arbitrary sampling paradigm, and we give insights into the complexity of minibatching.
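A quick numerical sanity check of the constant step-size behavior on an assumed toy game (not the paper's exact setting): the last iterate contracts linearly, then stalls in a noise-dominated neighborhood of the solution.

```python
import numpy as np

rng = np.random.default_rng(2)
x, y = 5.0, -5.0   # min_x max_y 0.5*x**2 + x*y - 0.5*y**2, solution at (0, 0)
step = 0.1         # constant step size
for k in range(1, 201):
    x -= step * (x + y + 0.1 * rng.standard_normal())  # unbiased stochastic gradients
    y += step * (x - y + 0.1 * rng.standard_normal())
    if k % 50 == 0:
        print(k, round(float(np.hypot(x, y)), 3))  # fast drop, then a noise floor
```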
arXiv Detail & Related papers (2021-06-30T18:32:46Z)
- Variance-Reduced Splitting Schemes for Monotone Stochastic Generalized Equations [0.0]
We consider monotone inclusion problems where the operators may be expectation-valued.
A direct application of splitting schemes is complicated by the need to resolve problems with expectation-valued maps at each step.
We propose an avenue for addressing uncertainty in the mapping: Variance-reduced modified forward-backward splitting scheme.
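A minimal sketch of the forward-backward idea with a growing mini-batch as the variance-reduction device (assumed toy data; the paper's scheme and its assumptions are more refined):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
x = np.full(n, 3.0)
step = 0.1
for k in range(1, 501):
    batch = k                                       # growing batch size cuts variance
    noise = rng.standard_normal((batch, n)).mean(axis=0)
    forward = 2.0 * x + noise                       # sample average of F(x) = E[2x + noise]
    x = np.clip(x - step * forward, -1.0, 1.0)      # backward step: resolvent of the
                                                    # normal cone of the box [-1, 1]^n
print(x)  # approaches 0, the solution of the monotone inclusion
```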
arXiv Detail & Related papers (2020-08-26T02:33:27Z)
- Linear Last-iterate Convergence in Constrained Saddle-point Optimization [48.44657553192801]
We significantly expand the understanding of last-iterate convergence for Optimistic Gradient Descent Ascent (OGDA) and Optimistic Multiplicative Weights Update (OMWU).
We show that when the equilibrium is unique, linear last-iterate convergence is achieved with a learning rate set to a universal constant.
We show that bilinear games over any polytope satisfy this condition and that OGDA converges exponentially fast even without the unique equilibrium assumption.
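A minimal projected OGDA sketch on an assumed bilinear toy game over the box $[-1, 1]^2$ (a polytope): plain GDA spirals outward here, while the optimistic correction term yields fast convergence.

```python
import numpy as np

def F(z):
    """Operator of the bilinear game min_x max_y x*y: F(x, y) = (y, -x)."""
    return np.array([z[1], -z[0]])

eta = 0.1
z = np.array([1.0, 1.0])
g_prev = F(z)
for _ in range(500):
    g = F(z)
    # OGDA: step along 2*F(z_k) - F(z_{k-1}), then project back onto [-1, 1]^2.
    z = np.clip(z - eta * (2.0 * g - g_prev), -1.0, 1.0)
    g_prev = g
print(z)  # converges to the unique equilibrium (0, 0)
```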
arXiv Detail & Related papers (2020-06-16T20:53:04Z)