On the Hypomonotone Class of Variational Inequalities
- URL: http://arxiv.org/abs/2410.09182v1
- Date: Fri, 11 Oct 2024 18:35:48 GMT
- Title: On the Hypomonotone Class of Variational Inequalities
- Authors: Khaled Alomar, Tatjana Chavdarova
- Abstract summary: We study the behavior of the extragradient algorithm when applied to hypomonotone operators.
We provide a characterization theorem that identifies the conditions under which the extragradient algorithm fails to converge.
- Score: 4.204990010424083
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: This paper studies the behavior of the extragradient algorithm when applied to hypomonotone operators, a class of problems that extends beyond the classical monotone setting. While the extragradient method is widely known for its efficacy in solving variational inequalities with monotone and Lipschitz continuous operators, we demonstrate that its convergence is not guaranteed in the hypomonotone setting. We provide a characterization theorem that identifies the conditions under which the extragradient algorithm fails to converge. Our results highlight the necessity of stronger assumptions to guarantee convergence of extragradient and to further develop the existing VI methods for broader problems.
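For intuition, the following is a minimal sketch (illustrative, not code from the paper) of the extragradient update applied to the one-dimensional operator $F(x) = -\rho x$, which is $\rho$-hypomonotone (it satisfies $\langle F(x) - F(y), x - y \rangle \ge -\rho \|x - y\|^2$) and has the unique solution $x^* = 0$:

```python
# Minimal sketch (not from the paper): extragradient on the 1-D
# rho-hypomonotone operator F(x) = -rho * x, whose unique zero is x* = 0.

def extragradient(F, x0, step, n_iters):
    """Plain extragradient: an extrapolation step, then an update step."""
    x = x0
    for _ in range(n_iters):
        x_half = x - step * F(x)    # extrapolation step
        x = x - step * F(x_half)    # update step evaluated at the extrapolated point
    return x

rho, gamma = 0.5, 0.1               # illustrative constants, not from the paper
F = lambda x: -rho * x              # rho-hypomonotone: <F(x)-F(y), x-y> = -rho*(x-y)^2

for k in (10, 50, 100):
    print(k, extragradient(F, x0=1.0, step=gamma, n_iters=k))
# Each iteration multiplies x by 1 + gamma*rho + (gamma*rho)**2 > 1,
# so |x_k| grows geometrically away from x* = 0: no convergence.
```

Every iteration multiplies the iterate by $1 + \gamma\rho + (\gamma\rho)^2 > 1$, so the iterates diverge for any positive step size, illustrating the kind of failure the paper's characterization theorem formalizes.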
Related papers
- An Augmented Lagrangian Approach to Conically Constrained Non-monotone Variational Inequality Problems [8.609626012634559]
We introduce an augmented Lagrangian primal-dual method, to be called ALAVI, for solving a general constrained VI model.
We show that, under a metric subregularity condition, the local convergence rate of ALAVI improves to linear even if the VI model is non-monotone.
arXiv Detail & Related papers (2023-06-02T00:33:15Z)
- First Order Methods with Markovian Noise: from Acceleration to Variational Inequalities [91.46841922915418]
We present a unified approach for the theoretical analysis of first-order gradient methods for stochastic optimization and variational inequalities.
Our approach covers both non-convex and strongly convex minimization problems.
We provide bounds that match the oracle ones in the case of strongly convex optimization problems.
arXiv Detail & Related papers (2023-05-25T11:11:31Z)
- Single-Call Stochastic Extragradient Methods for Structured Non-monotone Variational Inequalities: Improved Analysis under Weaker Conditions [21.681130192954583]
Single-call extragradient methods are one of the most efficient algorithms for solving large-scale min-max optimization problems.
We provide convergence guarantees for two classes of VIPs: (i) quasi-strongly monotone problems (a generalization of strongly monotone problems) and (ii) weak Minty variational inequalities (a generalization of monotone and Minty VIPs).
Our convergence analysis holds under the arbitrary sampling paradigm, which includes importance sampling and various mini-batching strategies as special cases.
arXiv Detail & Related papers (2023-02-27T18:53:28Z)
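As a rough sketch of the single-call idea in the entry above (a deterministic past-extragradient/Popov update that reuses the previous operator evaluation; the bilinear example and step size are illustrative assumptions, not the paper's setting):

```python
import numpy as np

def past_extragradient(F, x0, step, n_iters):
    """Single-call (Popov) extragradient: one fresh operator call per iteration,
    because the extrapolation step reuses the previous evaluation."""
    x = np.asarray(x0, dtype=float)
    F_prev = F(x)                    # one-time initial evaluation
    for _ in range(n_iters):
        x_half = x - step * F_prev   # extrapolate with the *stored* value
        F_prev = F(x_half)           # the single fresh call of this iteration
        x = x - step * F_prev
    return x

# Illustrative monotone bilinear game F(x, y) = (y, -x); solution (0, 0).
F = lambda z: np.array([z[1], -z[0]])
print(past_extragradient(F, x0=[1.0, 1.0], step=0.1, n_iters=300))
```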
- High-Probability Bounds for Stochastic Optimization and Variational Inequalities: the Case of Unbounded Variance [59.211456992422136]
We propose algorithms with high-probability convergence results under less restrictive assumptions.
These results justify the usage of the considered methods for solving problems that do not fit standard functional classes in optimization.
arXiv Detail & Related papers (2023-02-02T10:37:23Z)
- Distributed Stochastic Optimization under a General Variance Condition [13.911633636387059]
Distributed optimization has drawn great attention recently due to its effectiveness in solving large-scale machine learning problems.
We revisit the classical Federated Averaging (FedAvg) algorithm and establish convergence results under only a mild variance condition on the stochastic gradients for smooth nonconvex objective functions.
Almost sure convergence to a stationary point is also established under this condition.
arXiv Detail & Related papers (2023-01-30T05:48:09Z)
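Below is a minimal sketch of the Federated Averaging loop referenced above; the local SGD solver, toy quadratic clients, and all constants are illustrative assumptions rather than the paper's setup:

```python
import numpy as np

def fedavg(client_oracles, w0, local_steps, rounds, lr):
    """Minimal FedAvg: each round, every client runs local SGD steps
    from the shared model, then the server averages the local models."""
    w = np.asarray(w0, dtype=float)
    for _ in range(rounds):
        local_models = []
        for grad in client_oracles:          # grad: (stochastic) gradient oracle
            w_local = w.copy()
            for _ in range(local_steps):
                w_local -= lr * grad(w_local)
            local_models.append(w_local)
        w = np.mean(local_models, axis=0)    # server aggregation step
    return w

# Toy quadratic clients f_i(w) = 0.5 * ||w - c_i||^2 (illustrative only).
centers = [np.array([1.0]), np.array([-3.0])]
oracles = [lambda w, c=c: (w - c) for c in centers]
print(fedavg(oracles, w0=[0.0], local_steps=5, rounds=50, lr=0.1))
# converges to the average of the client optima for this toy problem
```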
- A Primal-Dual Approach to Solving Variational Inequalities with General Constraints [54.62996442406718]
Yang et al. (2023) recently showed how to use first-order gradient methods to solve general variational inequalities.
We prove the convergence of this method and show that the gap function of its last iterate decreases at a rate of $O(1/\sqrt{K})$ when the operator is $L$-Lipschitz and monotone.
arXiv Detail & Related papers (2022-10-27T17:59:09Z)
- On the Importance of Gradient Norm in PAC-Bayesian Bounds [92.82627080794491]
We propose a new generalization bound that exploits the contractivity of the log-Sobolev inequality.
We empirically analyze the effect of this new loss-gradient norm term on different neural architectures.
arXiv Detail & Related papers (2022-10-12T12:49:20Z)
- Clipped Stochastic Methods for Variational Inequalities with Heavy-Tailed Noise [64.85879194013407]
We prove the first high-probability results with logarithmic dependence on the confidence level for methods solving monotone and structured non-monotone VIPs.
Our results match the best-known ones in the light-tails case and are novel for structured non-monotone problems.
In addition, we numerically validate that the gradient noise of many practical formulations is heavy-tailed and show that clipping improves the performance of SEG/SGDA.
arXiv Detail & Related papers (2022-06-02T15:21:55Z)
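A hedged sketch of norm clipping combined with a stochastic extragradient (SEG) step, in the spirit of the entry above; the Cauchy noise model and all constants are illustrative assumptions, not the paper's:

```python
import numpy as np

def clip(g, lam):
    """Norm clipping: rescale g so that ||clip(g)|| <= lam."""
    norm = np.linalg.norm(g)
    return g if norm <= lam else (lam / norm) * g

def clipped_seg(sample_F, x0, step, lam, n_iters, rng):
    """Stochastic extragradient where both operator estimates are clipped,
    which controls heavy-tailed gradient noise."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        g1 = clip(sample_F(x, rng), lam)
        x_half = x - step * g1           # clipped extrapolation step
        g2 = clip(sample_F(x_half, rng), lam)
        x = x - step * g2                # clipped update step
    return x

# Illustrative: monotone bilinear operator with heavy-tailed (Cauchy) noise.
def sample_F(z, rng):
    return np.array([z[1], -z[0]]) + 0.01 * rng.standard_cauchy(2)

rng = np.random.default_rng(0)
print(clipped_seg(sample_F, x0=[1.0, 1.0], step=0.1, lam=1.0, n_iters=500, rng=rng))
```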
- Tight Last-Iterate Convergence of the Extragradient Method for Constrained Monotone Variational Inequalities [4.6193503399184275]
We show the last-iterate convergence rate of the extragradient method for monotone and Lipschitz variational inequalities with constraints.
We develop a new approach that combines the power of the sum-of-squares programming with the low dimensionality of the update rule of the extragradient method.
arXiv Detail & Related papers (2022-04-20T05:12:11Z)
- Stochastic Gradient Descent-Ascent and Consensus Optimization for Smooth Games: Convergence Analysis under Expected Co-coercivity [49.66890309455787]
We introduce the expected co-coercivity condition, explain its benefits, and provide the first last-iterate convergence guarantees of SGDA and SCO.
We prove linear convergence of both methods to a neighborhood of the solution when they use a constant step size.
Our convergence guarantees hold under the arbitrary sampling paradigm, and we give insights into the complexity of minibatching.
arXiv Detail & Related papers (2021-06-30T18:32:46Z)
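Finally, a minimal sketch of the simultaneous gradient descent-ascent update analyzed in the entry above, with deterministic gradients of a toy saddle objective standing in for the stochastic oracles:

```python
import numpy as np

def sgda(grad_x, grad_y, x0, y0, step, n_iters):
    """Simultaneous gradient descent-ascent for min_x max_y f(x, y):
    descend in x and ascend in y using gradients at the *same* point."""
    x, y = np.asarray(x0, dtype=float), np.asarray(y0, dtype=float)
    for _ in range(n_iters):
        gx, gy = grad_x(x, y), grad_y(x, y)
        x = x - step * gx                # descent step for the min player
        y = y + step * gy                # ascent step for the max player
    return x, y

# Toy objective f(x, y) = 0.5*x^2 + x*y - 0.5*y^2 with saddle point at the origin.
grad_x = lambda x, y: x + y
grad_y = lambda x, y: x - y
print(sgda(grad_x, grad_y, x0=[1.0], y0=[1.0], step=0.1, n_iters=200))
```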