A Primal-dual Approach for Solving Variational Inequalities with
General-form Constraints
- URL: http://arxiv.org/abs/2210.15659v3
- Date: Wed, 29 Mar 2023 08:54:29 GMT
- Title: A Primal-dual Approach for Solving Variational Inequalities with
General-form Constraints
- Authors: Tatjana Chavdarova, Matteo Pagliardini, Tong Yang, Michael I. Jordan
- Abstract summary: Yang et al. (2023) recently addressed the open problem of solving Variational Inequalities (VIs) with equality and inequality constraints through a first-order method.
In this paper, we adopt a warm-starting technique where we solve the subproblems approximately at each iteration.
We prove its convergence and show that the gap function of the last iterate of this inexact-ACVI method decreases at a rate of $\mathcal{O}(\frac{1}{\sqrt{K}})$ when the operator is $L$-Lipschitz and monotone.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Yang et al. (2023) recently addressed the open problem of solving Variational
Inequalities (VIs) with equality and inequality constraints through a
first-order gradient method. However, the proposed primal-dual method called
ACVI is applicable when we can compute analytic solutions of its subproblems;
thus, the general case remains an open problem. In this paper, we adopt a
warm-starting technique where we solve the subproblems approximately at each
iteration and initialize the variables with the approximate solution found at
the previous iteration. We prove its convergence and show that the gap function
of the last iterate of this inexact-ACVI method decreases at a rate of
$\mathcal{O}(\frac{1}{\sqrt{K}})$ when the operator is $L$-Lipschitz and
monotone, provided that the errors decrease at appropriate rates.
Interestingly, we show that often in numerical experiments, this technique
converges faster than its exact counterpart. Furthermore, for the cases when
the inequality constraints are simple, we propose a variant of ACVI named
P-ACVI and prove its convergence for the same setting. We further demonstrate
the efficacy of the proposed methods through numerous experiments. We also
relax the assumptions in Yang et al., yielding, to our knowledge, the first
convergence result that does not rely on the assumption that the operator is
$L$-Lipschitz. Our source code is provided at
$\texttt{https://github.com/mpagli/Revisiting-ACVI}$.
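The warm-starting idea in the abstract lends itself to a short illustration. The following is a minimal sketch under stated assumptions, not the authors' ACVI implementation: `inexact_solve`, the tolerance schedule `1/k**2`, and the toy proximal-point subproblem are all hypothetical placeholders standing in for ACVI's actual subproblems and error conditions. What it shows is the mechanism described in the abstract: each outer iteration solves its subproblem only approximately, and the inner solver is initialized at the previous approximate solution.

```python
import numpy as np

def inexact_solve(grad, x0, tol, step=1e-2, max_inner=1000):
    """Approximate subproblem solver: run gradient steps until the
    gradient norm falls below `tol` (or the inner budget runs out)."""
    x = x0.copy()
    for _ in range(max_inner):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        x = x - step * g
    return x

def warm_started_outer_loop(make_subproblem_grad, x_init, K=100):
    """Outer loop: subproblem tolerances shrink with k, and each inner
    solve starts from the previous approximate solution (the warm start)."""
    x = x_init
    for k in range(1, K + 1):
        grad_k = make_subproblem_grad(x, k)  # subproblem at outer step k
        tol_k = 1.0 / k**2                   # hypothetical error schedule
        x = inexact_solve(grad_k, x0=x, tol=tol_k)  # warm start at x
    return x

# Toy usage: a proximal-point loop on f(x) = 0.5 * ||x - b||^2, where
# subproblem k minimizes f(x) + 0.5 * ||x - x_prev||^2.
b = np.array([1.0, -2.0])
make_grad = lambda x_prev, k: (lambda x: (x - b) + (x - x_prev))
print(warm_started_outer_loop(make_grad, x_init=np.zeros(2)))  # ~= b
```

Because the warm start keeps the inner solver's initial error small once the outer iterates stabilize, later subproblems need very few inner steps, which is consistent with the abstract's observation that the inexact method often runs faster than its exact counterpart.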
Related papers
- Invex Programs: First Order Algorithms and Their Convergence [66.40124280146863]
Invex programs are a special kind of nonconvex problems which attain global minima at every stationary point.
We propose new first-order algorithms to solve general invex programs and establish their convergence rates.
Our proposed algorithm is the first to solve constrained invex programs.
arXiv Detail & Related papers (2023-07-10T10:11:01Z) - First-order methods for Stochastic Variational Inequality problems with Function Constraints [3.922842284927803]
This paper presents novel first-order methods for the function-constrained Variational Inequality (FCVI) problem in smooth or nonsmooth settings.
All our algorithms easily extend to saddle point problems with function constraints that couple the primal and dual variables while maintaining the same complexity results.
arXiv Detail & Related papers (2023-04-10T17:07:55Z) - Escaping limit cycles: Global convergence for constrained
nonconvex-nonconcave minimax problems [46.71914490521821]
This paper introduces a new extragradient-type algorithm for a class of constrained nonconvex-nonconcave minimax problems.
The proposed algorithm is applicable to constrained and regularized problems; a generic extragradient sketch appears after this list.
arXiv Detail & Related papers (2023-02-20T08:37:08Z) - Minimax Instrumental Variable Regression and $L_2$ Convergence
Guarantees without Identification or Closedness [71.42652863687117]
We study nonparametric estimation of instrumental variable (IV) regressions.
We propose a new penalized minimax estimator that can converge to a fixed IV solution.
We derive a strong $L_2$ error rate for our estimator under lax conditions.
arXiv Detail & Related papers (2023-02-10T18:08:49Z) - Explicit Second-Order Min-Max Optimization Methods with Optimal Convergence Guarantee [86.05440220344755]
We propose and analyze inexact regularized Newton-type methods for finding a global saddle point of convex-concave unconstrained min-max optimization problems.
We show that the proposed methods generate iterates that remain within a bounded set and converge to an $\epsilon$-saddle point within $O(\epsilon^{-2/3})$ iterations in terms of a restricted gap function.
arXiv Detail & Related papers (2022-10-23T21:24:37Z) - Solving Constrained Variational Inequalities via an Interior Point
Method [88.39091990656107]
We develop an interior-point approach to solve constrained variational inequality (cVI) problems.
We provide convergence guarantees for ACVI in two general classes of problems.
Unlike previous work in this setting, ACVI provides a means to solve cVIs when the constraints are nontrivial.
arXiv Detail & Related papers (2022-06-21T17:55:13Z) - Gradient-Free Methods for Saddle-Point Problem [125.99533416395765]
We generalize the approach of Gasnikov et al. (2017), which allows solving (stochastic) convex optimization problems with an inexact gradient-free oracle.
Our approach reduces the required number of oracle calls by a factor of $\frac{n}{\log n}$.
In the second part of the paper, we analyze the case when such an assumption cannot be made and propose a general approach for modifying the method to solve this problem.
arXiv Detail & Related papers (2020-05-12T16:44:27Z) - Inexact and Stochastic Generalized Conditional Gradient with Augmented
Lagrangian and Proximal Step [2.0196229393131726]
We analyze inexact and stochastic versions of the CGALP algorithm developed in the authors' previous paper.
This allows one to compute gradients, proximal terms, and/or linear minimization oracles in an inexact fashion.
We show convergence of the Lagrangian to an optimum and feasibility of the affine constraint.
arXiv Detail & Related papers (2020-05-11T14:52:16Z) - A New Randomized Primal-Dual Algorithm for Convex Optimization with
Optimal Last Iterate Rates [16.54912614895861]
We develop a novel unified randomized block-coordinate primal-dual algorithm to solve a class of nonsmooth constrained convex optimization problems.
We prove that our algorithm achieves optimal convergence rates in two cases: general convexity and strong convexity.
Numerical experiments show that the proposed method achieves encouraging performance.
arXiv Detail & Related papers (2020-03-03T03:59:26Z)
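As referenced in the extragradient-type entry above, the following is a generic sketch of the classical projected extragradient update, the template such algorithms build on, not the specific variant from that paper; the bilinear toy problem, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

def extragradient(F, project, z0, step=0.1, iters=1000):
    """Classical projected extragradient: a look-ahead (extrapolation)
    step, then an update using the operator at the look-ahead point."""
    z = z0.copy()
    for _ in range(iters):
        z_half = project(z - step * F(z))   # extrapolation step
        z = project(z - step * F(z_half))   # update with look-ahead gradient
    return z

# Toy usage: the bilinear saddle point min_x max_y x*y on the box [-1, 1]^2,
# whose operator is F(x, y) = (df/dx, -df/dy) = (y, -x).
F = lambda z: np.array([z[1], -z[0]])
project = lambda z: np.clip(z, -1.0, 1.0)
print(extragradient(F, project, np.array([0.8, -0.5])))  # ~= (0, 0)
```

The look-ahead step is what lets extragradient converge on problems like the bilinear example, where plain projected gradient descent-ascent cycles or diverges.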