Solving Constrained Variational Inequalities via an Interior Point
Method
- URL: http://arxiv.org/abs/2206.10575v1
- Date: Tue, 21 Jun 2022 17:55:13 GMT
- Title: Solving Constrained Variational Inequalities via an Interior Point
Method
- Authors: Tong Yang, Michael I. Jordan, Tatjana Chavdarova
- Abstract summary: We develop an interior-point approach to solve constrained variational inequality (cVI) problems.
We provide convergence guarantees for ACVI in two general classes of problems.
Unlike previous work in this setting, ACVI provides a means to solve cVIs when the constraints are nontrivial.
- Score: 88.39091990656107
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop an interior-point approach to solve constrained variational
inequality (cVI) problems. Inspired by the efficacy of the alternating
direction method of multipliers (ADMM) method in the single-objective context,
we generalize ADMM to derive a first-order method for cVIs, which we refer to as
ADMM-based interior point method for constrained VIs (ACVI). We provide
convergence guarantees for ACVI in two general classes of problems: (i) when
the operator is $\xi$-monotone, and (ii) when it is monotone, the constraints
are active and the game is not purely rotational. When, in the latter case, the
operator is additionally L-Lipschitz, we match known lower bounds on rates
for the gap function of $\mathcal{O}(1/\sqrt{K})$ and $\mathcal{O}(1/K)$ for
the last and average iterate, respectively. To the best of our knowledge, this
is the first presentation of a first-order interior-point method for the
general cVI problem that has a global convergence guarantee. Moreover, unlike
previous work in this setting, ACVI provides a means to solve cVIs when the
constraints are nontrivial. Empirical analyses demonstrate clear advantages of
ACVI over common first-order methods. In particular, (i) cyclical behavior is
notably reduced as our methods approach the solution from the analytic center,
and (ii) unlike projection-based methods that oscillate when near a constraint,
ACVI efficiently handles the constraints.
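The abstract describes ACVI only at a high level, so the following is a minimal sketch of an ADMM-style interior-point loop of the kind it alludes to, applied to a toy constrained VI: find $x^* \ge 0$ such that $\langle F(x^*), x - x^* \rangle \ge 0$ for all $x \ge 0$, with a monotone affine operator. The operator F, the penalty beta, the barrier schedule mu, and the iteration counts are illustrative assumptions; this is not the authors' reference implementation of ACVI.
```python
import numpy as np

# Toy constrained VI over the nonnegative orthant with a monotone operator
# F(x) = A x + b: a skew-symmetric (rotational) part plus a small definite part.
rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n))
A = (M - M.T) + 0.5 * np.eye(n)     # monotone (in fact strongly monotone)
b = rng.standard_normal(n)

beta = 1.0                          # ADMM penalty parameter (assumed)
mu = 1.0                            # log-barrier weight, shrunk each outer step
x, y, lam = np.ones(n), np.ones(n), np.zeros(n)

for outer in range(40):             # interior-point loop: decay the barrier
    for inner in range(20):         # ADMM-style alternating updates on x = y
        # x-update: solve F(x) + beta*(x - y) + lam = 0; linear in x here,
        # i.e. (A + beta*I) x = beta*y - lam - b.
        x = np.linalg.solve(A + beta * np.eye(n), beta * y - lam - b)
        # y-update: argmin_y  -mu*sum(log y) + (beta/2)*||y - (x + lam/beta)||^2,
        # which decouples per coordinate and has a closed-form positive root.
        c = x + lam / beta
        y = 0.5 * (c + np.sqrt(c ** 2 + 4.0 * mu / beta))
        # dual (multiplier) update
        lam = lam + beta * (x - y)
    mu *= 0.5

print("approximate solution (strictly feasible):", np.round(y, 4))
```
The log-barrier keeps the y-iterate strictly feasible, so the iterates approach the solution from the interior (through the analytic center), which is the behavior credited above for reducing cycling; the full ACVI algorithm additionally handles general inequality and affine equality constraints.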
Related papers
- Online Learning with Adversaries: A Differential-Inclusion Analysis [52.43460995467893]
We introduce an observation-matrix-based framework for fully asynchronous online Federated Learning with adversaries.
Our main result is that the proposed algorithm almost surely converges to the desired mean $\mu$.
We derive this convergence using a novel differential-inclusion-based two-timescale analysis.
arXiv Detail & Related papers (2023-04-04T04:32:29Z)
- A Primal-Dual Approach to Solving Variational Inequalities with General Constraints [54.62996442406718]
Yang et al. (2023) recently showed how to use first-order gradient methods to solve general variational inequalities.
We prove the convergence of this method and show that the gap function of the last iterate of the method decreases at a rate of $O(1/\sqrt{K})$ when the operator is $L$-Lipschitz and monotone.
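For reference, rate statements of this kind (including those in the main abstract) are typically made for a restricted gap function; one common definition, for an operator $F$ and constraint set $\mathcal{X}$, is the following, though the exact variant used in a given paper may differ:
```latex
\mathcal{G}_R(x) \;=\; \max_{\, y \in \mathcal{X},\ \|y - x_0\| \le R} \ \langle F(y),\, x - y \rangle ,
```
which is nonnegative on the feasible set and equals zero at a solution of the monotone VI whenever the radius $R$ is large enough for the ball around the reference point $x_0$ to contain that solution.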
arXiv Detail & Related papers (2022-10-27T17:59:09Z)
- Asymptotically Unbiased Instance-wise Regularized Partial AUC Optimization: Theory and Algorithm [101.44676036551537]
One-way Partial AUC (OPAUC) and Two-way Partial AUC (TPAUC) measure the average performance of a binary classifier.
Most of the existing methods could only optimize PAUC approximately, leading to inevitable biases that are not controllable.
We present a simpler reformulation of the PAUC problem via distributionally robust optimization (DRO).
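For context, one standard way to write the two measures named above (up to normalization constants, which vary across papers) is in terms of the ROC curve $u \mapsto \mathrm{TPR}(\mathrm{FPR}^{-1}(u))$:
```latex
\mathrm{OPAUC}(\beta) = \int_{0}^{\beta} \mathrm{TPR}\!\left(\mathrm{FPR}^{-1}(u)\right) du ,
\qquad
\mathrm{TPAUC}(\alpha,\beta) = \int_{0}^{\beta} \max\!\left\{ \mathrm{TPR}\!\left(\mathrm{FPR}^{-1}(u)\right) - \alpha,\ 0 \right\} du ,
```
i.e., the ROC area restricted to the low-false-positive range $[0,\beta]$ (one-way), additionally clipped to the high-true-positive region $\mathrm{TPR} \ge \alpha$ (two-way).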
arXiv Detail & Related papers (2022-10-08T08:26:22Z)
- DRSOM: A Dimension Reduced Second-Order Method [13.778619250890406]
Under a trust-region-like framework, our method preserves the convergence of the second-order method while using only information along a few directions.
Theoretically, we show that the method has local convergence and a global convergence rate of $O(\epsilon^{-3/2})$ for satisfying the first-order and second-order conditions.
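One generic way to realize such a dimension-reduced second-order step is to minimize a regularized quadratic model of the objective restricted to a small subspace, here taken (purely for illustration) to be spanned by the current gradient and the previous step. The sketch below shows that generic construction on a toy quadratic; it is not the paper's DRSOM algorithm or its radius-control rules.
```python
import numpy as np

def subspace_second_order_step(grad_f, hess_vec, x, d_prev, reg=1e-4):
    """One dimension-reduced second-order step: build a 2D quadratic model
    over span{gradient, previous step} and solve the tiny system exactly."""
    g = grad_f(x)
    D = np.stack([g, d_prev], axis=1)                 # n x 2 direction matrix
    HD = np.stack([hess_vec(x, D[:, j]) for j in range(2)], axis=1)
    H_sub = D.T @ HD                                  # 2 x 2 reduced Hessian
    g_sub = D.T @ g                                   # reduced gradient
    alpha = np.linalg.solve(H_sub + reg * np.eye(2), -g_sub)
    step = D @ alpha
    return x + step, step

# Toy usage on a convex quadratic f(x) = 0.5 x^T Q x - b^T x (illustrative data).
rng = np.random.default_rng(1)
n = 10
Q = rng.standard_normal((n, n)); Q = Q @ Q.T + np.eye(n)
b = rng.standard_normal(n)
grad_f = lambda x: Q @ x - b
hess_vec = lambda x, v: Q @ v                         # Hessian-vector products only
x = np.zeros(n)
d_prev = -0.1 * grad_f(x)                             # warm-up: one small gradient step
x = x + d_prev
for _ in range(30):
    x, d_prev = subspace_second_order_step(grad_f, hess_vec, x, d_prev)
print("gradient norm:", np.linalg.norm(grad_f(x)))
```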
arXiv Detail & Related papers (2022-07-30T13:05:01Z)
- Stochastic Gradient Descent-Ascent and Consensus Optimization for Smooth Games: Convergence Analysis under Expected Co-coercivity [49.66890309455787]
We introduce the expected co-coercivity condition, explain its benefits, and provide the first last-iterate convergence guarantees of SGDA and SCO.
We prove linear convergence of both methods to a neighborhood of the solution when they use constant step-size.
Our convergence guarantees hold under the arbitrary sampling paradigm, and we give insights into the complexity of minibatching.
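As a concrete picture of the constant-step-size behavior described above, here is a minimal SGDA sketch on a toy strongly-convex-strongly-concave quadratic game with additive gradient noise; the game, noise model, and step size are assumptions made for illustration.
```python
import numpy as np

# Toy smooth game:  min_x max_y  (mu/2)*||x||^2 + x^T B y - (mu/2)*||y||^2
rng = np.random.default_rng(0)
d, mu = 5, 1.0
B = rng.standard_normal((d, d))

def stochastic_grads(x, y, noise=0.1):
    gx = mu * x + B @ y + noise * rng.standard_normal(d)    # stochastic grad wrt x
    gy = B.T @ x - mu * y + noise * rng.standard_normal(d)  # stochastic grad wrt y
    return gx, gy

x, y = np.ones(d), np.ones(d)
eta = 0.02                                   # constant step size (assumed)
for t in range(5000):
    gx, gy = stochastic_grads(x, y)
    x = x - eta * gx                         # descent step on x
    y = y + eta * gy                         # ascent step on y

# With a constant step size, the last iterate settles in a noise-dominated
# neighborhood of the unique solution (x*, y*) = (0, 0).
print("distance to solution:", np.linalg.norm(np.concatenate([x, y])))
```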
arXiv Detail & Related papers (2021-06-30T18:32:46Z)
- Converting ADMM to a Proximal Gradient for Convex Optimization Problems [4.56877715768796]
In sparse estimation, such as fused lasso and convex clustering, we apply either the proximal gradient method or the alternating direction method of multipliers (ADMM) to solve the problem.
This paper proposes a general method for converting the ADMM solution to the proximal gradient method, assuming that the constraints and objectives are strongly convex.
We show by numerical experiments that we can obtain a significant improvement in terms of efficiency.
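For readers unfamiliar with the target of the conversion, "the proximal gradient method" here refers to iterations of a gradient step followed by a proximal step; a minimal sketch for the plain lasso (whose prox is soft-thresholding) is below, with synthetic data for illustration. The fused-lasso and convex-clustering problems treated in the paper use more involved prox operators.
```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def lasso_ista(X, y, lam, n_iters=500):
    """Proximal gradient (ISTA) for  min_b  0.5*||y - X b||^2 + lam*||b||_1."""
    step = 1.0 / np.linalg.norm(X, ord=2) ** 2     # 1/L with L = ||X||_2^2
    b = np.zeros(X.shape[1])
    for _ in range(n_iters):
        grad = X.T @ (X @ b - y)                   # gradient of the smooth part
        b = soft_threshold(b - step * grad, step * lam)
    return b

# Toy usage with synthetic sparse data (illustrative only).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
b_true = np.zeros(20); b_true[:3] = [2.0, -1.5, 1.0]
y = X @ b_true + 0.1 * rng.standard_normal(100)
print(np.round(lasso_ista(X, y, lam=5.0), 2))
```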
arXiv Detail & Related papers (2021-04-22T07:41:12Z)
- On the Convergence of Continuous Constrained Optimization for Structure Learning [30.279796192573805]
We show the convergence of the augmented Lagrangian method (ALM) and the quadratic penalty method (QPM) for structure learning in the linear, nonlinear, and confounded cases.
We further establish the convergence guarantee of QPM to a DAG solution, under mild conditions.
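In its generic form, the quadratic penalty method (QPM) mentioned above repeatedly minimizes the objective plus a growing quadratic penalty on constraint violation; the sketch below uses a toy smooth objective and a single equality constraint (both invented for illustration), not the structure-learning formulation of the paper.
```python
import numpy as np
from scipy.optimize import minimize

# Toy equality-constrained problem:  min f(x)  s.t.  h(x) = 0
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2
h = lambda x: x[0] + x[1] - 2.0                      # scalar equality constraint

def quadratic_penalty_method(x0, rho0=1.0, growth=10.0, n_outer=8):
    x, rho = np.asarray(x0, dtype=float), rho0
    for _ in range(n_outer):
        # Minimize the penalized objective f(x) + (rho/2) * h(x)^2 (unconstrained).
        penalized = lambda x, rho=rho: f(x) + 0.5 * rho * h(x) ** 2
        x = minimize(penalized, x, method="BFGS").x
        rho *= growth                                # tighten the penalty
    return x

print(quadratic_penalty_method([0.0, 0.0]))          # approaches the constrained optimum
```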
arXiv Detail & Related papers (2020-11-23T00:29:37Z)
- Simple and optimal methods for stochastic variational inequalities, I: operator extrapolation [9.359939442911127]
We first present a novel operator extrapolation (OE) method for solving deterministic variational inequality (VI) problems.
We then introduce the stochastic operator extrapolation (SOE) method and establish its optimal convergence behavior for solving stochastic VI problems.
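As a rough illustration of what an operator-extrapolation-style update looks like in the deterministic case, the sketch below applies $z_{t+1} = \Pi_{\mathcal{Z}}\big(z_t - \eta\,[F(z_t) + (F(z_t) - F(z_{t-1}))]\big)$ to a toy matrix game on probability simplices; the extrapolation weight, step size, and iterate averaging are illustrative choices, not the exact scheme or parameters analyzed in the paper.
```python
import numpy as np

# Toy monotone VI: a matrix game, F(x, y) = (A y, -A^T x), over two probability simplices.
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))

def F(z):
    x, y = z[:n], z[n:]
    return np.concatenate([A @ y, -A.T @ x])

def project_simplex(v):
    """Euclidean projection onto the probability simplex (sort-and-threshold)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def project(z):
    return np.concatenate([project_simplex(z[:n]), project_simplex(z[n:])])

eta, T = 0.05, 2000                     # step size and iteration count (assumed)
z = np.ones(2 * n) / n                  # uniform strategies
Fz_prev, z_sum = F(z), np.zeros(2 * n)
for t in range(T):
    Fz = F(z)
    # extrapolated operator evaluation: F(z_t) + (F(z_t) - F(z_{t-1}))
    z = project(z - eta * (2.0 * Fz - Fz_prev))
    Fz_prev, z_sum = Fz, z_sum + z

z_bar = z_sum / T                       # averaged iterate (matches ergodic-rate guarantees)
print("averaged strategies:", np.round(z_bar[:n], 3), np.round(z_bar[n:], 3))
```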
arXiv Detail & Related papers (2020-11-05T17:20:19Z)
- Conditional gradient methods for stochastically constrained convex minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
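As a baseline picture of the conditional-gradient (Frank-Wolfe) step these methods build on, here is a minimal deterministic sketch over the probability simplex, where the linear minimization oracle is just a coordinate pick; the variance reduction, smoothing, and per-iteration constraint subsampling described above are deliberately omitted, and the toy problem is invented for illustration.
```python
import numpy as np

def frank_wolfe_simplex(grad_f, x0, n_iters=200):
    """Generic conditional gradient (Frank-Wolfe) over the probability simplex:
    the linear minimization oracle just picks the best vertex (a coordinate)."""
    x = x0.copy()
    for t in range(n_iters):
        g = grad_f(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0              # LMO: argmin_{s in simplex} <g, s>
        gamma = 2.0 / (t + 2.0)            # standard open-loop step size
        x = (1.0 - gamma) * x + gamma * s  # stay feasible by convex combination
    return x

# Toy usage: minimize a convex quadratic over the simplex (illustrative data).
rng = np.random.default_rng(0)
n = 6
Q = rng.standard_normal((n, n)); Q = Q @ Q.T + np.eye(n)
c = rng.standard_normal(n)
grad_f = lambda x: Q @ x + c
print(np.round(frank_wolfe_simplex(grad_f, np.ones(n) / n), 3))
```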
arXiv Detail & Related papers (2020-07-07T21:26:35Z)