Simple and optimal methods for stochastic variational inequalities, I:
operator extrapolation
- URL: http://arxiv.org/abs/2011.02987v5
- Date: Mon, 19 Jun 2023 06:52:49 GMT
- Title: Simple and optimal methods for stochastic variational inequalities, I:
operator extrapolation
- Authors: Georgios Kotsalis, Guanghui Lan, Tianjiao Li
- Abstract summary: We first present a novel operator extrapolation (OE) method for solving deterministic variational inequality (VI) problems.
We then introduce the stochastic operator extrapolation (SOE) method and establish its optimal convergence behavior for solving different stochastic VI problems.
- Score: 9.359939442911127
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we first present a novel operator extrapolation (OE) method for
solving deterministic variational inequality (VI) problems. Similar to the
gradient (operator) projection method, OE updates one single search sequence by
solving a single projection subproblem in each iteration. We show that OE can
achieve the optimal rate of convergence for solving a variety of VI problems in
a much simpler way than existing approaches. We then introduce the stochastic
operator extrapolation (SOE) method and establish its optimal convergence
behavior for solving different stochastic VI problems. In particular, SOE
achieves the optimal complexity for solving a fundamental problem, i.e.,
stochastic smooth and strongly monotone VI, for the first time in the
literature. We also present a stochastic block operator extrapolation (SBOE)
method to further reduce the iteration cost for the OE method applied to
large-scale deterministic VIs with a certain block structure. Numerical
experiments have been conducted to demonstrate the potential advantages of the
proposed algorithms. In fact, all these algorithms are applied to solve
generalized monotone variational inequality (GMVI) problems whose operator is
not necessarily monotone. We will also discuss optimal OE-based policy
evaluation methods for reinforcement learning in a companion paper.
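
For intuition, the following is a minimal sketch of the operator extrapolation update described above, written for a Euclidean setup. It is not the authors' reference implementation: the feasible set (a unit ball), the toy affine operator, and the constant step-size and extrapolation-weight schedules are assumptions made only for illustration. What it mirrors is the key structural point from the abstract: the operator values, not the iterates, are extrapolated, so each iteration still updates a single search sequence by solving a single projection subproblem.

```python
# Illustrative sketch of an operator extrapolation (OE) step for a Euclidean VI:
# find x* in X such that <F(x*), x - x*> >= 0 for all x in X.
# The feasible set (unit ball), the toy operator, and the step sizes below are
# assumptions for this example, not the paper's reference implementation.
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto a ball of the given radius (assumed feasible set X)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def operator_extrapolation(F, x0, gammas, lambdas, project=project_ball):
    """One search sequence, one projection subproblem per iteration:
    x_{t+1} = Proj_X( x_t - gamma_t * [F(x_t) + lambda_t * (F(x_t) - F(x_{t-1}))] ).
    """
    x, F_prev = x0.copy(), F(x0)
    for gamma_t, lambda_t in zip(gammas, lambdas):
        F_cur = F(x)
        # Extrapolate the operator values (not the iterates), then project once.
        x = project(x - gamma_t * (F_cur + lambda_t * (F_cur - F_prev)))
        F_prev = F_cur
    return x

if __name__ == "__main__":
    # Toy smooth, strongly monotone affine operator F(x) = A x + b.
    rng = np.random.default_rng(0)
    n = 5
    M = rng.standard_normal((n, n))
    A = M @ M.T / n + np.eye(n) + (M - M.T)   # positive-definite symmetric part + skew part
    b = rng.standard_normal(n)
    F = lambda x: A @ x + b

    L = np.linalg.norm(A, 2)                  # Lipschitz constant of F
    T = 500
    x_hat = operator_extrapolation(F, np.zeros(n),
                                   gammas=[1.0 / (3.0 * L)] * T,
                                   lambdas=[1.0] * T)
    # A small residual of the projected fixed-point map indicates an approximate VI solution.
    print(np.linalg.norm(x_hat - project_ball(x_hat - F(x_hat))))
```

A natural reading of the stochastic variant (SOE) mentioned above is to replace the exact evaluations F(x) in this sketch with unbiased estimates returned by a stochastic oracle at each iteration, leaving the single-projection structure of the update unchanged.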
Related papers
- Primal Methods for Variational Inequality Problems with Functional Constraints [25.261426717550293]
We propose a primal method, termed Constrained Gradient Method (CGM), for addressing functional constrained variational inequality problems.
We establish a non-asymptotic convergence analysis of the algorithm for variational inequality problems with monotone operators under smooth constraints.
Our algorithms match the complexity of projection-based methods in terms of operator queries for both monotone and strongly monotone settings.
arXiv Detail & Related papers (2024-03-19T16:03:03Z) - An Optimization-based Deep Equilibrium Model for Hyperspectral Image
Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - Sample Average Approximation for Black-Box VI [31.664188645648156]
We present a novel approach for black-box VI that bypasses the difficulties of gradient ascent.
We approximate the solution of stochastic optimization problems by transforming them into deterministic ones via sample average approximation.
Our experiments show that our method simplifies the VI problem and achieves faster performance than existing methods.
arXiv Detail & Related papers (2023-04-13T20:04:47Z) - Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling the differentiable and non-differentiable parts separately, linearizing only the smooth parts.
arXiv Detail & Related papers (2023-02-24T18:41:48Z) - A Primal-Dual Approach to Solving Variational Inequalities with General Constraints [54.62996442406718]
Yang et al. (2023) recently showed how to use first-order gradient methods to solve general variational inequalities.
We prove the convergence of this method and show that the gap function of the last iterate of the method decreases at a rate of $O(1/\sqrt{K})$ when the operator is $L$-Lipschitz and monotone.
arXiv Detail & Related papers (2022-10-27T17:59:09Z) - Exploring the Algorithm-Dependent Generalization of AUPRC Optimization
with List Stability [107.65337427333064]
Optimization of the Area Under the Precision-Recall Curve (AUPRC) is a crucial problem for machine learning.
In this work, we present the first trial in the algorithm-dependent generalization of AUPRC optimization.
Experiments on three image retrieval datasets speak to the effectiveness and soundness of our framework.
arXiv Detail & Related papers (2022-09-27T09:06:37Z) - Solving Constrained Variational Inequalities via an Interior Point
Method [88.39091990656107]
We develop an interior-point approach to solve constrained variational inequality (cVI) problems.
We provide convergence guarantees for ACVI in two general classes of problems.
Unlike previous work in this setting, ACVI provides a means to solve cVIs when the constraints are nontrivial.
arXiv Detail & Related papers (2022-06-21T17:55:13Z) - Meta-learning based Alternating Minimization Algorithm for Non-convex
Optimization [9.774392581946108]
We propose a novel solution for challenging non-convex problems of multiple variables.
Our proposed approach is able to achieve effective iterations in cases where other methods would typically fail.
arXiv Detail & Related papers (2020-09-09T10:45:00Z) - On the implementation of a global optimization method for mixed-variable
problems [0.30458514384586394]
The algorithm is based on the radial basis function method of Gutmann and the metric response surface method of Regis and Shoemaker.
We propose several modifications aimed at generalizing and improving these two algorithms.
arXiv Detail & Related papers (2020-09-04T13:36:56Z) - Follow the bisector: a simple method for multi-objective optimization [65.83318707752385]
We consider optimization problems where multiple differentiable losses have to be minimized.
The presented method computes a descent direction in every iteration that guarantees an equal relative decrease of the objective functions.
arXiv Detail & Related papers (2020-07-14T09:50:33Z) - The Strength of Nesterov's Extrapolation in the Individual Convergence
of Nonsmooth Optimization [0.0]
We prove that Nesterov's extrapolation has the strength to make the individual convergence of gradient descent methods optimal for nonsmooth problems.
We give an extension of the derived algorithms to solve regularized learning tasks with nonsmooth losses in stochastic settings.
Our method is applicable as an efficient tool for solving large-scale $\ell_1$-regularized hinge-loss learning problems.
arXiv Detail & Related papers (2020-06-08T03:35:41Z)