Adaptive and Universal Algorithms for Variational Inequalities with
Optimal Convergence
- URL: http://arxiv.org/abs/2010.07799v3
- Date: Thu, 26 Aug 2021 20:37:19 GMT
- Title: Adaptive and Universal Algorithms for Variational Inequalities with
Optimal Convergence
- Authors: Alina Ene, Huy L. Nguyen
- Abstract summary: We develop new adaptive algorithms for variational inequalities with monotone operators.
Our algorithms automatically adapt to unknown problem parameters.
We show that our algorithms are universal and simultaneously achieve the optimal convergence rates.
- Score: 29.189409618561964
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop new adaptive algorithms for variational inequalities with monotone
operators, which capture many problems of interest, notably convex optimization
and convex-concave saddle point problems. Our algorithms automatically adapt to
unknown problem parameters such as the smoothness and the norm of the operator,
and the variance of the stochastic evaluation oracle. We show that our
algorithms are universal and simultaneously achieve the optimal convergence
rates in the non-smooth, smooth, and stochastic settings. The convergence
guarantees of our algorithms improve over existing adaptive methods by a
$\Omega(\sqrt{\ln T})$ factor, matching the optimal non-adaptive algorithms.
Additionally, prior works require that the optimization domain is bounded. In
this work, we remove this restriction and give algorithms for unbounded domains
that are adaptive and universal. Our general proof techniques can be used for
many variants of the algorithm using one or two operator evaluations per
iteration. The classical methods based on the ExtraGradient/MirrorProx
algorithm require two operator evaluations per iteration, which is the dominant
factor in the running time in many settings.
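For context, here is a minimal sketch of the classical extragradient baseline the abstract contrasts against, not the paper's adaptive algorithm. The operator `F`, the fixed step size `eta`, and the bilinear demo are illustrative assumptions.

```python
import numpy as np

def extragradient_step(z, F, eta):
    """One ExtraGradient/MirrorProx iteration (Euclidean, unconstrained
    sketch) for a monotone VI: find z* with <F(z*), z - z*> >= 0 for all z.
    Note the two operator evaluations per step that the abstract identifies
    as the dominant running-time factor in many settings."""
    z_half = z - eta * F(z)      # extrapolation step: first evaluation
    return z - eta * F(z_half)   # update step: second evaluation

# Illustrative monotone operator: the bilinear saddle point
# min_x max_y x^T A y has VI operator F(x, y) = (A y, -A^T x).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

def F(z):
    x, y = z[:3], z[3:]
    return np.concatenate([A @ y, -A.T @ x])

eta = 0.5 / np.linalg.norm(A, 2)  # fixed step with eta * ||A|| < 1
z = np.ones(6)
for _ in range(5000):
    z = extragradient_step(z, F, eta)
print(np.linalg.norm(z))  # shrinks toward 0, the saddle point at the origin
```

The fixed step here requires knowing $\|A\|$ in advance; the adaptive algorithms of the paper are designed precisely to avoid that kind of prior parameter knowledge.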
Related papers
- Provably Faster Algorithms for Bilevel Optimization via Without-Replacement Sampling [96.47086913559289]
Gradient-based algorithms are widely used in bilevel optimization.
We introduce a without-replacement sampling-based algorithm that achieves a faster convergence rate.
We validate our algorithms over both synthetic and real-world applications.
arXiv Detail & Related papers (2024-11-07T17:05:31Z) - Performance Evaluation of Evolutionary Algorithms for Analog Integrated
Circuit Design Optimisation [0.0]
An automated sizing approach for analog circuits is presented in this paper.
A targeted search of the search space has been implemented using a particle generation function and a repair-bounds function.
The algorithms are tuned and modified so that they converge to better solutions.
arXiv Detail & Related papers (2023-10-19T03:26:36Z) - Accelerating Cutting-Plane Algorithms via Reinforcement Learning
Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z) - Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit analogies between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z) - Optimizing Optimizers: Regret-optimal gradient descent algorithms [9.89901717499058]
We study the existence, uniqueness and consistency of regret-optimal algorithms.
By providing first-order optimality conditions for the control problem, we show that regret-optimal algorithms must satisfy a specific structure in their dynamics.
We present fast numerical methods for approximating them, generating optimization algorithms which directly optimize their long-term regret.
arXiv Detail & Related papers (2020-12-31T19:13:53Z) - Optimal and Practical Algorithms for Smooth and Strongly Convex
Decentralized Optimization [21.555331273873175]
We consider the task of decentralized minimization of the sum of smooth strongly convex functions stored across the nodes of a network.
We propose two new algorithms for this decentralized optimization problem and equip them with complexity guarantees.
arXiv Detail & Related papers (2020-06-21T11:23:20Z) - Convergence of adaptive algorithms for weakly convex constrained
optimization [59.36386973876765]
We prove a $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope (see the note after this list).
Our analysis works with a mini-batch size of $1$, constant first and second order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z) - Private Stochastic Convex Optimization: Optimal Rates in Linear Time [74.47681868973598]
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving convex optimization algorithms both achieving the optimal bound on excess loss and using $O(\min\{n, n^2/d\})$ gradient computations.
arXiv Detail & Related papers (2020-05-10T19:52:03Z) - Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm provably achieves the best-available convergence rate for non-PL objectives while simultaneously outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)