Beyond the Golden Ratio for Variational Inequality Algorithms
- URL: http://arxiv.org/abs/2212.13955v1
- Date: Wed, 28 Dec 2022 16:58:48 GMT
- Title: Beyond the Golden Ratio for Variational Inequality Algorithms
- Authors: Ahmet Alacaoglu, Axel Böhm, Yura Malitsky
- Abstract summary: We improve the understanding of the $\textit{golden ratio algorithm}$, which solves monotone variational inequalities (VI) and convex-concave min-max problems.
We introduce a new analysis that allows larger step sizes, completing the bridge between the golden ratio algorithm and the existing algorithms in the literature.
- Score: 12.470097382737933
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We improve the understanding of the $\textit{golden ratio algorithm}$, which
solves monotone variational inequalities (VI) and convex-concave min-max
problems via the distinctive feature of adapting the step sizes to the local
Lipschitz constants. Adaptive step sizes not only eliminate the need to pick
hyperparameters, but they also remove the necessity of global Lipschitz
continuity and can increase from one iteration to the next.
We first establish the equivalence of this algorithm with popular VI methods
such as reflected gradient, Popov or optimistic gradient descent-ascent in the
unconstrained case with constant step sizes. We then move on to the constrained
setting and introduce a new analysis that allows the use of larger step sizes,
completing the bridge between the golden ratio algorithm and the existing
algorithms in the literature. In doing so, we eliminate the link between
the golden ratio $\frac{1+\sqrt{5}}{2}$ and the algorithm. Moreover, we improve
the adaptive version of the algorithm, first by removing the maximum step size
hyperparameter (an artifact from the analysis) to improve the complexity bound,
and second by adjusting it to nonmonotone problems with weak Minty solutions,
with superior empirical performance.
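To make the claimed equivalence concrete, here is a short derivation sketch of how the unconstrained, constant step-size golden ratio algorithm collapses into optimistic gradient descent-ascent once the golden ratio is replaced by a free averaging parameter $\beta$; the notation ($\bar{x}_k$ for the averaged iterate, $\lambda$ for the step size) follows Malitsky's GRAAL, and the paper's exact parameterization may differ.
```latex
% Unconstrained GRAAL with constant step size \lambda and averaging
% parameter \beta (\beta = \varphi = (1+\sqrt{5})/2 is the original):
\[
  \bar{x}_k = \frac{(\beta - 1)\,x_k + \bar{x}_{k-1}}{\beta},
  \qquad
  x_{k+1} = \bar{x}_k - \lambda F(x_k).
\]
% The second update, shifted back one index, gives
% \bar{x}_{k-1} = x_k + \lambda F(x_{k-1}); substituting it into the
% first eliminates the averaged iterate:
\[
  x_{k+1} = x_k - \lambda F(x_k) + \tfrac{\lambda}{\beta}\,F(x_{k-1}).
\]
% For \beta = 2 and \eta = \lambda/2 this is exactly optimistic
% gradient descent-ascent: x_{k+1} = x_k - \eta\,(2F(x_k) - F(x_{k-1})),
% which is why letting \beta move away from \varphi bridges GRAAL and
% the existing methods.
```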
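For the adaptive version, the following is a minimal numerical sketch in Python. The step-size rule follows Malitsky's original adaptive GRAAL and keeps the maximum step size cap `lam_max`, precisely the hyperparameter the abstract says this paper removes; `F`, `proj`, and all parameter names are illustrative assumptions, not the authors' code.
```python
import numpy as np

PHI = (1 + np.sqrt(5)) / 2  # golden ratio, approx. 1.618

def adaptive_graal(F, proj, x0, x1, lam0=1.0, lam_max=1e6, n_iters=1000):
    """Sketch of adaptive GRAAL for the VI: find x* with <F(x*), x - x*> >= 0.

    F    -- operator of the variational inequality, F(x) -> ndarray
    proj -- Euclidean projection onto the feasible set C
    """
    rho = 1 / PHI + 1 / PHI ** 2        # step-size growth factor
    x_prev, x = x0, x1
    x_bar = x1.copy()
    lam = lam0
    F_prev = F(x_prev)
    for _ in range(n_iters):
        Fx = F(x)
        num = np.linalg.norm(x - x_prev) ** 2
        den = np.linalg.norm(Fx - F_prev) ** 2
        # adapt the step to a local inverse-Lipschitz estimate; the cap
        # lam_max is the analysis artifact the paper removes
        if den > 0:
            lam = min(rho * lam, PHI * num / (4 * lam * den), lam_max)
        else:
            lam = min(rho * lam, lam_max)
        x_bar = ((PHI - 1) * x + x_bar) / PHI   # golden-ratio averaging
        x_prev, F_prev = x, Fx
        x = proj(x_bar - lam * Fx)
    return x
```
For a quick unconstrained test one can take `proj = lambda z: z` and a monotone linear operator such as `F = lambda z: A @ z` with a skew-symmetric `A`; note that no global Lipschitz constant is supplied anywhere.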
Related papers
- Improved Parallel Algorithm for Non-Monotone Submodular Maximization under Knapsack Constraint [0.0]
This work proposes an efficient parallel algorithm for non-monotone submodular maximization under a knapsack constraint.
Our algorithm improves the factor of the existing parallel algorithm from $8+\epsilon$ to $7+\epsilon$ with $O(\log n)$ adaptive complexity.
arXiv Detail & Related papers (2024-09-06T17:17:52Z)
- Adaptive and Optimal Second-order Optimistic Methods for Minimax Optimization [32.939120407900035]
Our algorithms feature a simple update rule that requires solving only one linear system per iteration.
We also evaluate the practical performance of our algorithm by comparing it to existing second-order algorithms for minimax optimization.
arXiv Detail & Related papers (2024-06-04T06:56:41Z)
- Accelerated SGD for Non-Strongly-Convex Least Squares [14.010916616909743]
We consider stochastic approximation for the least squares regression problem in the non-strongly convex setting.
We present the first practical algorithm that achieves the optimal prediction error rates in terms of dependence on the noise of the problem.
arXiv Detail & Related papers (2022-03-03T14:39:33Z)
- Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We design two optimal algorithms that attain these lower bounds.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z)
- Train simultaneously, generalize better: Stability of gradient-based minimax learners [12.691047660244331]
We show that algorithmic stability plays a key role in the generalization performance of the trained minimax model under both convex-concave and nonconvex-nonconcave minimax settings.
We discuss several numerical results indicating the role of optimization algorithms in the generalization of learned minimax models.
arXiv Detail & Related papers (2020-10-23T17:44:43Z)
- An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z)
- Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point each step.
Our results are expressed in the form of simultaneous primal- and dual-side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z)
- Balancing Rates and Variance via Adaptive Batch-Size for Stochastic Optimization Problems [120.21685755278509]
In this work, we seek to balance the fact that an attenuating step-size is required for exact convergence against the fact that a constant step-size learns faster, up to an error.
Rather than fixing the minibatch size and the step-size at the outset, we propose to allow these parameters to evolve adaptively.
arXiv Detail & Related papers (2020-07-02T16:02:02Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first- and second-order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
- Towards Better Understanding of Adaptive Gradient Algorithms in Generative Adversarial Nets [71.05306664267832]
Adaptive algorithms perform gradient updates using the history of gradients and are ubiquitous in training deep neural networks.
In this paper we analyze a variant of the Optimistic AdaGrad algorithm for nonconvex-nonconcave min-max problems.
Our experiments show that the advantage of adaptive over non-adaptive gradient algorithms in GAN training can be observed empirically.
arXiv Detail & Related papers (2019-12-26T22:10:10Z)