Accelerating Smooth Games by Manipulating Spectral Shapes
- URL: http://arxiv.org/abs/2001.00602v2
- Date: Mon, 9 Mar 2020 14:51:46 GMT
- Title: Accelerating Smooth Games by Manipulating Spectral Shapes
- Authors: Waïss Azizian, Damien Scieur, Ioannis Mitliagkas, Simon
Lacoste-Julien, Gauthier Gidel
- Abstract summary: We use matrix iteration theory to characterize acceleration in smooth games.
We describe gradient-based methods, such as extragradient, as transformations on the spectral shape.
We propose an optimal algorithm for bilinear games.
- Score: 51.366219027745174
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We use matrix iteration theory to characterize acceleration in smooth games.
We define the spectral shape of a family of games as the set containing all
eigenvalues of the Jacobians of standard gradient dynamics in the family.
Shapes restricted to the real line represent well-understood classes of
problems, like minimization. Shapes spanning the complex plane capture the
added numerical challenges in solving smooth games. In this framework, we
describe gradient-based methods, such as extragradient, as transformations on
the spectral shape. Using this perspective, we propose an optimal algorithm for
bilinear games. For smooth and strongly monotone operators, we identify a
continuum between convex minimization, where acceleration is possible using
Polyak's momentum, and the worst case where gradient descent is optimal.
Finally, going beyond first-order methods, we propose an accelerated version of
consensus optimization.
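To make the spectral-shape picture concrete, here is a minimal numerical sketch (not code from the paper) for the bilinear game $\min_x \max_y x^\top A y$: the Jacobian of the simultaneous gradient dynamics is skew-symmetric, so its eigenvalues (the spectral shape) lie on the imaginary axis; gradient descent maps each eigenvalue $\lambda$ to $1 - \eta\lambda$, which stays outside the unit disk, while extragradient maps it to $1 - \eta\lambda + (\eta\lambda)^2$, pulling the shape inside. The stepsize rule below is an illustrative assumption.

```python
# A minimal sketch of the "spectral shape" view on a bilinear game.
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))

# Jacobian of the simultaneous gradient dynamics v(x, y) = (A y, -A^T x).
J = np.block([[np.zeros((n, n)), A], [-A.T, np.zeros((n, n))]])
lams = np.linalg.eigvals(J)            # purely imaginary: the spectral shape
eta = 0.5 / np.abs(lams).max()         # illustrative stepsize choice

rho_gd = np.abs(1 - eta * lams).max()                      # gradient descent
rho_eg = np.abs(1 - eta * lams + (eta * lams) ** 2).max()  # extragradient
print(f"spectral radius, GD:            {rho_gd:.4f}  (> 1, diverges)")
print(f"spectral radius, extragradient: {rho_eg:.4f}  (< 1, converges)")
```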
Related papers
- Methods for Convex $(L_0,L_1)$-Smooth Optimization: Clipping, Acceleration, and Adaptivity [50.25258834153574]
We focus on the class of (strongly) convex $(L_0,L_1)$-smooth functions and derive new convergence guarantees for several existing methods.
In particular, we derive improved convergence rates for Gradient Descent with Smoothed Gradient Clipping and for Gradient Descent with Polyak Stepsizes.
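As a concrete illustration, here is a hedged sketch of Gradient Descent with Polyak stepsizes, $\eta_k = (f(x_k) - f^*)/\|\nabla f(x_k)\|^2$, one of the methods the paper revisits; the quadratic objective and the known optimal value $f^* = 0$ are illustrative assumptions, not the paper's setting.

```python
# A toy run of Gradient Descent with the Polyak stepsize.
import numpy as np

def f(x):    return 0.5 * x @ x          # toy convex objective, f_star = 0
def grad(x): return x

x, f_star = np.ones(10), 0.0
for k in range(50):
    g = grad(x)
    eta = (f(x) - f_star) / (g @ g + 1e-12)  # Polyak stepsize
    x = x - eta * g
print("final suboptimality:", f(x) - f_star)
```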
arXiv Detail & Related papers (2024-09-23T13:11:37Z) - Gradient-Variation Online Learning under Generalized Smoothness [56.38427425920781]
Gradient-variation online learning aims to achieve regret guarantees that scale with variations in the gradients of the online functions.
Recent efforts in neural network optimization suggest a generalized smoothness condition, allowing smoothness to correlate with gradient norms.
We provide applications to fast-rate convergence in games and to extended adversarial optimization.
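The flavor of such guarantees can be illustrated with optimistic online gradient descent, whose regret scales with the gradient variation $\sum_t \|g_t - g_{t-1}\|^2$; the toy quadratic losses and fixed stepsize below are illustrative assumptions, not the paper's algorithm.

```python
# A minimal sketch of optimistic online gradient descent: the hint for
# round t is the previous gradient, so regret tracks gradient variation.
import numpy as np

eta, d, T = 0.1, 5, 100
x_hat = np.zeros(d)                 # base iterate
m = np.zeros(d)                     # optimistic hint: last observed gradient
rng = np.random.default_rng(1)
c = rng.standard_normal(d)          # slowly drifting environment

for t in range(T):
    x = x_hat - eta * m             # play using the hint
    g = x - (c + 0.01 * t)          # gradient of f_t(x) = 0.5||x - c_t||^2
    x_hat = x_hat - eta * g         # update the base iterate
    m = g                           # next round's hint
```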
arXiv Detail & Related papers (2024-08-17T02:22:08Z) - Euler's Elastica Based Cartoon-Smooth-Texture Image Decomposition [4.829677240798159]
We propose a novel model for decomposing grayscale images into three distinct components.
The structural part represents strong boundaries and regions with strong light-to-dark transitions; the smooth part captures soft shadows and shading; and the oscillatory part characterizes textures and noise.
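As a crude stand-in for the variational model (not the paper's Euler's-elastica energy), a two-scale Gaussian split makes the three-component idea concrete:

```python
# A toy three-way decomposition via two Gaussian scales; the sigmas are
# arbitrary illustrative choices, not the paper's model.
import numpy as np
from scipy.ndimage import gaussian_filter

img = np.random.rand(64, 64)                        # placeholder grayscale image
smooth = gaussian_filter(img, sigma=8)              # soft shadows / shading
structure = gaussian_filter(img, sigma=1) - smooth  # strong transitions
oscillatory = img - structure - smooth              # textures and noise
assert np.allclose(structure + smooth + oscillatory, img)
```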
arXiv Detail & Related papers (2024-07-03T03:42:33Z) - ELRA: Exponential learning rate adaption gradient descent optimization
method [83.88591755871734]
We present a novel, fast (exponential rate adaptation), ab initio (hyperparameter-free) gradient-based optimization method.
The main idea of the method is to adapt the learning rate $\alpha$ by situational awareness.
It can be applied to problems of any dimension $n$ and scales only linearly with $n$.
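The exact ELRA update is in the paper; a generic multiplicative ("exponential") learning-rate adaptation of the same flavor looks like the sketch below, where the grow/shrink factors are illustrative assumptions:

```python
# A hedged sketch of multiplicative learning-rate adaptation:
# grow alpha while progress is made, shrink it on failure.
import numpy as np

def f(x):    return 0.5 * x @ x
def grad(x): return x

x, alpha = np.ones(20), 1e-3
for _ in range(100):
    step = x - alpha * grad(x)
    if f(step) < f(x):
        x, alpha = step, alpha * 2.0   # success: grow the rate exponentially
    else:
        alpha *= 0.5                   # failure: shrink and retry
print("alpha:", alpha, "f(x):", f(x))
```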
arXiv Detail & Related papers (2023-09-12T14:36:13Z) - Convex and Non-convex Optimization Under Generalized Smoothness [69.69521650503431]
Classical analysis of convex and non-convex optimization methods often requires the Lipschitz continuity of the gradient, which limits the analysis to functions bounded by quadratics.
Recent work generalizes this requirement via a non-uniform smoothness condition.
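A standard method in this generalized-smoothness setting is clipped gradient descent; the sketch below uses the classic exponential objective, which satisfies $\|\nabla^2 f(x)\| \le L_0 + L_1 \|\nabla f(x)\|$ but is not Lipschitz-smooth. The stepsize and clipping threshold are illustrative assumptions.

```python
# Clipped gradient descent on a generalized-smooth (but not L-smooth) function.
import numpy as np

def f(x):    return np.exp(x) + np.exp(-x)   # f'' = f: curvature grows with |f'|
def grad(x): return np.exp(x) - np.exp(-x)

x, eta, clip = 10.0, 0.5, 1.0
for _ in range(100):
    g = grad(x)
    x -= eta * g * min(1.0, clip / (abs(g) + 1e-12))  # clip the step length
print("x:", x, "f(x):", f(x))                          # approaches min f = 2
```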
arXiv Detail & Related papers (2023-06-02T04:21:59Z) - Adaptive Approximate Implicitization of Planar Parametric Curves via
Weak Gradient Constraints [11.559302398966468]
We introduce a new regularization constraint for both polynomial and non-polynomial curves.
We then propose two adaptive algorithms of approximate implicitization for both polynomial and non-polynomial curves.
More precisely, the idea is gradually increasing the implicit degree, until there is no obvious improvement in the weak gradient loss of the outputs.
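A minimal sketch of this degree-raising loop follows; the SVD residual of a monomial matrix is a generic stand-in for the paper's weak gradient loss, and the stopping tolerance is an illustrative assumption.

```python
# Adaptive approximate implicitization: fit an implicit polynomial
# sum_{i+j<=d} c_ij x^i y^j that vanishes on curve samples, raising the
# degree d until the residual stops improving.
import numpy as np

t = np.linspace(0, 2 * np.pi, 200)
x, y = np.cos(t), np.sin(t)              # parametric samples (unit circle)

def fit(deg):
    cols = [x**i * y**j for i in range(deg + 1) for j in range(deg + 1 - i)]
    M = np.stack(cols, axis=1)
    _, s, _ = np.linalg.svd(M)
    return s[-1]                         # residual of the best unit-norm fit

prev = np.inf
for d in range(1, 8):
    res = fit(d)
    print(f"degree {d}: residual {res:.2e}")
    if prev - res < 1e-8:                # no obvious improvement: stop
        break
    prev = res
```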
arXiv Detail & Related papers (2023-02-23T04:08:53Z) - Hessian Based Smoothing Splines for Manifold Learning [0.228438857884398]
We propose a multidimensional smoothing spline algorithm in the context of manifold learning.
We generalize the bending energy penalty of thin-plate splines to a quadratic form on the Sobolev space of a flat manifold.
The existence and uniqueness of the solution are shown by applying the theory of reproducing kernel Hilbert spaces.
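A toy one-dimensional analogue (not the paper's manifold construction) shows the fidelity-versus-bending-energy trade-off that the paper generalizes:

```python
# Discrete smoothing "spline": minimize ||y - c||^2 + lam * ||D2 c||^2,
# where D2 is the second-difference operator (a discrete bending energy).
import numpy as np

n = 100
x = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * x) + 0.1 * np.random.default_rng(2).standard_normal(n)

D2 = np.diff(np.eye(n), n=2, axis=0)     # (n-2) x n second differences
lam = 10.0                               # controls the smoothness trade-off
c = np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)   # smoothed curve
```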
arXiv Detail & Related papers (2023-02-10T02:49:05Z) - No-Regret Dynamics in the Fenchel Game: A Unified Framework for
Algorithmic Convex Optimization [20.718016474717196]
We develop an algorithmic framework for solving convex optimization problems using no-regret game dynamics.
A common choice for these strategies are so-called no-regret learning algorithms.
We show that many classical first-order methods for convex optimization can be interpreted as special cases of our framework.
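A minimal sketch of this game dynamic for $f(x) = \tfrac{1}{2}\|x\|^2$, whose conjugate is $f^*(y) = \tfrac{1}{2}\|y\|^2$: two gradient-based no-regret players act on the payoff $g(x, y) = \langle x, y \rangle - f^*(y)$, and the averaged $x$-iterates minimize $f$. The fixed stepsizes and uniform averaging are simplifying assumptions.

```python
# Two no-regret players on the Fenchel game payoff g(x, y) = <x, y> - f*(y).
import numpy as np

d, T, eta = 5, 2000, 0.05
x, y = np.ones(d), np.zeros(d)
x_avg = np.zeros(d)
for t in range(T):
    gx = y                    # d g / d x
    gy = x - y                # d g / d y
    x = x - eta * gx          # x-player: gradient descent on g
    y = y + eta * gy          # y-player: gradient ascent on g
    x_avg += x / T            # uniform average of the x-iterates
print("f(x_avg) =", 0.5 * x_avg @ x_avg)   # approaches min f = 0
```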
arXiv Detail & Related papers (2021-11-22T16:10:18Z) - Average-case Acceleration for Bilinear Games and Normal Matrices [35.14328074551363]
We show that, for zero-sum bilinear games, the average-case optimal method is the optimal method for the minimization of the Hamiltonian.
We specialize it to matrices with eigenvalues located in a disk and show a provable speed-up compared to worst-case optimal algorithms.
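A hedged sketch of the Hamiltonian connection: for the bilinear game $\min_x \max_y x^\top A y$, running plain gradient descent on the Hamiltonian $H(x, y) = \tfrac{1}{2}(\|Ay\|^2 + \|A^\top x\|^2)$ drives the iterates toward the equilibrium at the origin. The stepsize choice is illustrative only.

```python
# Gradient descent on the Hamiltonian of a bilinear game.
import numpy as np

rng = np.random.default_rng(3)
n = 5
A = rng.standard_normal((n, n))
x, y = np.ones(n), np.ones(n)

eta = 0.5 / np.linalg.norm(A, 2) ** 2      # crude stepsize for H
for _ in range(500):
    # dH/dx = A A^T x,  dH/dy = A^T A y
    x, y = x - eta * (A @ A.T @ x), y - eta * (A.T @ A @ y)
print("H =", 0.5 * (np.linalg.norm(A @ y)**2 + np.linalg.norm(A.T @ x)**2))
```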
arXiv Detail & Related papers (2020-10-05T15:13:37Z) - Towards Better Understanding of Adaptive Gradient Algorithms in
Generative Adversarial Nets [71.05306664267832]
Adaptive algorithms perform gradient updates using the history of gradients and are ubiquitous in training deep neural networks.
In this paper we analyze a variant of the Optimistic Adagrad algorithm for nonconvex min-max problems.
Our experiments show that the advantage of adaptive over non-adaptive gradient algorithms in GAN training can be observed empirically.
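A minimal sketch, assuming the analyzed variant combines the optimistic update $z_{t+1} = z_t - \eta(2g_t - g_{t-1})$ with Adagrad-style per-coordinate scaling; the toy bilinear min-max problem stands in for GAN training, and every detail below is an illustrative assumption rather than the paper's exact algorithm.

```python
# Optimistic gradient steps with an Adagrad-style preconditioner on a
# toy bilinear min-max problem min_x max_y x^T A y.
import numpy as np

rng = np.random.default_rng(4)
n = 4
A = rng.standard_normal((n, n))
x, y = np.ones(n), np.ones(n)
g_prev = np.zeros(2 * n)
G = np.zeros(2 * n)                          # Adagrad accumulator
eta, eps = 0.1, 1e-8

for t in range(2000):
    g = np.concatenate([A @ y, -(A.T @ x)])  # simultaneous gradient
    G += g * g
    step = eta * (2 * g - g_prev) / (np.sqrt(G) + eps)
    x, y = x - step[:n], y - step[n:]
    g_prev = g
# With the diminishing Adagrad stepsizes, the iterates drift toward
# the equilibrium at the origin.
print("distance to equilibrium:", np.linalg.norm(np.concatenate([x, y])))
```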
arXiv Detail & Related papers (2019-12-26T22:10:10Z)