Constants of Motion: The Antidote to Chaos in Optimization and Game Dynamics
- URL: http://arxiv.org/abs/2109.03974v2
- Date: Fri, 10 Sep 2021 13:46:59 GMT
- Title: Constants of Motion: The Antidote to Chaos in Optimization and Game Dynamics
- Authors: Georgios Piliouras and Xiao Wang
- Abstract summary: Several recent works in online optimization and game dynamics have established strong negative complexity results.
These results motivate the following question: Which methodological tools can guarantee the regularity of such dynamics?
We show that proving the existence of invariant functions, i.e., constants of motion, is a fundamental contribution in this direction.
- Score: 36.09131227448527
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Several recent works in online optimization and game dynamics have
established strong negative complexity results including the formal emergence
of instability and chaos even in small such settings, e.g., $2\times 2$ games.
These results motivate the following question: Which methodological tools can
guarantee the regularity of such dynamics and how can we apply them in standard
settings of interest such as discrete-time first-order optimization dynamics?
We show how proving the existence of invariant functions, i.e., constants of
motion, is a fundamental contribution in this direction and establish a
plethora of such positive results (e.g., gradient descent, multiplicative
weights update, alternating gradient descent and manifold gradient descent)
both in optimization as well as in game settings. At a technical level, for
some conservation laws we provide an explicit and concise closed form, whereas
for others we present non-constructive proofs using tools from dynamical
systems.
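To make the notion of a constant of motion concrete, here is a minimal NumPy sketch (an illustration of the concept, not the paper's construction): for alternating gradient descent-ascent on the bilinear game $f(x, y) = xy$ with step size $\eta$, one can check algebraically that the quadratic $Q(x, y) = x^2 + y^2 - \eta xy$ is preserved exactly along the trajectory.

```python
import numpy as np

# Alternating gradient descent-ascent on the bilinear game f(x, y) = x * y:
# x takes a descent step, then y takes an ascent step using the updated x.
# The quadratic Q(x, y) = x^2 + y^2 - eta * x * y is an exact invariant
# (constant of motion) of this discrete-time dynamic.
eta = 0.1                      # step size (any eta with eta^2 < 4 works)
x, y = 1.0, 0.5                # arbitrary initial point

Q0 = x**2 + y**2 - eta * x * y
for t in range(10_000):
    x = x - eta * y            # descent step for the minimizing player
    y = y + eta * x            # ascent step using the fresh x (alternating)

Q = x**2 + y**2 - eta * x * y
print(abs(Q - Q0))             # drift is floating-point roundoff only
```

The invariant confines the iterates to a level set of $Q$, which is precisely the kind of regularity that the negative chaos results rule out in its absence.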
Related papers
- Series of Hessian-Vector Products for Tractable Saddle-Free Newton Optimisation of Neural Networks [1.3654846342364308]
We show that the first scalable optimisation algorithm can efficiently use the exact inverse Hessian with absolute-value eigenvalues.
A truncated run of this series provides a new optimisation method that is comparable to other first- and second-order optimisation methods.
arXiv Detail & Related papers (2023-10-23T13:11:30Z)
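The primitive underlying the paper above is the Hessian-vector product, which never materializes the full Hessian. A generic sketch (not the paper's series construction), using a central finite difference of the gradient:

```python
import numpy as np

def hvp(grad_f, x, v, eps=1e-5):
    """Approximate the Hessian-vector product H(x) @ v via a central
    difference of the gradient; costs two gradient evaluations and
    never forms the Hessian matrix."""
    return (grad_f(x + eps * v) - grad_f(x - eps * v)) / (2 * eps)

# Example: f(x) = 0.5 * x^T A x has gradient A @ x and Hessian A.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
grad_f = lambda x: A @ x
x = np.array([1.0, -1.0])
v = np.array([0.5, 2.0])
print(hvp(grad_f, x, v))       # close to A @ v
print(A @ v)                   # exact reference
```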
- ELRA: Exponential learning rate adaption gradient descent optimization method [83.88591755871734]
We present a novel, fast (exponential-rate), ab initio (hyperparameter-free) gradient-based adaptation method.
The main idea of the method is to adapt the learning rate $\alpha$ by situational awareness.
It can be applied to problems of any dimension $n$ and scales only linearly with $n$.
arXiv Detail & Related papers (2023-09-12T14:36:13Z)
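The ELRA abstract is terse, so the following is only a generic illustration of "situational" step-size adaptation (a standard grow/shrink heuristic, not the ELRA update rule): enlarge $\alpha$ after a successful step, shrink it after an overshoot.

```python
import numpy as np

def adaptive_gd(f, grad_f, x, alpha=1e-3, grow=2.0, shrink=0.5, steps=200):
    """Gradient descent with a multiplicatively adapted step size.

    Accept a step and enlarge alpha if the loss decreased; otherwise
    reject the step and shrink alpha. (Illustrative only; not ELRA.)
    """
    fx = f(x)
    for _ in range(steps):
        x_new = x - alpha * grad_f(x)
        fx_new = f(x_new)
        if fx_new < fx:                 # successful step: keep it, be bolder
            x, fx, alpha = x_new, fx_new, alpha * grow
        else:                           # overshoot: discard it, be cautious
            alpha *= shrink
    return x

# Quadratic test problem; the per-step cost scales linearly in n.
n = 100
d = np.linspace(1.0, 10.0, n)           # eigenvalues of a diagonal Hessian
x = adaptive_gd(lambda x: 0.5 * np.sum(d * x**2), lambda x: d * x, np.ones(n))
print(0.5 * np.sum(d * x**2))           # small final loss
```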
- Omega: Optimistic EMA Gradients [17.11877451333674]
We introduce Omega, a method with optimistic-like updates that mitigates the impact of noise by incorporating historic gradients in its update rule.
Although we do not provide convergence guarantees, our experiments on games show that Omega outperforms the optimistic gradient method when applied to linear players.
arXiv Detail & Related papers (2023-06-13T16:56:13Z)
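Reading the blurb above, the core idea appears to be an optimistic-style update whose correction term is an exponential moving average of past gradients rather than just the last one. A hedged sketch of that idea (the exact Omega rule may differ) on the bilinear game $f(x, y) = xy$:

```python
import numpy as np

def F(z):
    """Simultaneous gradient field (df/dx, -df/dy) for f(x, y) = x * y,
    where x minimizes and y maximizes."""
    x, y = z
    return np.array([y, -x])

eta, beta = 0.1, 0.5           # step size and EMA decay (assumed values)
z = np.array([1.0, 1.0])       # (x, y)
m = np.zeros(2)                # EMA of past simultaneous gradients

for t in range(2000):
    g = F(z)
    z = z - eta * (2 * g - m)  # optimistic step: extrapolate against the EMA
    m = beta * m + (1 - beta) * g

print(np.linalg.norm(z))       # spirals in toward the equilibrium (0, 0);
                               # plain simultaneous GDA diverges here
```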
- Improved Convergence Rate of Stochastic Gradient Langevin Dynamics with Variance Reduction and its Application to Optimization [50.83356836818667]
Stochastic Gradient Langevin Dynamics is one of the most fundamental algorithms for solving non-convex optimization problems.
In this paper, we show two variants of this kind, namely Variance Reduced Langevin Dynamics and Recursive Gradient Langevin Dynamics.
arXiv Detail & Related papers (2022-03-30T11:39:00Z)
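For context, the basic stochastic gradient Langevin dynamics update adds Gaussian noise to each gradient step, letting the iterates escape poor local minima of a non-convex objective. A minimal sketch of the plain update, without the variance-reduction machinery of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sgld(grad_f, x, eta=1e-3, beta=10.0, steps=5000):
    """Plain stochastic gradient Langevin dynamics (no variance reduction):
    x <- x - eta * grad_f(x) + sqrt(2 * eta / beta) * xi, xi ~ N(0, I),
    where beta is the inverse temperature."""
    for _ in range(steps):
        xi = rng.standard_normal(x.shape)
        x = x - eta * grad_f(x) + np.sqrt(2 * eta / beta) * xi
    return x

# Non-convex double-well f(x) = (x^2 - 1)^2 with minima at x = +1 and -1.
grad_f = lambda x: 4 * x * (x**2 - 1)
print(sgld(grad_f, np.array([2.0])))   # typically ends near +1 or -1
```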
- Value Iteration in Continuous Actions, States and Time [99.00362538261972]
We propose a continuous fitted value iteration (cFVI) algorithm for continuous states and actions.
The optimal policy can be derived for non-linear control-affine dynamics.
Videos of the physical system are available at https://sites.google.com/view/value-iteration.
arXiv Detail & Related papers (2021-05-10T21:40:56Z)
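As background for cFVI, here is classical tabular value iteration, the discrete counterpart that cFVI extends to continuous states and actions (a textbook sketch, unrelated to the paper's implementation):

```python
import numpy as np

# Toy chain MDP: states 0..4, actions left/right, reward on reaching state 4.
n_states, gamma = 5, 0.9
V = np.zeros(n_states)

def step(s, a):
    """Deterministic transition for action a in {-1, +1} (left/right)."""
    s2 = min(max(s + a, 0), n_states - 1)
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

for _ in range(100):                    # repeated Bellman backups
    V = np.array([max(r + gamma * V[s2]
                      for s2, r in (step(s, -1), step(s, +1)))
                  for s in range(n_states)])

print(np.round(V, 3))   # values decay geometrically with distance to the goal
```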
- Self-Tuning Stochastic Optimization with Curvature-Aware Gradient Filtering [53.523517926927894]
We explore the use of exact per-sample Hessian-vector products and gradients to construct self-tuning quadratics.
We prove that our model-based procedure converges in the noisy gradient setting.
This is an interesting step toward constructing self-tuning quadratics.
arXiv Detail & Related papers (2020-11-09T22:07:30Z)
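One concrete way Hessian-vector products enable "self-tuning" is an exact line search on the local quadratic model: along the direction $-g$ the model-optimal step size is $\eta = g^\top g / g^\top H g$, costing a single Hessian-vector product per iteration. A generic sketch (not the paper's curvature-aware filtering procedure):

```python
import numpy as np

def curvature_tuned_gd(grad_f, hvp, x, steps=100):
    """Gradient descent whose step size is retuned every iterate using one
    Hessian-vector product: eta = (g @ g) / (g @ H g), the exact minimizer
    of the local quadratic model along -g. (Generic; not the paper's method.)
    """
    for _ in range(steps):
        g = grad_f(x)
        eta = (g @ g) / (g @ hvp(x, g))   # curvature-aware step size
        x = x - eta * g
    return x

# Ill-conditioned quadratic f(x) = 0.5 * x^T A x, minimized at the origin.
A = np.diag([1.0, 10.0])
x = curvature_tuned_gd(lambda z: A @ z, lambda z, v: A @ v, np.array([1.0, 1.0]))
print(x)                                  # very close to zero
```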
- Accelerating Smooth Games by Manipulating Spectral Shapes [51.366219027745174]
We use matrix iteration theory to characterize acceleration in smooth games.
We describe gradient-based methods, such as extragradient, as transformations on the spectral shape.
We propose an optimal algorithm for bilinear games.
arXiv Detail & Related papers (2020-01-02T19:21:48Z)
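To see why the spectral-shape view matters, compare plain simultaneous gradient descent-ascent with extragradient on the bilinear game $f(x, y) = xy$: the look-ahead step of extragradient pulls the eigenvalues of the update operator inside the unit circle, so it converges where GDA spirals outward. A minimal sketch:

```python
import numpy as np

def F(z):
    """Simultaneous gradient field for f(x, y) = x * y (x min, y max)."""
    x, y = z
    return np.array([y, -x])

eta = 0.1
z_gda = np.array([1.0, 1.0])    # plain simultaneous descent-ascent
z_eg = np.array([1.0, 1.0])     # extragradient

for t in range(2000):
    z_gda = z_gda - eta * F(z_gda)        # spirals outward: |eigenvalue| > 1
    z_mid = z_eg - eta * F(z_eg)          # extragradient look-ahead half-step
    z_eg = z_eg - eta * F(z_mid)          # update with the look-ahead gradient

print(np.linalg.norm(z_gda))    # blows up (order 1e4)
print(np.linalg.norm(z_eg))     # contracts toward the equilibrium (order 1e-4)
```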
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.