LEAD: Min-Max Optimization from a Physical Perspective
- URL: http://arxiv.org/abs/2010.13846v4
- Date: Wed, 21 Jun 2023 17:15:09 GMT
- Title: LEAD: Min-Max Optimization from a Physical Perspective
- Authors: Reyhane Askari Hemmat, Amartya Mitra, Guillaume Lajoie, Ioannis
Mitliagkas
- Abstract summary: Adversarial formulations such as generative adversarial networks (GANs) have rekindled interest in two-player min-max games.
A central obstacle in the optimization of such games is the rotational dynamics that hinder their convergence.
We show that game optimization shares dynamic properties with particle systems subject to multiple forces, and one can leverage tools from physics to improve dynamics.
- Score: 11.808346987640855
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial formulations such as generative adversarial networks (GANs) have
rekindled interest in two-player min-max games. A central obstacle in the
optimization of such games is the rotational dynamics that hinder their
convergence. In this paper, we show that game optimization shares dynamic
properties with particle systems subject to multiple forces, and one can
leverage tools from physics to improve optimization dynamics. Inspired by the
physical framework, we propose LEAD, an optimizer for min-max games. Next,
using Lyapunov stability theory and spectral analysis, we study LEAD's
convergence properties in continuous and discrete time settings for a class of
quadratic min-max games to demonstrate linear convergence to the Nash
equilibrium. Finally, we empirically evaluate our method on synthetic setups
and CIFAR-10 image generation to demonstrate improvements in GAN training.
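To make the physical analogy concrete, below is a minimal sketch (not the authors' code) of a physics-inspired update on the bilinear game f(x, y) = x*y, combining momentum with a coupling term built from the mixed derivative; the coefficients are illustrative assumptions rather than values from the paper.

```python
# Illustrative sketch of a physics-inspired min-max update on f(x, y) = x * y,
# where plain simultaneous gradient descent-ascent rotates instead of
# converging. The update mixes momentum with a cross-derivative coupling term;
# eta, beta, alpha are illustrative choices, not values from the paper.
def physics_inspired_step(x, y, dx_prev, dy_prev, eta=0.1, beta=0.5, alpha=0.3):
    grad_x, grad_y = y, x        # gradients of f(x, y) = x * y
    cross = 1.0                  # mixed second derivative d^2 f / (dx dy)
    dx = beta * dx_prev - eta * grad_x - alpha * cross * dy_prev
    dy = beta * dy_prev + eta * grad_y + alpha * cross * dx_prev
    return x + dx, y + dy, dx, dy

x, y, dx, dy = 1.0, 1.0, 0.0, 0.0
for _ in range(200):
    x, y, dx, dy = physics_inspired_step(x, y, dx, dy)
print(x, y)  # with these coefficients the iterates contract toward the Nash equilibrium (0, 0)
```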
Related papers
- Conformal Symplectic Optimization for Stable Reinforcement Learning [21.491621524500736]
By utilizing relativistic kinetic energy, RAD incorporates principles from special relativity and limits parameter updates below a finite speed, effectively mitigating abnormal gradient influences.
Notably, RAD achieves up to a 155.1% performance improvement, showcasing its efficacy in training agents on Atari games.
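A hedged sketch of the bounded-speed idea only (not the RAD algorithm itself): using a relativistic velocity caps the norm of every parameter update by a constant c, however large the accumulated momentum grows. The function name and constants below are illustrative assumptions.

```python
import numpy as np

# Sketch of a relativistic-momentum update: the velocity c * p / sqrt(|p|^2 + c^2)
# always has norm strictly below c, so a single step can never exceed that speed.
def relativistic_momentum_step(theta, p, grad, lr=0.1, beta=0.9, c=1.0):
    p = beta * p - lr * grad                              # accumulate momentum as usual
    velocity = c * p / np.sqrt(np.sum(p ** 2) + c ** 2)   # ||velocity|| < c
    return theta + velocity, p

theta, p = np.array([100.0]), np.zeros(1)
theta_next, p = relativistic_momentum_step(theta, p, grad=2 * theta)
print(np.abs(theta_next - theta))  # stays below c = 1.0 even for this huge gradient
```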
arXiv Detail & Related papers (2024-12-03T09:07:31Z) - Track Everything Everywhere Fast and Robustly [46.362962852140015]
We propose a novel test-time optimization approach for efficiently tracking any pixel in a video.
We introduce a novel invertible deformation network, CaDeX++, which factorizes the function representation into a local spatial-temporal feature grid.
Our experiments demonstrate a substantial improvement in training speed (more than 10 times faster), robustness, and accuracy in tracking over the SoTA optimization-based method OmniMotion.
arXiv Detail & Related papers (2024-03-26T17:58:22Z) - SDEs for Minimax Optimization [11.290653315174382]
In this paper, we pioneer the use of stochastic differential equations (SDEs) to analyze and compare minimax optimizers.
Our SDE models for Gradient Descent-Ascent, Extragradient, and Hamiltonian Gradient Descent are provable approximations of their algorithmic counterparts.
This perspective also allows for a unified and simplified analysis strategy based on the principles of Ito calculus.
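As a rough illustration of the SDE viewpoint (an assumption-heavy sketch, not the paper's models), the snippet below runs an Euler-Maruyama simulation of an SDE whose drift is the game vector field of the bilinear game f(x, y) = x*y and whose additive noise stands in for stochastic gradients; the sqrt(lr)*sigma scaling is an illustrative choice.

```python
import numpy as np

# Euler-Maruyama simulation of a toy SDE model of stochastic gradient
# descent-ascent on f(x, y) = x * y. Drift = game vector field, additive
# Gaussian noise = stand-in for gradient noise (scaling is illustrative).
def simulate_sgda_sde(steps=2000, dt=0.01, lr=0.01, sigma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x, y = 1.0, 1.0
    for _ in range(steps):
        gx, gy = y, x                                   # grad_x f, grad_y f
        dWx, dWy = rng.normal(0.0, np.sqrt(dt), size=2)
        x, y = (x - gx * dt + np.sqrt(lr) * sigma * dWx,
                y + gy * dt + np.sqrt(lr) * sigma * dWy)
    return x, y  # the noiseless drift is a pure rotation about the equilibrium (0, 0)
```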
arXiv Detail & Related papers (2024-02-19T20:18:29Z) - Energy-based Potential Games for Joint Motion Forecasting and Control [0.125828876338076]
This work uses game theory as a mathematical framework to address interaction modeling in motion forecasting and control.
We establish a connection between differential games, optimal control, and energy-based models, demonstrating how existing approaches can be unified under our proposed Energy-based Potential Game formulation.
We introduce a new end-to-end learning application that combines neural networks for game-parameter inference with a differentiable game-theoretic optimization layer, acting as an inductive bias.
arXiv Detail & Related papers (2023-12-04T11:30:26Z) - On the Convergence of No-Regret Learning Dynamics in Time-Varying Games [89.96815099996132]
We characterize the convergence of optimistic gradient descent (OGD) in time-varying games.
Our framework yields sharp convergence bounds for the equilibrium gap of OGD in zero-sum games.
We also provide new insights on dynamic regret guarantees in static games.
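For reference, a minimal sketch of optimistic gradient descent-ascent (OGDA) on a static bilinear zero-sum game; the game and step size are illustrative choices, not taken from the paper.

```python
# OGDA on f(x, y) = x * y: each player uses an "optimistic" gradient
# 2 * g_t - g_{t-1}, which damps the rotation that plain descent-ascent shows.
def ogda(steps=500, eta=0.2):
    x, y = 1.0, 1.0
    gx_prev, gy_prev = y, x              # previous gradients (naive initialization)
    for _ in range(steps):
        gx, gy = y, x                    # grad_x f and grad_y f at the current point
        x, y = x - eta * (2 * gx - gx_prev), y + eta * (2 * gy - gy_prev)
        gx_prev, gy_prev = gx, gy
    return x, y

print(ogda())  # the iterates approach the equilibrium (0, 0)
```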
arXiv Detail & Related papers (2023-01-26T17:25:45Z) - Nesterov Meets Optimism: Rate-Optimal Separable Minimax Optimization [108.35402316802765]
We propose a new first-order optimization algorithm -- Accelerated Gradient-Optimistic Gradient (AG-OG) Ascent.
We show that AG-OG achieves the optimal convergence rate (up to a constant) for a variety of settings.
We further extend our algorithm and achieve the optimal convergence rate in both the bi-SC-SC and bi-C-SC settings.
arXiv Detail & Related papers (2022-10-31T17:59:29Z) - Gradient-Based Trajectory Optimization With Learned Dynamics [80.41791191022139]
We use machine learning techniques to learn a differentiable dynamics model of the system from data.
We show that a neural network can model highly nonlinear behaviors accurately for large time horizons.
In our hardware experiments, we demonstrate that our learned model can represent complex dynamics for both the Spot robot and a radio-controlled (RC) car.
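A compact sketch of the general recipe, assuming a PyTorch-style differentiable dynamics model (the toy system, network size, and cost are illustrative assumptions, and fitting the model to transition data is omitted for brevity):

```python
import torch

# Differentiable-rollout trajectory optimization with a learned dynamics model.
# The model predicts the state change from (state, action); the action sequence
# is optimized by backpropagating a trajectory cost through the rollout.
model = torch.nn.Sequential(torch.nn.Linear(3, 32), torch.nn.Tanh(), torch.nn.Linear(32, 2))

def rollout_cost(state0, actions, goal):
    state, cost = state0, 0.0
    for a in actions:                                    # unroll the learned model
        state = state + model(torch.cat([state, a]))     # predicted next state
        cost = cost + torch.sum((state - goal) ** 2) + 0.01 * torch.sum(a ** 2)
    return cost

actions = torch.zeros(20, 1, requires_grad=True)          # 20-step action sequence
opt = torch.optim.Adam([actions], lr=0.05)
for _ in range(100):                                       # gradient-based trajectory optimization
    opt.zero_grad()
    loss = rollout_cost(torch.zeros(2), actions, torch.tensor([1.0, 0.0]))
    loss.backward()
    opt.step()
```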
arXiv Detail & Related papers (2022-04-09T22:07:34Z) - Last-Iterate Convergence of Saddle-Point Optimizers via High-Resolution
Differential Equations [83.3201889218775]
Several widely-used first-order saddle-point optimization methods yield an identical continuous-time ordinary differential equation (ODE) when derived naively.
However, the convergence properties of these methods are qualitatively different, even on simple bilinear games.
We adopt a framework studied in fluid dynamics to design differential equation models for several saddle-point optimization methods.
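A small sketch of the phenomenon this entry points to: on f(x, y) = x*y, gradient descent-ascent (GDA) and extragradient (EG) share the same naive continuous-time limit, yet their discrete iterates behave in opposite ways (the step size below is an illustrative choice).

```python
# GDA vs. extragradient on f(x, y) = x * y with the same step size.
def gda(x, y, eta=0.1, steps=1000):
    for _ in range(steps):
        x, y = x - eta * y, y + eta * x                  # simultaneous descent-ascent
    return x, y

def extragradient(x, y, eta=0.1, steps=1000):
    for _ in range(steps):
        xh, yh = x - eta * y, y + eta * x                # extrapolation step
        x, y = x - eta * yh, y + eta * xh                # update with extrapolated gradients
    return x, y

print(gda(1.0, 1.0))            # spirals outward: the iterates diverge
print(extragradient(1.0, 1.0))  # contracts toward the equilibrium (0, 0)
```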
arXiv Detail & Related papers (2021-12-27T18:31:34Z) - Constants of Motion: The Antidote to Chaos in Optimization and Game
Dynamics [36.09131227448527]
Several recent works in online optimization and game dynamics have established strong negative complexity results.
These results motivate the following question: Which methodological tools can guarantee the regularity of such dynamics?
We show that proving the existence of invariant functions, i.e., constants of motion, is a fundamental contribution in this direction.
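As a concrete example of such an invariant, the continuous-time gradient descent-ascent flow on the bilinear game f(x, y) = x*y conserves x^2 + y^2, so trajectories stay on a circle instead of drifting; the small-step Euler check below is only an illustrative sketch, not the paper's construction.

```python
# For dx/dt = -y, dy/dt = x (the GDA flow of f(x, y) = x * y),
# d/dt (x^2 + y^2) = -2xy + 2yx = 0, so x^2 + y^2 is a constant of motion.
x, y, dt = 1.0, 0.0, 1e-4
for _ in range(100_000):
    x, y = x - dt * y, y + dt * x     # explicit Euler step of the flow
print(x ** 2 + y ** 2)                # stays near its initial value 1.0 (up to O(dt) drift)
```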
arXiv Detail & Related papers (2021-09-08T23:37:13Z) - PlasticineLab: A Soft-Body Manipulation Benchmark with Differentiable
Physics [89.81550748680245]
We introduce a new differentiable physics benchmark called PlasticineLab.
In each task, the agent uses manipulators to deform the plasticine into the desired configuration.
We evaluate several existing reinforcement learning (RL) methods and gradient-based methods on this benchmark.
arXiv Detail & Related papers (2021-04-07T17:59:23Z) - On the Suboptimality of Negative Momentum for Minimax Optimization [9.400440302623839]
We show that negative momentum accelerates convergence of game dynamics locally, though with a suboptimal rate.
This is the first work that provides an explicit convergence rate for momentum in this setting.
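A minimal sketch of alternating gradient descent-ascent with negative momentum on the bilinear game f(x, y) = x*y; the step size and momentum values are illustrative choices rather than the constants analyzed in the paper.

```python
# Alternating descent-ascent with (negative) heavy-ball momentum on f(x, y) = x * y.
def alternating_gda_momentum(steps=500, eta=0.5, beta=-0.5):
    x, y = 1.0, 1.0
    x_prev, y_prev = x, y
    for _ in range(steps):
        x_new = x - eta * y + beta * (x - x_prev)        # player 1 (descent) step
        y_new = y + eta * x_new + beta * (y - y_prev)    # player 2 (ascent) sees the updated x
        x_prev, y_prev, x, y = x, y, x_new, y_new
    return x, y

print(alternating_gda_momentum())            # contracts toward (0, 0)
print(alternating_gda_momentum(beta=0.0))    # without momentum the iterates only cycle
```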
arXiv Detail & Related papers (2020-08-17T16:34:53Z)