Stochastic Modified Flows, Mean-Field Limits and Dynamics of Stochastic
Gradient Descent
- URL: http://arxiv.org/abs/2302.07125v1
- Date: Tue, 14 Feb 2023 15:33:59 GMT
- Title: Stochastic Modified Flows, Mean-Field Limits and Dynamics of Stochastic
Gradient Descent
- Authors: Benjamin Gess, Sebastian Kassing, Vitalii Konarovskyi
- Abstract summary: We propose new limiting dynamics for stochastic gradient descent in the small learning rate regime called stochastic modified flows.
These SDEs are driven by a cylindrical Brownian motion and improve the so-called stochastic modified equations by having regular diffusion coefficients and by matching the multi-point statistics.
- Score: 1.2031796234206138
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose new limiting dynamics for stochastic gradient descent in the small
learning rate regime called stochastic modified flows. These SDEs are driven by
a cylindrical Brownian motion and improve the so-called stochastic modified
equations by having regular diffusion coefficients and by matching the
multi-point statistics. As a second contribution, we introduce distribution
dependent stochastic modified flows which we prove to describe the fluctuating
limiting dynamics of stochastic gradient descent in the small learning rate -
infinite width scaling regime.
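The small-learning-rate regime the abstract describes can be illustrated numerically. The sketch below (illustrative only, not the paper's construction) runs SGD with additive gradient noise on a one-dimensional quadratic loss and checks that the iterates track the deterministic gradient flow with small fluctuations around it; all names and parameters are made up for the example.

```python
import numpy as np

# Minimal sketch: SGD with small learning rate eta on the 1-D quadratic loss
# f(x) = x^2 / 2, with additive gradient noise of scale sigma. In the
# small-eta limit the iterates track the gradient flow dx/dt = -x, with
# O(sqrt(eta)) fluctuations around it -- the regime the limiting SDEs describe.
rng = np.random.default_rng(0)
eta, sigma, steps = 1e-3, 1.0, 5000
x = np.full(1000, 1.0)                               # 1000 independent SGD runs from x0 = 1
for _ in range(steps):
    grad = x + sigma * rng.standard_normal(x.shape)  # noisy gradient estimate
    x -= eta * grad

t = eta * steps                                      # continuous time covered
flow = np.exp(-t)                                    # gradient-flow solution x(t) = e^{-t}
print(abs(x.mean() - flow))                          # mean of SGD tracks the flow
print(x.std())                                       # fluctuations are O(sqrt(eta))
```

The averaged trajectory follows the flow closely, while the spread across runs stays on the scale of sqrt(eta), which is the size of the correction terms that stochastic modified equations and flows are designed to capture.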
Related papers
- Limit Theorems for Stochastic Gradient Descent with Infinite Variance [47.87144151929621]
We show that the stochastic gradient descent algorithm can be characterized via the stationary distribution of a suitably defined Ornstein-Uhlenbeck process driven by an appropriate Lévy process.
We also explore the applications of these results in linear regression and logistic regression models.
arXiv Detail & Related papers (2024-10-21T09:39:10Z) - Generalizing Stochastic Smoothing for Differentiation and Gradient Estimation [59.86921150579892]
We deal with the problem of gradient estimation for differentiable relaxations of algorithms, operators, simulators, and other non-differentiable functions.
We develop variance reduction strategies for differentiable sorting and ranking, differentiable shortest-paths on graphs, differentiable rendering for pose estimation, as well as differentiable cryo-ET simulations.
arXiv Detail & Related papers (2024-10-10T17:10:00Z) - Stochastic Gradient Flow Dynamics of Test Risk and its Exact Solution for Weak Features [8.645858565518155]
We provide a formula for computing the difference between the test risk curves of pure gradient flow and stochastic gradient flow.
We explicitly compute the corrections brought about by the added term in the dynamics.
The analytical results are compared to simulations of discrete-time gradient descent and show good agreement.
arXiv Detail & Related papers (2024-02-12T13:11:11Z) - Convergence of mean-field Langevin dynamics: Time and space
discretization, stochastic gradient, and variance reduction [49.66486092259376]
The mean-field Langevin dynamics (MFLD) is a nonlinear generalization of the Langevin dynamics that incorporates a distribution-dependent drift.
Recent works have shown that MFLD globally minimizes an entropy-regularized convex functional in the space of measures.
We provide a framework to prove a uniform-in-time propagation of chaos for MFLD that takes into account the errors due to finite-particle approximation, time-discretization, and gradient approximation.
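A finite-particle approximation of this kind of distribution-dependent dynamics can be sketched in a few lines. The example below is illustrative and not the paper's setting: each particle follows a Langevin update whose drift depends on the empirical mean of all particles, and the system settles into a Gaussian stationary law; every constant here is an assumption chosen for the demo.

```python
import numpy as np

# Sketch of a finite-particle approximation to a mean-field Langevin dynamics:
# each particle feels a confining gradient plus a drift depending on the
# empirical mean (the distribution-dependent part), plus Gaussian noise.
rng = np.random.default_rng(1)
n, dt, steps, temp = 2000, 1e-2, 4000, 0.1
x = rng.standard_normal(n) + 3.0             # particle positions, shifted start
for _ in range(steps):
    drift = -(x - 0.5 * x.mean())            # distribution-dependent drift
    x += dt * drift + np.sqrt(2 * temp * dt) * rng.standard_normal(n)

# In the mean-field limit the empirical mean contracts to 0 (rate 0.5) and
# each centered particle is an OU process with unit rate, so the stationary
# variance is ~ temp = 0.1.
print(x.mean(), x.var())
```

Propagation-of-chaos results of the kind summarized above quantify how close this n-particle system stays to its mean-field limit, uniformly in time, including the time-discretization error introduced by the Euler step.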
arXiv Detail & Related papers (2023-06-12T16:28:11Z) - A Geometric Perspective on Diffusion Models [57.27857591493788]
We inspect the ODE-based sampling of a popular variance-exploding SDE.
We establish a theoretical relationship between the optimal ODE-based sampling and the classic mean-shift (mode-seeking) algorithm.
arXiv Detail & Related papers (2023-05-31T15:33:16Z) - Reservoir Computing with Error Correction: Long-term Behaviors of
Stochastic Dynamical Systems [5.815325960286111]
We propose a data-driven framework combining Reservoir Computing and Normalizing Flow to study this issue.
We verify the effectiveness of the proposed framework in several experiments, including the Van der Pol oscillator, a simplified El Niño-Southern Oscillation model, and the Lorenz system.
arXiv Detail & Related papers (2023-05-01T05:50:17Z) - Continuous-time stochastic gradient descent for optimizing over the
stationary distribution of stochastic differential equations [7.65995376636176]
We develop a new continuous-time stochastic gradient descent method for optimizing over the stationary distribution of stochastic differential equation (SDE) models.
We rigorously prove convergence of the online forward propagation algorithm for linear SDE models and present its numerical results for nonlinear examples.
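The idea of optimizing over a stationary distribution can be illustrated with a toy sketch. The example below is not the paper's online forward propagation algorithm; it tunes the drift parameter theta of an Ornstein-Uhlenbeck model so that the stationary variance 1/theta matches a target, using stochastic gradient steps computed from samples of the simulated process itself. All parameter values are assumptions for the demo.

```python
import numpy as np

# Toy sketch: for the OU model dX = -theta*X dt + sqrt(2) dW the stationary
# variance is 1/theta. Tune theta by stochastic gradient descent on the
# squared mismatch between the empirical second moment and a target variance.
rng = np.random.default_rng(3)
n, dt, lr = 200, 1e-2, 1e-3             # particles, time step, learning rate
theta, target_var = 2.0, 1.0            # start away from the optimum theta = 1
x = rng.standard_normal(n)
for _ in range(20_000):
    x += -theta * x * dt + np.sqrt(2 * dt) * rng.standard_normal(n)
    # chain rule through the stationary-law formula Var = 1/theta,
    # with the empirical second moment standing in for the true variance
    grad = 2 * ((x * x).mean() - target_var) * (-1.0 / theta**2)
    theta -= lr * grad
print(theta)    # drifts toward 1.0, where the stationary variance hits target
```

Because gradients are evaluated on the running (not yet stationary) process, this is exactly the kind of online scheme whose convergence such papers analyze rigorously for linear SDE models.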
arXiv Detail & Related papers (2022-02-14T11:45:22Z) - Noise and Fluctuation of Finite Learning Rate Stochastic Gradient
Descent [3.0079490585515343]
Stochastic gradient descent (SGD) is relatively well understood in the vanishing learning rate regime.
We propose to study the basic properties of SGD and its variants in the non-vanishing learning rate regime.
arXiv Detail & Related papers (2020-12-07T12:31:43Z) - Dynamical mean-field theory for stochastic gradient descent in Gaussian
mixture classification [25.898873960635534]
We analyze in closed form the learning dynamics of stochastic gradient descent (SGD) for a single-layer neural network classifying a high-dimensional Gaussian mixture.
We define a prototype process which can be extended to a continuous-time gradient flow.
In the full-batch limit, we recover the standard gradient flow.
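The full-batch limit mentioned above is easy to see empirically: on a fixed dataset the minibatch gradient fluctuates around the full gradient, and the fluctuation shrinks as the batch grows, so SGD degenerates to plain gradient descent (and, for small steps, to gradient flow). The sketch below uses a made-up quadratic loss; the function name and constants are assumptions for the demo.

```python
import numpy as np

# Sketch of the full-batch limit: for the quadratic loss mean((x - d)^2)/2
# over a fixed dataset d, the minibatch gradient x - mean(batch) is an
# unbiased but noisy estimate of the full gradient x - mean(data), and its
# standard deviation shrinks like 1/sqrt(batch_size).
rng = np.random.default_rng(2)
data = rng.standard_normal(10_000)
x = 2.0
full_grad = x - data.mean()                  # full-batch gradient

def sgd_grad_std(batch, trials=2000):
    """Std of the minibatch gradient estimate across random batches."""
    g = np.array([x - rng.choice(data, batch).mean() for _ in range(trials)])
    return g.std()

print(full_grad)
print(sgd_grad_std(10), sgd_grad_std(1000))  # noise shrinks as batch grows
```

At batch size equal to the dataset (sampling without replacement), the noise vanishes entirely and every SGD step coincides with the deterministic gradient step.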
arXiv Detail & Related papers (2020-06-10T22:49:41Z) - Stochastic Normalizing Flows [52.92110730286403]
We introduce stochastic normalizing flows for maximum likelihood estimation and variational inference (VI) using stochastic differential equations (SDEs).
Using the theory of rough paths, the underlying Brownian motion is treated as a latent variable and approximated, enabling efficient training of neural SDEs.
These SDEs can be used for constructing efficient Markov chains to sample from the underlying distribution of a given dataset.
arXiv Detail & Related papers (2020-02-21T20:47:55Z) - A Near-Optimal Gradient Flow for Learning Neural Energy-Based Models [93.24030378630175]
We propose a novel numerical scheme to optimize the gradient flows for learning energy-based models (EBMs).
We derive a second-order Wasserstein gradient flow of the global relative entropy from the Fokker-Planck equation.
Compared with existing schemes, the Wasserstein gradient flow is a smoother and near-optimal numerical scheme to approximate real data densities.
arXiv Detail & Related papers (2019-10-31T02:26:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.