Self-Consistency of the Fokker-Planck Equation
- URL: http://arxiv.org/abs/2206.00860v1
- Date: Thu, 2 Jun 2022 03:44:23 GMT
- Title: Self-Consistency of the Fokker-Planck Equation
- Authors: Zebang Shen, Zhenfu Wang, Satyen Kale, Alejandro Ribeiro, Amin Karbasi,
Hamed Hassani
- Abstract summary: The Fokker-Planck equation governs the density evolution of the Itô process.
The ground-truth velocity field can be shown to be the solution of a fixed-point equation.
In this paper, we exploit this concept to design a potential function of the hypothesis velocity fields.
- Score: 117.17004717792344
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Fokker-Planck equation (FPE) is the partial differential equation that
governs the density evolution of the Itô process and is of great importance
to the literature of statistical physics and machine learning. The FPE can be
regarded as a continuity equation where the change of the density is completely
determined by a time varying velocity field. Importantly, this velocity field
also depends on the current density function. As a result, the ground-truth
velocity field can be shown to be the solution of a fixed-point equation, a
property that we call self-consistency. In this paper, we exploit this concept
to design a potential function of the hypothesis velocity fields, and prove
that, if such a function diminishes to zero during the training procedure, the
trajectory of the densities generated by the hypothesis velocity fields
converges to the solution of the FPE in the Wasserstein-2 sense. The proposed
potential function is amenable to neural-network based parameterization as the
stochastic gradient with respect to the parameter can be efficiently computed.
Once a parameterized model, such as a Neural Ordinary Differential Equation, is
trained, we can generate the entire trajectory of the solution to the FPE.
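The self-consistency property above can be illustrated with a toy computation. The sketch below is illustrative only (the names `drift`, `stationary_score`, and `self_consistency_residual` are hypothetical, not from the paper): for a 1-D Ornstein-Uhlenbeck process, the velocity field induced by the stationary density vanishes identically, so the zero field is a fixed point of the map while the drift alone is not.

```python
import numpy as np

# 1-D Ornstein-Uhlenbeck process dX = -X dt + sqrt(2*D) dW.  Its FPE is a
# continuity equation with velocity field
#     v(x, t) = b(x) - D * d/dx log p(x, t),   with drift b(x) = -x,
# and self-consistency says the true v is a fixed point of this map.

D = 0.5  # diffusion coefficient

def drift(x):
    return -x

def stationary_score(x):
    # log p(x) = -x**2 / (2*D) + const  =>  d/dx log p = -x / D
    return -x / D

def velocity(x):
    # Velocity field induced by the stationary density (identically zero).
    return drift(x) - D * stationary_score(x)

def self_consistency_residual(hypothesis_v, xs):
    # Mean squared mismatch between a hypothesis field and the field
    # the continuity equation demands at these sample points.
    return np.mean((hypothesis_v(xs) - velocity(xs)) ** 2)

xs = np.linspace(-3.0, 3.0, 101)
print(self_consistency_residual(lambda x: np.zeros_like(x), xs))  # 0.0
print(self_consistency_residual(drift, xs) > 0.0)                 # True
```

A training procedure in the paper's spirit would drive such a residual toward zero for a neural hypothesis field rather than evaluate it against a known density.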
Related papers
- A score-based particle method for homogeneous Landau equation [7.600098227248821]
We propose a novel score-based particle method for solving the Landau equation in plasmas.
Our primary innovation lies in recognizing that the equation's nonlinearity is in the form of the score function.
We provide a theoretical estimate by demonstrating that the KL divergence between our approximation and the true solution can be effectively controlled by the score-matching loss.
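The score-matching loss invoked in this bound can be sketched generically (this is textbook Hyvärinen score matching, not the paper's particle scheme; all names are illustrative):

```python
import numpy as np

# Hyvarinen's identity lets us fit a model score s_theta(x) ~ d/dx log p(x)
# from samples alone, by minimizing E[(1/2) * s_theta(x)**2 + s_theta'(x)]
# (equal to the squared score-matching error up to a constant).  For the
# linear model s_theta(x) = theta * x the loss is
#     J(theta) = (1/2) * theta**2 * E[x**2] + theta,
# minimized at theta = -1 / E[x**2]; standard normal data has true score
# slope -1, which the minimizer recovers.

rng = np.random.default_rng(0)
x = rng.standard_normal(50_000)

def hyvarinen_loss(theta):
    # (1/2) * s(x)**2 + s'(x)  with  s(x) = theta * x,  s'(x) = theta
    return np.mean(0.5 * (theta * x) ** 2 + theta)

theta_star = -1.0 / np.mean(x ** 2)  # analytic minimizer, close to -1
print(theta_star)
print(hyvarinen_loss(theta_star) < hyvarinen_loss(0.0))  # True
```

The point of the identity is that no access to the true score is needed, which is what makes score-based particle methods practical.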
arXiv Detail & Related papers (2024-05-08T16:22:47Z)
- Real-time dynamics of false vacuum decay [49.1574468325115]
We investigate false vacuum decay of a relativistic scalar field in the metastable minimum of an asymmetric double-well potential.
We employ the non-perturbative framework of the two-particle irreducible (2PI) quantum effective action at next-to-leading order in a large-N expansion.
arXiv Detail & Related papers (2023-10-06T12:44:48Z)
- PINF: Continuous Normalizing Flows for Physics-Constrained Deep Learning [8.000355537589224]
In this paper, we introduce Physics-Informed Normalizing Flows (PINF), a novel extension of continuous normalizing flows.
Our method, which is mesh-free and causality-free, can efficiently solve high dimensional time-dependent and steady-state Fokker-Planck equations.
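The identity underlying such mesh-free flow solvers is the instantaneous change of variables: along dx/dt = f(x, t) the log-density satisfies d log p/dt = -div f. A minimal check of this identity (illustrative, not PINF's implementation) against an analytic Gaussian solution:

```python
import numpy as np

# Along trajectories of dx/dt = f(x, t) the log-density obeys
#     d/dt log p(x(t), t) = -div f(x(t), t).
# For the contracting field f(x) = -x the divergence is -1 everywhere, so
# log p(x(t), t) = log p0(x0) + t exactly; compare with the analytic
# pushforward of a standard normal under x -> x * exp(-t).

def flow_log_density(x0, t, sigma0=1.0):
    # Initial Gaussian log-density transported by f(x) = -x.
    log_p0 = -0.5 * x0**2 / sigma0**2 - 0.5 * np.log(2 * np.pi * sigma0**2)
    return log_p0 + t  # integral of -div f over [0, t] is +t

def analytic_log_density(x, t, sigma0=1.0):
    # N(0, sigma0**2) pushed forward through x -> x * exp(-t).
    var = (sigma0 * np.exp(-t)) ** 2
    return -0.5 * x**2 / var - 0.5 * np.log(2 * np.pi * var)

x0, t = 0.8, 0.5
x_t = x0 * np.exp(-t)  # exact solution of dx/dt = -x
print(np.isclose(flow_log_density(x0, t), analytic_log_density(x_t, t)))  # True
```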
arXiv Detail & Related papers (2023-09-26T15:38:57Z)
- FP-IRL: Fokker-Planck-based Inverse Reinforcement Learning -- A Physics-Constrained Approach to Markov Decision Processes [0.5735035463793008]
Inverse Reinforcement Learning (IRL) is a technique for revealing the rationale underlying the behavior of autonomous agents.
IRL seeks to estimate the unknown reward function of a Markov decision process (MDP) from observed agent trajectories.
We create a novel IRL algorithm, FP-IRL, which can simultaneously infer the transition and reward functions using only observed trajectories.
arXiv Detail & Related papers (2023-06-17T18:28:03Z)
- Self-Consistent Velocity Matching of Probability Flows [22.2542921090435]
We present a discretization-free scalable framework for solving a class of partial differential equations (PDEs).
The main observation is that the time-varying velocity field of the PDE solution needs to be self-consistent.
We use an iterative formulation with a biased gradient estimator that bypasses significant computational obstacles while retaining strong empirical performance.
arXiv Detail & Related papers (2023-01-31T16:17:18Z)
- Forecasting subcritical cylinder wakes with Fourier Neural Operators [58.68996255635669]
We apply a state-of-the-art operator learning technique to forecast the temporal evolution of experimentally measured velocity fields.
We find that FNOs are capable of accurately predicting the evolution of experimental velocity fields throughout the range of Reynolds numbers tested.
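The core of an FNO is its spectral convolution layer. A minimal 1-D NumPy sketch (illustrative only; with the complex weights fixed to one the layer reduces to an ideal low-pass filter, whereas a trained FNO learns those weights):

```python
import numpy as np

# Spectral convolution: FFT the input, keep only the lowest `modes`
# frequencies, multiply them by (learnable, complex) weights, inverse FFT.
# With the weights fixed to one the layer is an ideal low-pass filter.

def spectral_conv_1d(u, weights, modes):
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:modes] = weights[:modes] * u_hat[:modes]  # truncate and mix
    return np.fft.irfft(out_hat, n=len(u))

n = 128
grid = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = np.sin(grid) + 0.3 * np.sin(20.0 * grid)  # smooth part + high frequency
w = np.ones(n // 2 + 1, dtype=complex)        # stand-in for learned weights
smooth = spectral_conv_1d(u, w, modes=5)

# Frequencies below the cutoff pass through; sin(20x) is filtered out.
print(np.allclose(smooth, np.sin(grid), atol=1e-8))  # True
```

The mode truncation is what makes the layer resolution-independent: the same weights act on the leading Fourier coefficients regardless of grid size.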
arXiv Detail & Related papers (2023-01-19T20:04:36Z)
- The Limiting Dynamics of SGD: Modified Loss, Phase Space Oscillations, and Anomalous Diffusion [29.489737359897312]
We study the limiting dynamics of deep neural networks trained with stochastic gradient descent (SGD).
We show that the key ingredient driving these dynamics is not the original training loss, but rather the combination of a modified loss, which implicitly regularizes the velocity, and probability currents, which cause oscillations in phase space.
arXiv Detail & Related papers (2021-07-19T20:18:57Z)
- Large-Scale Wasserstein Gradient Flows [84.73670288608025]
We introduce a scalable scheme to approximate Wasserstein gradient flows.
Our approach relies on input convex neural networks (ICNNs) to discretize the JKO steps.
As a result, we can sample from the measure at each step of the gradient diffusion and compute its density.
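A JKO step can be sketched in the simplest particle setting (illustrative only, not the ICNN parameterization above): for the potential energy F(p) = E_p[V(x)], the step decouples into a proximal update per particle, which for the hypothetical choice V(x) = x²/2 has a closed form.

```python
import numpy as np

# One JKO step:  p_{k+1} = argmin_p  F(p) + W2(p, p_k)**2 / (2 * tau).
# With a particle approximation and potential energy F(p) = E_p[V(x)],
# the step decouples into a proximal update per particle,
#     x+ = argmin_y  V(y) + (y - x)**2 / (2 * tau),
# which for V(x) = x**2 / 2 is simply x / (1 + tau).

def jko_step(particles, tau):
    return particles / (1.0 + tau)  # proximal map of V(x) = x**2 / 2

rng = np.random.default_rng(1)
xs = rng.normal(loc=3.0, scale=1.0, size=5_000)
tau = 0.1
for _ in range(100):
    xs = jko_step(xs, tau)

# The discrete flow contracts the measure toward the minimizer of V at 0.
print(abs(xs.mean()) < 1e-2)  # True
```

ICNN-based schemes replace this closed-form proximal map with an optimization over a convex transport potential, which is what allows general energies F.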
arXiv Detail & Related papers (2021-06-01T19:21:48Z)
- Variational Transport: A Convergent Particle-Based Algorithm for Distributional Optimization [106.70006655990176]
A distributional optimization problem arises widely in machine learning and statistics.
We propose a novel particle-based algorithm, dubbed as variational transport, which approximately performs Wasserstein gradient descent.
We prove that when the objective function satisfies a functional version of the Polyak-Łojasiewicz (PL) condition (Polyak, 1963) together with smoothness conditions, variational transport converges linearly.
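The finite-dimensional analogue of this linear-convergence claim is easy to verify numerically (illustrative only): under the PL inequality ||∇f(x)||² ≥ 2μ(f(x) - f*), gradient descent contracts the optimality gap by a constant factor per iteration.

```python
# Under the PL inequality  ||grad f(x)||**2 >= 2 * mu * (f(x) - f*),
# gradient descent with a fixed step contracts the optimality gap by a
# constant factor each iteration.  f(x) = x**2 / 2 satisfies PL with
# mu = 1 and f* = 0; with step eta the gap shrinks by (1 - eta)**2.

f = lambda x: 0.5 * x * x
grad = lambda x: x

eta = 0.5
x, gaps = 4.0, []
for _ in range(20):
    gaps.append(f(x))       # optimality gap f(x_k) - f* (here f* = 0)
    x -= eta * grad(x)      # x_{k+1} = (1 - eta) * x_k

rates = [later / earlier for earlier, later in zip(gaps, gaps[1:])]
print(all(abs(r - 0.25) < 1e-12 for r in rates))  # True, rate (1 - eta)**2
```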
arXiv Detail & Related papers (2020-12-21T18:33:13Z)
- A Near-Optimal Gradient Flow for Learning Neural Energy-Based Models [93.24030378630175]
We propose a novel numerical scheme to optimize the gradient flows for learning energy-based models (EBMs).
We derive a second-order Wasserstein gradient flow of the global relative entropy from the Fokker-Planck equation.
Compared with existing schemes, Wasserstein gradient flow is a smoother and near-optimal numerical scheme to approximate real data densities.
arXiv Detail & Related papers (2019-10-31T02:26:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.