Solving Inverse Stochastic Problems from Discrete Particle Observations
Using the Fokker-Planck Equation and Physics-informed Neural Networks
- URL: http://arxiv.org/abs/2008.10653v1
- Date: Mon, 24 Aug 2020 18:51:56 GMT
- Title: Solving Inverse Stochastic Problems from Discrete Particle Observations
Using the Fokker-Planck Equation and Physics-informed Neural Networks
- Authors: Xiaoli Chen, Liu Yang, Jinqiao Duan, George Em Karniadakis
- Abstract summary: We develop a framework based on physics-informed neural networks (PINNs)
that uses a Kullback-Leibler divergence loss to connect stochastic samples with the Fokker-Planck equation, simultaneously learning the equation and inferring the multi-dimensional probability density function.
We present results for up to 5D, demonstrating that we can infer both the FP equation and the dynamics simultaneously at all times with high accuracy.
- Score: 7.6595660586147325
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Fokker-Planck (FP) equation governing the evolution of the probability
density function (PDF) is applicable to many disciplines but it requires
specification of the coefficients for each case, which can be functions of
space-time and not just constants, hence requiring the development of a
data-driven modeling approach. When the data available is directly on the PDF,
then there exist methods for inverse problems that can be employed to infer the
coefficients and thus determine the FP equation and subsequently obtain its
solution. Herein, we address a more realistic scenario, where only sparse data
are given on the particles' positions at a few time instants, which are not
sufficient to accurately construct directly the PDF even at those times from
existing methods, e.g., kernel estimation algorithms. To this end, we develop a
general framework based on physics-informed neural networks (PINNs) that
introduces a new loss function using the Kullback-Leibler divergence to connect
the stochastic samples with the FP equation, to simultaneously learn the
equation and infer the multi-dimensional PDF at all times. In particular, we
consider two types of inverse problems, type I where the FP equation is known
but the initial PDF is unknown, and type II in which, in addition to unknown
initial PDF, the drift and diffusion terms are also unknown. In both cases, we
investigate problems with either Brownian or Levy noise or a combination of
both. We demonstrate the new PINN framework in detail in the one-dimensional
case (1D) but we also provide results for up to 5D demonstrating that we can
infer both the FP equation and dynamics simultaneously at all times with high
accuracy using only very few discrete observations of the particles.
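The loss described in the abstract can be illustrated with a minimal sketch. This is not the authors' code: it assumes 1D Ornstein-Uhlenbeck dynamics dX = -theta*X dt + sigma dW, replaces the paper's neural-network PDF with the exact Gaussian ansatz, substitutes finite differences for automatic differentiation and a grid search for gradient-based training, and uses a negative log-likelihood surrogate for the KL data term. All names and parameter values are illustrative.

```python
import math
import random

def gauss_pdf(x, m, v):
    return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

def model_pdf(x, t, theta, sigma, m0=1.0, v0=0.25):
    # Exact Gaussian solution of the OU Fokker-Planck equation from X0 ~ N(m0, v0).
    m = m0 * math.exp(-theta * t)
    v = v0 * math.exp(-2 * theta * t) + sigma ** 2 / (2 * theta) * (1 - math.exp(-2 * theta * t))
    return gauss_pdf(x, m, v)

def fp_residual(theta, sigma, xs, t, h=1e-4):
    # FP equation for OU: p_t = theta*(x*p)_x + (sigma^2/2)*p_xx, checked here by
    # central finite differences (a real PINN would use automatic differentiation).
    # With this self-consistent ansatz the residual is ~0; in the paper it instead
    # constrains a free neural-network PDF.
    total = 0.0
    for x in xs:
        p_t = (model_pdf(x, t + h, theta, sigma) - model_pdf(x, t - h, theta, sigma)) / (2 * h)
        drift = theta * ((x + h) * model_pdf(x + h, t, theta, sigma)
                         - (x - h) * model_pdf(x - h, t, theta, sigma)) / (2 * h)
        p_xx = (model_pdf(x + h, t, theta, sigma) - 2 * model_pdf(x, t, theta, sigma)
                + model_pdf(x - h, t, theta, sigma)) / h ** 2
        total += (p_t - drift - 0.5 * sigma ** 2 * p_xx) ** 2
    return total / len(xs)

def kl_data_term(samples, t, theta, sigma):
    # Monte Carlo surrogate for KL(p_data || p_model): up to an additive constant,
    # the negative mean log-likelihood of the observed particle positions.
    return -sum(math.log(model_pdf(x, t, theta, sigma) + 1e-300) for x in samples) / len(samples)

# Simulate particle observations at one time instant from the true process.
random.seed(0)
theta_true, sigma_true, t_obs, dt = 1.0, 0.5, 0.6, 0.01
samples = []
for _ in range(1000):
    x = random.gauss(1.0, 0.5)                  # X0 ~ N(1, 0.25)
    for _ in range(int(round(t_obs / dt))):     # Euler-Maruyama steps
        x += -theta_true * x * dt + sigma_true * math.sqrt(dt) * random.gauss(0.0, 1.0)
    samples.append(x)

# Grid search over candidate coefficients; total loss = FP residual + KL term.
xs_test = [-2.0 + 4.0 * i / 20 for i in range(21)]
grid = [(th, sg) for th in (0.5, 1.0, 1.5) for sg in (0.25, 0.5, 0.75)]
best = min(grid, key=lambda p: fp_residual(p[0], p[1], xs_test, t_obs)
                               + kl_data_term(samples, t_obs, p[0], p[1]))
print("inferred (theta, sigma):", best)
```

The grid search recovers the true drift and diffusion coefficients (theta = 1.0, sigma = 0.5) from the particle samples alone; in the paper the same two-term loss trains a neural network representing the PDF at all times and in up to five dimensions.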
Related papers
- Adaptive Probability Flow Residual Minimization for High-Dimensional Fokker-Planck Equations [14.22534820071447]
Solving high-dimensional Fokker-Planck equations is a challenge in computational physics and dynamics. Existing deep learning approaches, such as Physics-Informed Neural Networks, face computational challenges as dimensionality increases. We propose the Adaptive Probability Flow Residual Minimization (A-PFRM) method.
arXiv Detail & Related papers (2025-12-22T09:31:31Z)
- Latent Schrodinger Bridge: Prompting Latent Diffusion for Fast Unpaired Image-to-Image Translation [58.19676004192321]
Diffusion models (DMs), which enable both image generation from noise and inversion from data, have inspired powerful unpaired image-to-image (I2I) translation algorithms.
We tackle this problem with Schrodinger Bridges (SBs), which are stochastic differential equations (SDEs) between distributions with minimal transport cost.
Inspired by this observation, we propose Latent Schrodinger Bridges (LSBs) that approximate the SB ODE via pre-trained Stable Diffusion.
We demonstrate that our algorithm successfully conducts competitive I2I translation in an unsupervised setting at only a fraction of the cost required by previous DM-based methods.
arXiv Detail & Related papers (2024-11-22T11:24:14Z) - Error Bounds for Physics-Informed Neural Networks in Fokker-Planck PDEs [11.729744197698718]
We show that physics-informed neural networks (PINNs) can be trained to approximate the probability density function (PDF) solution of Fokker-Planck PDEs.
Our main contribution is the analysis of PINN approximation error.
We derive a practical error bound that can be efficiently constructed with standard training methods.
arXiv Detail & Related papers (2024-10-28T23:25:55Z) - Straightness of Rectified Flow: A Theoretical Insight into Wasserstein Convergence [54.580605276017096]
Diffusion models have emerged as a powerful tool for image generation and denoising.
Recently, Liu et al. designed Rectified Flow (RF), a novel alternative generative model.
RF aims to learn straight flow trajectories from noise to data using a sequence of convex optimization problems.
arXiv Detail & Related papers (2024-10-19T02:36:11Z) - Weak Collocation Regression for Inferring Stochastic Dynamics with
L\'{e}vy Noise [8.15076267771005]
We propose a weak form of the Fokker-Planck (FP) equation for extracting dynamics with Lévy noise.
Our approach can simultaneously distinguish mixed noise types, even in multi-dimensional problems.
arXiv Detail & Related papers (2024-03-13T06:54:38Z) - Machine learning in and out of equilibrium [58.88325379746631]
Our study uses a Fokker-Planck approach, adapted from statistical physics, to explore these parallels.
We focus in particular on the stationary state of the system in the long-time limit, which in conventional SGD is out of equilibrium.
We propose a new variation of stochastic gradient Langevin dynamics (SGLD) that harnesses without-replacement minibatching.
arXiv Detail & Related papers (2023-06-06T09:12:49Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z) - Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z) - Generative Adversarial Networks to infer velocity components in rotating
turbulent flows [2.0873604996221946]
We show that CNN and GAN always outperform EPOD both concerning point-wise and statistical reconstructions.
The analysis is performed using standard validation tools based on the $L_2$ spatial distance between the prediction and the ground truth.
arXiv Detail & Related papers (2023-01-18T13:59:01Z) - Uncertainty Quantification for Transport in Porous media using
Parameterized Physics Informed neural Networks [0.0]
We present a Parameterized Physics-Informed Neural Network (P-PINN) approach to tackle the problem of uncertainty quantification in reservoir engineering problems.
We demonstrate the approach with the immiscible two phase flow displacement (Buckley-Leverett problem) in heterogeneous porous medium.
We show that provided a proper parameterization of the uncertainty space, PINN can produce solutions that match closely both the ensemble realizations and the moments.
arXiv Detail & Related papers (2022-05-19T06:23:23Z) - Learning Functional Priors and Posteriors from Data and Physics [3.537267195871802]
We develop a new framework based on deep neural networks to be able to extrapolate in space-time using historical data.
We employ the physics-informed Generative Adversarial Networks (PI-GAN) to learn a functional prior.
At the second stage, we employ the Hamiltonian Monte Carlo (HMC) method to estimate the posterior in the latent space of PI-GANs.
arXiv Detail & Related papers (2021-06-08T03:03:24Z) - On the eigenvector bias of Fourier feature networks: From regression to
solving multi-scale PDEs with physics-informed neural networks [0.0]
We show that physics-informed neural networks (PINNs) struggle in cases where the target functions to be approximated exhibit high-frequency or multi-scale features.
We construct novel architectures that employ multi-scale random Fourier features and justify how such coordinate embedding layers can lead to robust and accurate PINN models.
arXiv Detail & Related papers (2020-12-18T04:19:30Z) - Exponentially Weighted l_2 Regularization Strategy in Constructing
Reinforced Second-order Fuzzy Rule-based Model [72.57056258027336]
In the conventional Takagi-Sugeno-Kang (TSK)-type fuzzy models, constant or linear functions are usually utilized as the consequent parts of the fuzzy rules.
We introduce an exponential weight approach inspired by the weight function theory encountered in harmonic analysis.
arXiv Detail & Related papers (2020-07-02T15:42:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.