Learning the temporal evolution of multivariate densities via
normalizing flows
- URL: http://arxiv.org/abs/2107.13735v1
- Date: Thu, 29 Jul 2021 04:05:02 GMT
- Title: Learning the temporal evolution of multivariate densities via
normalizing flows
- Authors: Yubin Lu, Romit Maulik, Ting Gao, Felix Dietrich, Ioannis G.
Kevrekidis, Jinqiao Duan
- Abstract summary: We propose a method to learn probability distributions using sample path data from differential equations.
We analyze this evolution through machine learning assisted construction of a time-dependent mapping.
We demonstrate that this approach can learn solutions to non-local Fokker-Planck equations.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we propose a method to learn probability distributions using
sample path data from stochastic differential equations. Specifically, we
consider temporally evolving probability distributions (e.g., those produced by
integrating local or nonlocal Fokker-Planck equations). We analyze this
evolution through machine learning assisted construction of a time-dependent
mapping that takes a reference distribution (say, a Gaussian) to each and every
instance of our evolving distribution. If the reference distribution is the
initial condition of a Fokker-Planck equation, what we learn is the time-T map
of the corresponding solution. Specifically, the learned map is a normalizing
flow that deforms the support of the reference density to the support of each
and every density snapshot in time. We demonstrate that this approach can learn
solutions to non-local Fokker-Planck equations, such as those arising in
systems driven by both Brownian and L\'evy noise. We present examples with two-
and three-dimensional, uni- and multimodal distributions to validate the
method.
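The time-dependent map described in the abstract can be illustrated in one dimension with a simple quantile-matching transport: a toy stand-in for the paper's normalizing flow, not the authors' architecture. All names and the target "snapshot" below are hypothetical; the point is only that a monotone map can push a Gaussian reference onto a sampled density.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reference samples: standard Gaussian (the "reference density").
x_ref = rng.standard_normal(5000)

# Hypothetical time-T snapshot: a shifted, scaled Gaussian, standing in
# for sample-path data of an SDE observed at one time slice.
x_T = 2.0 + 0.5 * rng.standard_normal(5000)

# In 1-D, a monotone transport map can be estimated by matching empirical
# quantiles: sort both samples and interpolate reference quantile -> target.
q_ref = np.sort(x_ref)
q_T = np.sort(x_T)

def time_T_map(x):
    """Push reference points onto the time-T density via quantile matching."""
    return np.interp(x, q_ref, q_T)

pushed = time_T_map(x_ref)
print(pushed.mean(), pushed.std())  # close to (2.0, 0.5)
```

In higher dimensions no such sorting trick exists, which is where a learned normalizing flow (an invertible neural map trained by maximum likelihood) takes over.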
Related papers
- A Stein Gradient Descent Approach for Doubly Intractable Distributions [5.63014864822787]
We propose a novel Monte Carlo Stein variational gradient descent (MC-SVGD) approach for inference for doubly intractable distributions.
The proposed method achieves substantial computational gains over existing algorithms, while providing comparable inferential performance for the posterior distributions.
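For context, the classical SVGD particle update that MC-SVGD builds on can be sketched in a few lines. This is a generic illustration of plain SVGD with an RBF kernel on a known target, not the paper's MC-SVGD algorithm; the bandwidth, step size, and target are arbitrary choices.

```python
import numpy as np

def svgd_step(x, grad_logp, h=1.0, eps=0.1):
    """One SVGD update for 1-D particles x with an RBF kernel of bandwidth h."""
    d = x[:, None] - x[None, :]          # pairwise differences, shape (n, n)
    k = np.exp(-d**2 / (2 * h**2))       # kernel matrix k(x_j, x_i)
    grad_k = -d / h**2 * k               # gradient of k w.r.t. x_j (repulsion)
    # phi(x_i) = (1/n) sum_j [ k(x_j, x_i) grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]
    phi = (k @ grad_logp(x) + grad_k.sum(axis=0)) / len(x)
    return x + eps * phi

# Target: standard Gaussian, so grad log p(x) = -x.
rng = np.random.default_rng(1)
x = rng.uniform(-4, 4, size=100)
for _ in range(500):
    x = svgd_step(x, lambda x: -x)
print(x.mean(), x.std())  # particles roughly match the target
```

The attraction term drives particles toward high-density regions while the kernel-gradient term repels them from each other, which is what keeps the particle cloud spread over the posterior rather than collapsing to the mode.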
arXiv Detail & Related papers (2024-10-28T13:42:27Z)
- Theoretical Insights for Diffusion Guidance: A Case Study for Gaussian Mixture Models [59.331993845831946]
Diffusion models benefit from instillation of task-specific information into the score function to steer the sample generation towards desired properties.
This paper provides the first theoretical study towards understanding the influence of guidance on diffusion models in the context of Gaussian mixture models.
arXiv Detail & Related papers (2024-03-03T23:15:48Z)
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
- Noise-Free Sampling Algorithms via Regularized Wasserstein Proximals [3.4240632942024685]
We consider the problem of sampling from a distribution governed by a potential function.
This work proposes an explicit score based MCMC method that is deterministic, resulting in a deterministic evolution for particles.
arXiv Detail & Related papers (2023-08-28T23:51:33Z)
- Approximating a RUM from Distributions on k-Slates [88.32814292632675]
We give an efficient algorithm that finds the RUM that best approximates the given distribution on average.
Our theoretical result can also be made practical: we obtain a method that is effective and scales to real-world datasets.
arXiv Detail & Related papers (2023-05-22T17:43:34Z)
- Approximate sampling and estimation of partition functions using neural networks [0.0]
We show how variational autoencoders (VAEs) can be applied to this task.
We invert the logic and train the VAE to fit a simple and tractable distribution, on the assumption of a complex and intractable latent distribution, specified up to normalization.
This procedure constructs approximations without the use of training data or Markov chain Monte Carlo sampling.
arXiv Detail & Related papers (2022-09-21T15:16:45Z)
- Probability flow solution of the Fokker-Planck equation [10.484851004093919]
We introduce an alternative scheme based on integrating an ordinary differential equation that describes the flow of probability.
Unlike the dynamics, this equation deterministically pushes samples from the initial density onto samples from the solution at any later time.
Our approach is based on recent advances in score-based diffusion for generative modeling.
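The deterministic push from initial density to later-time samples can be checked on a toy Ornstein-Uhlenbeck process, where the score of p_t is known in closed form. This is an illustrative sketch under that closed-form assumption, not the paper's learned-score method.

```python
import numpy as np

# OU process: dX = -X dt + sqrt(2) dW, with stationary law N(0, 1).
# Starting from N(m0, s0^2), p_t stays Gaussian with known mean/variance,
# so grad log p_t is available exactly for this toy case.
m0, s0 = 3.0, 0.5

def mean_var(t):
    m = m0 * np.exp(-t)
    v = s0**2 * np.exp(-2 * t) + 1.0 - np.exp(-2 * t)
    return m, v

def score(x, t):
    m, v = mean_var(t)
    return -(x - m) / v

# Probability flow ODE: dx/dt = f(x) - (1/2) g^2 * score = -x - score(x, t),
# since f(x) = -x and g^2 = 2 here. Integrate with explicit Euler.
rng = np.random.default_rng(0)
x = m0 + s0 * rng.standard_normal(10000)

T, n_steps = 1.0, 1000
dt = T / n_steps
for i in range(n_steps):
    x = x + dt * (-x - score(x, i * dt))

m_T, v_T = mean_var(T)
print(x.mean(), x.var())  # close to the exact (m_T, v_T)
```

No noise is injected along the way: each initial sample is carried deterministically, yet the empirical mean and variance at time T match the Fokker-Planck solution.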
arXiv Detail & Related papers (2022-06-09T17:37:09Z)
- GANs as Gradient Flows that Converge [3.8707695363745223]
We show that along the gradient flow induced by a distribution-dependent ordinary differential equation, the unknown data distribution emerges as the long-time limit.
The simulation of the ODE is shown to be equivalent to the training of generative adversarial networks (GANs).
This equivalence provides a new "cooperative" view of GANs and, more importantly, sheds new light on the divergence of GANs.
arXiv Detail & Related papers (2022-05-05T20:29:13Z)
- Wrapped Distributions on homogeneous Riemannian manifolds [58.720142291102135]
Control over the distributions' properties, such as parameters, symmetry, and modality, yields a family of flexible distributions.
We empirically validate our approach by utilizing our proposed distributions within a variational autoencoder and a latent space network model.
arXiv Detail & Related papers (2022-04-20T21:25:21Z)
- Equivariance Discovery by Learned Parameter-Sharing [153.41877129746223]
We study how to discover interpretable equivariances from data.
Specifically, we formulate this discovery process as an optimization problem over a model's parameter-sharing schemes.
Also, we theoretically analyze the method for Gaussian data and provide a bound on the mean squared gap between the studied discovery scheme and the oracle scheme.
arXiv Detail & Related papers (2022-04-07T17:59:19Z)
- Density Ratio Estimation via Infinitesimal Classification [85.08255198145304]
We propose DRE-infty, a divide-and-conquer approach to reduce Density ratio estimation (DRE) to a series of easier subproblems.
Inspired by Monte Carlo methods, we smoothly interpolate between the two distributions via an infinite continuum of intermediate bridge distributions.
We show that our approach performs well on downstream tasks such as mutual information estimation and energy-based modeling on complex, high-dimensional datasets.
arXiv Detail & Related papers (2021-11-22T06:26:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.