Probabilistic Optimal Transport based on Collective Graphical Models
- URL: http://arxiv.org/abs/2006.08866v1
- Date: Tue, 16 Jun 2020 02:03:34 GMT
- Title: Probabilistic Optimal Transport based on Collective Graphical Models
- Authors: Yasunori Akagi, Yusuke Tanaka, Tomoharu Iwata, Takeshi Kurashima,
Hiroyuki Toda
- Abstract summary: Optimal Transport (OT) is a powerful tool for measuring the similarity between probability distributions and histograms.
We propose a new framework in which OT is considered as a maximum a posteriori (MAP) solution of a probabilistic generative model.
- Score: 38.49457447599772
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optimal Transport (OT) is being widely used in various fields such as machine
learning and computer vision, as it is a powerful tool for measuring the
similarity between probability distributions and histograms. In previous
studies, OT has been defined as the minimum cost to transport probability mass
from one probability distribution to another. In this study, we propose a new
framework in which OT is considered as a maximum a posteriori (MAP) solution of
a probabilistic generative model. With the proposed framework, we show that OT
with entropic regularization is equivalent to maximizing a posterior
probability of a probabilistic model called Collective Graphical Model (CGM),
which describes aggregated statistics of multiple samples generated from a
graphical model. Interpreting OT as a MAP solution of a CGM has the following
two advantages: (i) We can calculate the discrepancy between noisy histograms
by modeling noise distributions. Since various distributions can be used for
noise modeling, it is possible to select the noise distribution flexibly to
suit the situation. (ii) We can construct a new method for interpolation
between histograms, which is an important application of OT. The proposed
method allows for intuitive modeling based on the probabilistic
interpretations, and a simple and efficient estimation algorithm is available.
Experiments using synthetic and real-world spatio-temporal population datasets
show the effectiveness of the proposed interpolation method.
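As a point of reference for the entropic-regularized OT that the paper reinterprets, here is a minimal NumPy sketch of the standard Sinkhorn iteration between two histograms. This is a generic textbook routine with illustrative parameter choices (reg, n_iters), not the authors' CGM-based MAP formulation; the noise modeling and interpolation methods from the abstract are not shown.

```python
# Minimal sketch of entropic-regularized OT via Sinkhorn iterations (generic,
# not the paper's CGM-based algorithm).
import numpy as np

def sinkhorn(a, b, C, reg=0.1, n_iters=1000):
    """Entropic-OT plan between histograms a and b (illustrative parameters)."""
    K = np.exp(-C / reg)               # Gibbs kernel from the cost matrix
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)              # scale columns toward marginal b
        u = a / (K @ v)                # scale rows toward marginal a
    P = u[:, None] * K * v[None, :]    # entropic transport plan
    return P, float(np.sum(P * C))     # plan and its transport cost

# Toy example: two histograms on a 1-D grid with squared-distance cost.
x = np.arange(5, dtype=float)
C = (x[:, None] - x[None, :]) ** 2
a = np.array([0.5, 0.3, 0.2, 0.0, 0.0])
b = np.array([0.0, 0.0, 0.2, 0.3, 0.5])
P, cost = sinkhorn(a, b, C)
print(P.sum(axis=1), cost)  # row sums recover a; cost is the regularized OT value
```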
Related papers
- Bayesian Estimation and Tuning-Free Rank Detection for Probability Mass Function Tensors [17.640500920466984]
This paper presents a novel framework for estimating the joint PMF and automatically inferring its rank from observed data.
We derive a deterministic solution based on variational inference (VI) to approximate the posterior distributions of various model parameters. Additionally, we develop a scalable version of the VI-based approach by leveraging stochastic variational inference (SVI).
Experiments involving both synthetic data and real movie recommendation data illustrate the advantages of our VI and SVI-based methods in terms of estimation accuracy, automatic rank detection, and computational efficiency. (A sketch of the low-rank PMF model appears after this list.)
arXiv Detail & Related papers (2024-10-08T20:07:49Z)
- Dynamical Measure Transport and Neural PDE Solvers for Sampling [77.38204731939273]
We tackle the task of sampling from a probability density by transporting a tractable density function to the target.
We employ physics-informed neural networks (PINNs) to approximate the solutions of the respective partial differential equations (PDEs).
PINNs allow for simulation- and discretization-free optimization and can be trained very efficiently. (A toy PINN sketch appears after this list.)
arXiv Detail & Related papers (2024-07-10T17:39:50Z)
- Cheap and Deterministic Inference for Deep State-Space Models of Interacting Dynamical Systems [38.23826389188657]
We present a deep state-space model which employs graph neural networks in order to model the underlying interacting dynamical system.
The predictive distribution is multimodal and has the form of a Gaussian mixture model, where the moments of the Gaussian components can be computed via deterministic moment matching rules.
Our moment matching scheme can be exploited for sample-free inference, leading to more efficient and stable training than Monte Carlo alternatives. (A closed-form moment-matching sketch appears after this list.)
arXiv Detail & Related papers (2023-05-02T20:30:23Z)
- Hierarchical Latent Structure for Multi-Modal Vehicle Trajectory Forecasting [0.0]
We introduce a hierarchical latent structure into a VAE-based trajectory forecasting model.
Our model is capable of generating clear multi-modal trajectory distributions and outperforms the state-of-the-art (SOTA) models in terms of prediction accuracy.
arXiv Detail & Related papers (2022-07-11T04:52:28Z)
- Sampling from Arbitrary Functions via PSD Models [55.41644538483948]
We take a two-step approach by first modeling the probability distribution and then sampling from that model.
We show that these models can approximate a large class of densities concisely using few evaluations, and present a simple algorithm to effectively sample from these models.
arXiv Detail & Related papers (2021-10-20T12:25:22Z)
- Information Theoretic Structured Generative Modeling [13.117829542251188]
A novel generative model framework called the structured generative model (SGM) is proposed that makes straightforward optimization possible.
The implementation employs a single neural network driven by an orthonormal input to a single white noise source adapted to learn an infinite Gaussian mixture model.
Preliminary results show that SGM significantly improves on MINE estimation in terms of data efficiency and variance, outperforms conventional and variational Gaussian mixture models, and is also useful for training adversarial networks.
arXiv Detail & Related papers (2021-10-12T07:44:18Z)
- MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data. (A toy ratio-trick sketch appears after this list.)
arXiv Detail & Related papers (2021-06-03T12:59:16Z)
- Autoregressive Score Matching [113.4502004812927]
We propose autoregressive conditional score models (AR-CSM), where we parameterize the joint distribution in terms of the derivatives of univariate log-conditionals (scores).
For AR-CSM models, this divergence between data and model distributions can be computed and optimized efficiently, requiring no expensive sampling or adversarial training.
We show with extensive experimental results that it can be applied to density estimation on synthetic data, image generation, image denoising, and training latent variable models with implicit encoders.
arXiv Detail & Related papers (2020-10-24T07:01:24Z)
- Probabilistic Circuits for Variational Inference in Discrete Graphical Models [101.28528515775842]
Inference in discrete graphical models with variational methods is difficult.
Many sampling-based methods have been proposed for estimating the Evidence Lower Bound (ELBO).
We propose a new approach that leverages the tractability of probabilistic circuit models, such as Sum Product Networks (SPNs).
We show that selective-SPNs are suitable as an expressive variational distribution, and prove that when the log-density of the target model is a polynomial, the corresponding ELBO can be computed analytically. (A worked example of the polynomial-expectation step appears after this list.)
arXiv Detail & Related papers (2020-10-22T05:04:38Z)
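For the PMF-tensor entry above, this NumPy sketch shows the kind of low-rank joint-PMF model being estimated: a mixture of rank-1 product distributions, i.e. a CP decomposition of the probability tensor. The rank, variable cardinalities, and Dirichlet draws are illustrative assumptions; the paper's VI/SVI estimators and automatic rank detection are not implemented.

```python
# Hypothetical illustration: a rank-K joint PMF over three discrete variables,
# built as a mixture of product distributions (CP decomposition).
import numpy as np

rng = np.random.default_rng(0)
K, dims = 3, (4, 5, 6)                         # assumed rank and cardinalities
lam = rng.dirichlet(np.ones(K))                # mixture weights, sum to 1
factors = [rng.dirichlet(np.ones(d), size=K) for d in dims]  # p_d(x_d | k), rows sum to 1

# Joint PMF tensor: P[a, b, c] = sum_k lam[k] * f0[k, a] * f1[k, b] * f2[k, c]
P = np.einsum("k,ka,kb,kc->abc", lam, *factors)
print(P.shape, P.sum())  # (4, 5, 6), total probability mass 1.0
```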
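For the measure-transport / PINN entry, the following PyTorch sketch shows the generic PINN recipe: penalize the squared residual of a differential equation at random collocation points. The toy ODE u' = -u with u(0) = 1 is an assumption chosen for brevity; the paper's transport PDEs and target densities are not reproduced here.

```python
# Toy PINN: fit a network u(t) to satisfy u'(t) = -u(t), u(0) = 1,
# by penalizing the squared residual at random collocation points.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    t = torch.rand(128, 1, requires_grad=True)   # collocation points in [0, 1]
    u = net(t)
    du = torch.autograd.grad(u.sum(), t, create_graph=True)[0]  # du/dt
    residual = du + u                            # enforce u' = -u
    u0 = net(torch.zeros(1, 1))                  # boundary condition u(0) = 1
    loss = residual.pow(2).mean() + (u0 - 1.0).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(net(torch.tensor([[1.0]])).item())  # should approach exp(-1) ≈ 0.3679
```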
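For the deep state-space entry, this NumPy sketch illustrates the deterministic moment-matching rule its summary mentions: the overall mean and covariance of a Gaussian mixture computed in closed form, with no sampling. The rule itself is standard; the paper's graph-neural-network state-space model is not shown, and the mixture parameters below are arbitrary.

```python
# Closed-form (sample-free) moments of a Gaussian mixture.
import numpy as np

def gmm_moments(weights, means, covs):
    """Mean and covariance of a Gaussian mixture via moment matching."""
    mean = np.einsum("k,kd->d", weights, means)
    # E[x x^T] = sum_k w_k (Sigma_k + mu_k mu_k^T); subtract mu mu^T for the covariance.
    second = np.einsum("k,kde->de", weights,
                       covs + np.einsum("kd,ke->kde", means, means))
    return mean, second - np.outer(mean, mean)

w = np.array([0.3, 0.7])                        # assumed mixture weights
mu = np.array([[0.0, 0.0], [2.0, 1.0]])         # assumed component means
Sigma = np.stack([np.eye(2), 0.5 * np.eye(2)])  # assumed component covariances
m, S = gmm_moments(w, mu, Sigma)
print(m)
print(S)
```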
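For the MINIMALIST entry, this sketch (using scikit-learn, an assumption made for brevity) demonstrates the classifier-based likelihood-to-evidence ratio trick that amortized simulation-based inference builds on: a classifier separating joint (theta, x) pairs from shuffled pairs has logits approximating log p(x|theta)/p(x). The toy Gaussian simulator and quadratic feature map are illustrative, not the paper's mutual-information objective.

```python
# Likelihood-to-evidence ratio via the classifier (ratio) trick, toy version.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20000
theta = rng.normal(0.0, 1.0, size=n)        # draws from the prior
x = theta + rng.normal(0.0, 1.0, size=n)    # toy simulator: x ~ N(theta, 1)

def feats(t, x):
    # Quadratic features suffice for this Gaussian toy problem.
    return np.stack([t, x, t * x, t ** 2, x ** 2], axis=1)

# Class 1: dependent (joint) pairs; class 0: shuffled (marginal) pairs.
X = np.vstack([feats(theta, x), feats(rng.permutation(theta), x)])
y = np.concatenate([np.ones(n), np.zeros(n)])

clf = LogisticRegression(max_iter=1000).fit(X, y)
# The classifier's logit estimates log p(x, theta) - log p(x) p(theta),
# i.e. the log likelihood-to-evidence ratio log p(x | theta) / p(x).
print(clf.decision_function(feats(np.array([0.5]), np.array([0.8]))))
```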
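For the probabilistic-circuits entry, this small worked example backs up the "polynomial log-density gives an analytic ELBO" claim in a simple special case: under a fully factorized Bernoulli q, the expectation of a multilinear polynomial is obtained exactly by substituting the Bernoulli means, since independence gives E[x_i x_j] = q_i q_j. The polynomial below is an arbitrary stand-in; selective-SPNs themselves are not involved.

```python
# Analytic expectation of a multilinear polynomial under factorized Bernoulli q.
import numpy as np

rng = np.random.default_rng(0)
q = np.array([0.2, 0.7, 0.5])  # Bernoulli means of a factorized q(x)

def log_p(x):
    # An arbitrary multilinear polynomial standing in for a log-density.
    x1, x2, x3 = x
    return 1.5 * x1 * x2 - 2.0 * x2 * x3 + 0.5 * x1 - 1.0

analytic = log_p(q)  # substituting means is exact for multilinear polynomials

# Monte Carlo sanity check under q.
samples = (rng.random((200_000, 3)) < q).astype(float)
mc = np.mean(log_p(samples.T))
print(analytic, mc)  # the two values agree up to MC noise
```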
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.