Reversible Gromov-Monge Sampler for Simulation-Based Inference
- URL: http://arxiv.org/abs/2109.14090v1
- Date: Tue, 28 Sep 2021 23:09:24 GMT
- Title: Reversible Gromov-Monge Sampler for Simulation-Based Inference
- Authors: YoonHaeng Hur, Wenxuan Guo, Tengyuan Liang
- Abstract summary: This paper introduces a new simulation-based inference procedure to model and sample from multi-dimensional probability distributions.
Motivated by the work of Mémoli (2011) and Sturm (2012) on distance and isomorphism between metric measure spaces, we propose a new notion called the Reversible Gromov-Monge (RGM) distance.
- Score: 3.725559762520257
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces a new simulation-based inference procedure to model and
sample from multi-dimensional probability distributions given access to i.i.d.
samples, circumventing usual approaches of explicitly modeling the density
function or designing Markov chain Monte Carlo. Motivated by the seminal work
of Mémoli (2011) and Sturm (2012) on distance and isomorphism between metric
measure spaces, we propose a new notion called the Reversible Gromov-Monge
(RGM) distance and study how RGM can be used to design new transform samplers
in order to perform simulation-based inference. Our RGM sampler can also
estimate optimal alignments between two heterogeneous metric measure spaces
$(\mathcal{X}, \mu, c_{\mathcal{X}})$ and $(\mathcal{Y}, \nu, c_{\mathcal{Y}})$
from empirical data sets, with estimated maps that approximately push forward
one measure $\mu$ to the other $\nu$, and vice versa. Analytic properties of
RGM distance are derived; statistical rate of convergence, representation, and
optimization questions regarding the induced sampler are studied. Synthetic and
real-world examples showcasing the effectiveness of the RGM sampler are also
demonstrated.
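To make the transform-sampler idea concrete, below is a minimal, illustrative sketch (not the authors' RGM algorithm): a neural map T is fitted so that it roughly preserves pairwise costs, in the spirit of a Gromov-Monge distortion term, while a kernel MMD penalty pushes the mapped source measure toward the target. The network size, cost functions, kernel, and penalty weight are all assumptions.

```python
# Illustrative sketch only, not the authors' RGM algorithm. Fit a map
# T: X -> Y that (a) roughly preserves pairwise costs (a Gromov-Monge-style
# distortion term) and (b) pushes mu-samples toward nu (a kernel MMD penalty).
import torch

torch.manual_seed(0)

# Toy i.i.d. samples: mu on X = R^2, nu on Y = R^2 (an anisotropic Gaussian).
x = torch.randn(512, 2)
y = torch.randn(512, 2) @ torch.tensor([[2.0, 0.0], [0.0, 0.5]])

T = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2)
)
opt = torch.optim.Adam(T.parameters(), lr=1e-3)

def sq_dists(z):
    # Pairwise squared-distance costs c(z_i, z_j) = ||z_i - z_j||^2.
    return torch.cdist(z, z) ** 2

def mmd2(a, b, sigma=1.0):
    # Biased Gaussian-kernel MMD^2 between two samples.
    k = lambda u, v: torch.exp(-torch.cdist(u, v) ** 2 / (2 * sigma**2))
    return k(a, a).mean() + k(b, b).mean() - 2.0 * k(a, b).mean()

for step in range(2000):
    xb = x[torch.randint(0, len(x), (128,))]
    yb = y[torch.randint(0, len(y), (128,))]
    distortion = ((sq_dists(xb) - sq_dists(T(xb))) ** 2).mean()
    loss = distortion + 10.0 * mmd2(T(xb), yb)  # penalty weight is an assumption
    opt.zero_grad()
    loss.backward()
    opt.step()

# T now acts as a transform sampler: fresh mu-draws map to approximate nu-draws.
new_samples = T(torch.randn(1000, 2)).detach()
```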
Related papers
- von Mises Quasi-Processes for Bayesian Circular Regression [57.88921637944379]
We explore a family of expressive and interpretable distributions over circle-valued random functions.
The resulting probability model has connections with continuous spin models in statistical physics.
For posterior inference, we introduce a new Stratonovich-like augmentation that lends itself to fast Markov Chain Monte Carlo sampling.
arXiv Detail & Related papers (2024-06-19T01:57:21Z)
- Probabilistic Unrolling: Scalable, Inverse-Free Maximum Likelihood Estimation for Latent Gaussian Models [69.22568644711113]
We introduce probabilistic unrolling, a method that combines Monte Carlo sampling with iterative linear solvers to circumvent matrix inversions.
Our theoretical analyses reveal that unrolling and backpropagation through the iterations of the solver can accelerate gradient estimation for maximum likelihood estimation.
In experiments on simulated and real data, we demonstrate that probabilistic unrolling learns latent Gaussian models up to an order of magnitude faster than gradient EM, with minimal losses in model performance.
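The core trick summarized above can be sketched generically: solve the linear system with a few unrolled conjugate-gradient iterations and backpropagate through them, never forming a matrix inverse. The model, solver depth, and toy objective below are assumptions, not the paper's implementation.

```python
# Minimal sketch, assumptions throughout: replace an explicit matrix inverse
# with a few unrolled conjugate-gradient (CG) iterations and backpropagate
# through them to get parameter gradients, never forming A^{-1}.
import torch

torch.manual_seed(0)
n = 50
K = torch.randn(n, n)
K = K @ K.T / n                                   # fixed PSD "kernel" matrix
b = torch.randn(n)
log_noise = torch.zeros((), requires_grad=True)   # parameter to learn

def unrolled_cg(matvec, b, iters=15):
    # Plain CG for A v = b, written so autograd can flow through every step.
    v = torch.zeros_like(b)
    r = b - matvec(v)
    p = r.clone()
    for _ in range(iters):
        Ap = matvec(p)
        alpha = (r @ r) / (p @ Ap)
        v = v + alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return v

opt = torch.optim.Adam([log_noise], lr=0.05)
for step in range(100):
    matvec = lambda u: K @ u + torch.exp(log_noise) * u  # A = K + sigma^2 I
    v = unrolled_cg(matvec, b)        # v ~= A^{-1} b without an inverse
    loss = b @ v                      # toy quadratic-form objective, not the
    opt.zero_grad()                   # paper's marginal likelihood
    loss.backward()
    opt.step()
```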
arXiv Detail & Related papers (2023-06-05T21:08:34Z)
- Score-based Continuous-time Discrete Diffusion Models [102.65769839899315]
We extend diffusion models to discrete variables by introducing a Markov jump process where the reverse process denoises via a continuous-time Markov chain.
We show that an unbiased estimator can be obtained by simply matching the conditional marginal distributions.
We demonstrate the effectiveness of the proposed method on a set of synthetic and real-world music and image benchmarks.
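For intuition, the reverse process described above is a continuous-time Markov chain over discrete states; a generic way to simulate such a chain is Gillespie-style sampling of exponential holding times. The rate matrix below is an arbitrary toy, not the paper's learned denoiser.

```python
# Toy sketch: Gillespie simulation of a continuous-time Markov chain (CTMC).
# The rate matrix Q is arbitrary; in the paper's setting the reverse-process
# jump rates would come from a learned score model instead.
import numpy as np

rng = np.random.default_rng(0)

# Rate matrix for 3 states: off-diagonals are jump rates, rows sum to zero.
Q = np.array([[-1.0, 0.6, 0.4],
              [0.5, -1.5, 1.0],
              [0.3, 0.7, -1.0]])

def simulate_ctmc(Q, x0, t_end):
    t, x, path = 0.0, x0, [(0.0, x0)]
    while True:
        rate = -Q[x, x]                       # total jump rate out of state x
        t += rng.exponential(1.0 / rate)      # exponential holding time
        if t >= t_end:
            return path
        probs = Q[x].copy()
        probs[x] = 0.0
        probs /= probs.sum()                  # distribution over next states
        x = int(rng.choice(len(Q), p=probs))
        path.append((t, x))

print(simulate_ctmc(Q, x0=0, t_end=5.0))
```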
arXiv Detail & Related papers (2022-11-30T05:33:29Z)
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z)
- MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
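A common concrete form of this idea trains a binary classifier to separate joint (theta, x) pairs from shuffled (marginal) pairs; its logit then estimates the log likelihood-to-evidence ratio. The sketch below shows that generic trick on a toy simulator, not the MINIMALIST objective itself.

```python
# Generic sketch of amortized ratio estimation, not the MINIMALIST objective:
# a classifier separating joint (theta, x) pairs from shuffled (marginal)
# pairs learns a logit equal to log p(x|theta) - log p(x).
import torch

torch.manual_seed(0)

def simulate(theta):
    # Toy simulator: x | theta ~ N(theta, 1).
    return theta + torch.randn_like(theta)

theta = torch.randn(4096, 1)            # prior draws
x = simulate(theta)                     # joint pairs (theta_i, x_i)

net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
bce = torch.nn.BCEWithLogitsLoss()

for step in range(1000):
    x_shuf = x[torch.randperm(len(x))]  # break dependence: (theta_i, x_j)
    joint = torch.cat([theta, x], dim=1)
    marg = torch.cat([theta, x_shuf], dim=1)
    logits = net(torch.cat([joint, marg])).squeeze(-1)
    labels = torch.cat([torch.ones(len(joint)), torch.zeros(len(marg))])
    loss = bce(logits, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

# net's logit now estimates the likelihood-to-evidence log ratio, so the
# posterior is proportional to prior(theta) * exp(logit(theta, x)).
```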
arXiv Detail & Related papers (2021-06-03T12:59:16Z)
- Continual Learning with Fully Probabilistic Models [70.3497683558609]
We present an approach for continual learning based on fully probabilistic (or generative) models of machine learning.
We propose a pseudo-rehearsal approach using a Gaussian Mixture Model (GMM) instance for both generator and classifier functionalities.
We show that the resulting GMR method achieves state-of-the-art performance on common class-incremental learning problems at very competitive time and memory complexity.
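Pseudo-rehearsal with a GMM can be illustrated compactly: fit a GMM on past data, discard the data, and sample surrogate examples from the GMM when a new task arrives. The two-task setup and model sizes below are assumptions, not the paper's experimental protocol.

```python
# Compact illustration, assumptions throughout: GMM pseudo-rehearsal.
# Fit a GMM on past data, discard the data, and replay surrogate samples
# from the GMM when training continues on a new task.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Task 1 data; after fitting the generator the raw data is dropped.
X_old = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(500, 2))
gmm = GaussianMixture(n_components=5, random_state=0).fit(X_old)
del X_old

# Task 2 arrives: mix real new data with pseudo-samples replayed from the GMM.
X_new = rng.normal(loc=[4.0, 4.0], scale=1.0, size=(500, 2))
X_replay, _ = gmm.sample(500)
X_train = np.vstack([X_new, X_replay])
y_train = np.concatenate([np.ones(500), np.zeros(500)])   # task/class labels

# Any classifier trained on (X_train, y_train) still sees task-1-like
# examples, mitigating catastrophic forgetting without storing old data.
```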
arXiv Detail & Related papers (2021-04-19T12:26:26Z)
- Approximate Bayesian inference from noisy likelihoods with Gaussian process emulated MCMC [0.24275655667345403]
We model the log-likelihood function using a Gaussian process (GP).
The main methodological innovation is to apply this model to emulate the progression that an exact Metropolis-Hastings (MH) sampler would take.
The resulting approximate sampler is conceptually simple and sample-efficient.
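Generically, the emulation step looks like this: fit a GP to a handful of noisy log-likelihood evaluations, then run Metropolis-Hastings against the cheap GP mean. The 1-D toy below omits the paper's refinement of the emulator along the chain.

```python
# Toy sketch, refinement logic omitted: fit a GP to a few noisy
# log-likelihood evaluations, then run Metropolis-Hastings (flat prior
# implicit) against the cheap GP posterior mean.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def noisy_loglik(theta):
    # Stand-in for an expensive, noisy log-likelihood (optimum at theta = 2).
    return -0.5 * (theta - 2.0) ** 2 + 0.1 * rng.normal()

# Fit the GP emulator on a small design of evaluations.
thetas = np.linspace(-3.0, 6.0, 25)
lls = np.array([noisy_loglik(t) for t in thetas])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.1**2)
gp.fit(thetas.reshape(-1, 1), lls)

def emulated(t):
    return gp.predict(np.array([[t]]))[0]

# Metropolis-Hastings driven by the emulator instead of the real likelihood.
theta, chain = 0.0, []
for _ in range(5000):
    prop = theta + rng.normal(scale=0.5)
    if np.log(rng.uniform()) < emulated(prop) - emulated(theta):
        theta = prop
    chain.append(theta)

print("approximate posterior mean:", np.mean(chain[1000:]))  # near 2.0
```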
arXiv Detail & Related papers (2021-04-08T17:38:02Z)
- Gaussian Function On Response Surface Estimation [12.35564140065216]
We propose a new framework for interpreting black-box machine learning models, at the level of both features and samples, via a metamodeling technique.
The metamodel can be estimated from data generated via a trained complex model by running the computer experiment on samples of data in the region of interest.
arXiv Detail & Related papers (2021-01-04T04:47:00Z)
- Score Matched Conditional Exponential Families for Likelihood-Free Inference [0.0]
Likelihood-Free Inference (LFI) relies on simulations from the model.
We generate parameter-simulation pairs from the model, independently of the observation.
We use Neural Networks whose weights are tuned with Score Matching to learn a conditional exponential family likelihood approximation.
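Score matching fits an unnormalized density by matching its score d/dx log p(x); for a 1-D exponential family with sufficient statistics (x, x^2), Hyvärinen's objective E[0.5 s(x)^2 + s'(x)] takes the simple form sketched below. This is the generic objective, not the paper's conditional-network version.

```python
# Generic score-matching sketch, not the paper's conditional networks: fit
# natural parameters (eta1, eta2) of p(x) proportional to exp(eta1*x + eta2*x^2)
# by minimizing Hyvarinen's objective E[0.5*s(x)^2 + s'(x)], where the score is
# s(x) = d/dx log p(x) = eta1 + 2*eta2*x and s'(x) = 2*eta2.
import torch

torch.manual_seed(0)
x = 2.0 + torch.randn(5000)            # data from N(2, 1)

eta = torch.zeros(2, requires_grad=True)
opt = torch.optim.Adam([eta], lr=0.05)

for step in range(1000):
    s = eta[0] + 2.0 * eta[1] * x      # model score at the data points
    loss = (0.5 * s**2 + 2.0 * eta[1]).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# For N(mu, sigma^2): eta1 = mu/sigma^2 and eta2 = -1/(2*sigma^2), so the
# fit should approach eta ~ (2.0, -0.5) with no normalizing constant needed.
print(eta.detach())
```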
arXiv Detail & Related papers (2020-12-20T11:57:30Z)
- Isometric Gaussian Process Latent Variable Model for Dissimilarity Data [0.0]
We present a probabilistic model where the latent variable respects both the distances and the topology of the modeled data.
The model is inferred by variational inference based on observations of pairwise distances.
arXiv Detail & Related papers (2020-06-21T08:56:18Z)