PAVI: Plate-Amortized Variational Inference
- URL: http://arxiv.org/abs/2308.16022v1
- Date: Wed, 30 Aug 2023 13:22:20 GMT
- Title: PAVI: Plate-Amortized Variational Inference
- Authors: Louis Rouillard, Alexandre Le Bris, Thomas Moreau, Demian Wassermann
- Abstract summary: Inference is challenging for large population studies where millions of measurements are performed over a cohort of hundreds of subjects.
This large cardinality renders off-the-shelf Variational Inference (VI) computationally impractical.
In this work, we design structured VI families that efficiently tackle large population studies.
- Score: 55.975832957404556
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Given observed data and a probabilistic generative model, Bayesian inference
searches for the distribution of the model's parameters that could have yielded
the data. Inference is challenging for large population studies where millions
of measurements are performed over a cohort of hundreds of subjects, resulting
in a massive parameter space. This large cardinality renders off-the-shelf
Variational Inference (VI) computationally impractical.
In this work, we design structured VI families that efficiently tackle large
population studies. Our main idea is to share the parameterization and learning
across the different i.i.d. variables in a generative model, symbolized by the
model's "plates". We name this concept "plate amortization".
Contrary to off-the-shelf stochastic VI, which slows down inference, plate
amortization yields variational distributions that are orders of magnitude
faster to train.
Applied to large-scale hierarchical problems, PAVI yields expressive,
parsimoniously parameterized VI with an affordable training time. This faster
convergence effectively unlocks inference in those large regimes. We illustrate
the practical utility of PAVI through a challenging Neuroimaging example
featuring 400 million latent parameters, demonstrating a significant step
towards scalable and expressive Variational Inference.
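To make the core idea concrete, below is a minimal, hypothetical sketch of plate amortization in PyTorch. It is not the authors' PAVI implementation: it only contrasts a per-subject variational parameterization, whose size grows with the cohort, with a single encoder shared across all subjects of the plate, which keeps the parameter count fixed and allows training on mini-batches of subjects. The names SubjectEncoder, elbo_step, the network sizes, and the log_joint placeholder are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.distributions import Normal

n_subjects, n_obs, latent_dim = 500, 100, 8

# Non-amortized VI: a dedicated (mu, log_sigma) pair per subject,
# so the number of variational parameters grows linearly with the cohort.
per_subject_params = nn.Parameter(torch.zeros(n_subjects, 2 * latent_dim))

# Plate-amortized VI: one encoder shared across every subject of the plate.
# Its parameter count is independent of n_subjects.
class SubjectEncoder(nn.Module):
    def __init__(self, n_obs, latent_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_obs, 64), nn.ReLU(),
            nn.Linear(64, 2 * latent_dim),
        )

    def forward(self, x):                      # x: (batch, n_obs) per-subject data
        mu, log_sigma = self.net(x).chunk(2, dim=-1)
        return Normal(mu, log_sigma.exp())     # q(theta_s | x_s)

encoder = SubjectEncoder(n_obs, latent_dim)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

def elbo_step(x_batch, log_joint):
    """One stochastic ELBO step on a mini-batch of subjects (a plate subsample).

    log_joint(z, x) must return log p(x, z) per subject; it comes from the
    user's generative model and is only a placeholder here."""
    q = encoder(x_batch)                       # amortized posteriors for the batch
    z = q.rsample()                            # reparameterized samples
    elbo = log_joint(z, x_batch) - q.log_prob(z).sum(-1)
    loss = -elbo.mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the encoder's cost stays constant as the cohort grows and each training step only touches a subsample of the subject plate, which is the kind of synergy the abstract refers to when it describes variational distributions that are orders of magnitude faster to train.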
Related papers
- Variational Learning of Gaussian Process Latent Variable Models through Stochastic Gradient Annealed Importance Sampling [22.256068524699472]
In this work, we propose an Annealed Importance Sampling (AIS) approach to address these issues.
We combine the strengths of Sequential Monte Carlo samplers and VI to explore a wider range of posterior distributions and gradually approach the target distribution.
Experimental results on both toy and image datasets demonstrate that our method outperforms state-of-the-art methods in terms of tighter variational bounds, higher log-likelihoods, and more robust convergence.
arXiv Detail & Related papers (2024-08-13T08:09:05Z) - Amortized Variational Inference: When and Why? [17.1222896154385]
Amortized variational inference (A-VI) learns a common inference function, which maps each observation to its corresponding latent variable's approximate posterior.
We derive conditions on a latent variable model which are necessary, sufficient, and verifiable under which A-VI can attain F-VI's optimal solution.
arXiv Detail & Related papers (2023-07-20T16:45:22Z) - Evidence Networks: simple losses for fast, amortized, neural Bayesian
model comparison [0.0]
Evidence Networks can enable Bayesian model comparison when state-of-the-art methods fail.
We introduce the leaky parity-odd power transform, leading to the novel "l-POP-Exponential" loss function.
We show that Evidence Networks are explicitly independent of dimensionality of the parameter space and scale mildly with the complexity of the posterior probability density function.
arXiv Detail & Related papers (2023-05-18T18:14:53Z) - PAVI: Plate-Amortized Variational Inference [0.0]
Variational Inference is challenging for large population studies where thousands of measurements are performed over a cohort of hundreds of subjects.
In this work, we design structured VI families that can efficiently tackle large population studies.
We name this concept plate amortization, and illustrate the powerful synergies it entails, yielding expressive, parsimoniously parameterized, large-scale hierarchical variational distributions that are orders of magnitude faster to train.
arXiv Detail & Related papers (2022-06-10T13:55:19Z) - Flexible Amortized Variational Inference in qBOLD MRI [56.4324135502282]
Oxygen extraction fraction (OEF) and deoxygenated blood volume (DBV) are more ambiguously determined from the data.
Existing inference methods tend to yield very noisy and underestimated OEF maps, while overestimating DBV.
This work describes a novel probabilistic machine learning approach that can infer plausible distributions of OEF and DBV.
arXiv Detail & Related papers (2022-03-11T10:47:16Z) - Harnessing Perceptual Adversarial Patches for Crowd Counting [92.79051296850405]
Crowd counting is vulnerable to adversarial examples in the physical world.
This paper proposes the Perceptual Adversarial Patch (PAP) generation framework to learn the shared perceptual features between models.
arXiv Detail & Related papers (2021-09-16T13:51:39Z) - Variational Causal Networks: Approximate Bayesian Inference over Causal
Structures [132.74509389517203]
We introduce a parametric variational family modelled by an autoregressive distribution over the space of discrete DAGs.
In experiments, we demonstrate that the proposed variational posterior is able to provide a good approximation of the true posterior.
arXiv Detail & Related papers (2021-06-14T17:52:49Z) - Loss function based second-order Jensen inequality and its application
to particle variational inference [112.58907653042317]
Particle variational inference (PVI) uses an ensemble of models as an empirical approximation for the posterior distribution.
PVI iteratively updates each model with a repulsion force to ensure the diversity of the optimized models.
We derive a novel generalization error bound and show that it can be reduced by enhancing the diversity of models.
arXiv Detail & Related papers (2021-06-09T12:13:51Z) - Learning Disentangled Representations with Latent Variation
Predictability [102.4163768995288]
This paper defines the variation predictability of latent disentangled representations.
Within an adversarial generation process, we encourage variation predictability by maximizing the mutual information between latent variations and corresponding image pairs.
We develop an evaluation metric that does not rely on the ground-truth generative factors to measure the disentanglement of latent representations.
arXiv Detail & Related papers (2020-07-25T08:54:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.