PAVI: Plate-Amortized Variational Inference
- URL: http://arxiv.org/abs/2206.05111v1
- Date: Fri, 10 Jun 2022 13:55:19 GMT
- Title: PAVI: Plate-Amortized Variational Inference
- Authors: Louis Rouillard (PARIETAL, Inria), Thomas Moreau (PARIETAL), Demian
Wassermann (PARIETAL)
- Abstract summary: Variational Inference is challenging for large population studies where thousands of measurements are performed over a cohort of hundreds of subjects.
In this work, we design structured VI families that can efficiently tackle large population studies.
We name this concept plate amortization, and illustrate the powerful synergies it entails, resulting in large-scale hierarchical variational distributions that are expressive, parsimoniously parameterized, and orders of magnitude faster to train.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given some observed data and a probabilistic generative model, Bayesian
inference aims at obtaining the distribution of a model's latent parameters
that could have yielded the data. This task is challenging for large population
studies where thousands of measurements are performed over a cohort of hundreds
of subjects, resulting in a massive latent parameter space. This large
cardinality renders off-the-shelf Variational Inference (VI) computationally
impractical. In this work, we design structured VI families that can
efficiently tackle large population studies. To this end, our main idea is to
share the parameterization and learning across the different i.i.d. variables
in a generative model, symbolized by the model's plates. We name this concept
plate amortization, and illustrate the powerful synergies it entails, resulting
in large-scale hierarchical variational distributions that are expressive,
parsimoniously parameterized, and orders of magnitude faster to train. We
illustrate the practical utility of PAVI through a challenging Neuroimaging
example featuring a million latent parameters, demonstrating a significant step
towards scalable and expressive Variational Inference.
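To make plate amortization concrete, below is a minimal PyTorch sketch, not the authors' implementation: the class name, layer sizes, and encoding dimension are illustrative assumptions. All subjects on a plate share one conditional network, and each subject contributes only a small learned encoding, so the parameter count stops growing with the cohort size.

```python
import torch
import torch.nn as nn

class PlateAmortizedGaussian(nn.Module):
    """Minimal sketch of plate amortization (illustrative, not the paper's code):
    every subject on the plate shares one conditional network; only a small
    per-subject encoding differs, so parameters no longer scale with the cohort."""

    def __init__(self, n_subjects, latent_dim, encoding_dim=8):
        super().__init__()
        self.encodings = nn.Embedding(n_subjects, encoding_dim)  # light, per-subject
        self.shared = nn.Sequential(                             # heavy, shared
            nn.Linear(encoding_dim, 64), nn.Tanh(),
            nn.Linear(64, 2 * latent_dim),                       # mean and log-scale
        )

    def forward(self, subject_ids):
        mean, log_scale = self.shared(self.encodings(subject_ids)).chunk(2, dim=-1)
        q = torch.distributions.Normal(mean, log_scale.exp())
        z = q.rsample()                          # reparameterized sample
        return z, q.log_prob(z).sum(-1)          # z and per-subject log q(z)

# Stochastic training over the plate: each step touches a mini-batch of
# subjects, so the per-step cost is independent of the full cohort size.
q = PlateAmortizedGaussian(n_subjects=1000, latent_dim=16)
z, log_q = q(torch.randint(0, 1000, (32,)))
```

The split mirrors the abstract's claims: expressiveness comes from the shared network, parsimony from the small per-subject encodings, and training speed from mini-batching over the plate.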
Related papers
- Variational Learning of Gaussian Process Latent Variable Models through Stochastic Gradient Annealed Importance Sampling [22.256068524699472]
In this work, we propose an Annealed Importance Sampling (AIS) approach to address the challenges of variational learning in Gaussian Process Latent Variable Models.
We combine the strengths of Sequential Monte Carlo samplers and VI to explore a wider range of posterior distributions and gradually approach the target distribution.
Experimental results on both toy and image datasets demonstrate that our method achieves tighter variational bounds, higher log-likelihoods, and more robust convergence than state-of-the-art methods. (A generic sketch of the annealing idea follows this entry.)
arXiv Detail & Related papers (2024-08-13T08:09:05Z)
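Since the summary above only names the ingredients, here is a generic, self-contained Annealed Importance Sampling sketch on a 1-D toy target. It illustrates the annealing idea only; the functions and constants are our assumptions, not the paper's GPLVM-specific sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_q(z):  # proposal: standard normal
    return -0.5 * z**2 - 0.5 * np.log(2 * np.pi)

def log_p(z):  # unnormalized toy target: two well-separated modes
    return np.logaddexp(-0.5 * (z - 2.0)**2, -0.5 * (z + 2.0)**2)

def log_pb(z, b):  # geometric path between proposal and target
    return (1.0 - b) * log_q(z) + b * log_p(z)

def ais(n_particles=1000, n_steps=50, step=0.5):
    betas = np.linspace(0.0, 1.0, n_steps + 1)
    z = rng.standard_normal(n_particles)  # start from the proposal
    log_w = np.zeros(n_particles)
    for b0, b1 in zip(betas[:-1], betas[1:]):
        # Incremental importance weight between consecutive temperatures.
        log_w += (b1 - b0) * (log_p(z) - log_q(z))
        # One Metropolis move per temperature keeps particles on track.
        prop = z + step * rng.standard_normal(n_particles)
        accept = np.log(rng.uniform(size=n_particles)) < log_pb(prop, b1) - log_pb(z, b1)
        z = np.where(accept, prop, z)
    # Estimate of the log normalizing constant of the target (q is normalized).
    return np.logaddexp.reduce(log_w) - np.log(n_particles)

print(ais())
```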
- Revealing Multimodal Contrastive Representation Learning through Latent Partial Causal Models [85.67870425656368]
We introduce a unified causal model specifically designed for multimodal data.
We show that multimodal contrastive representation learning excels at identifying latent coupled variables.
Experiments demonstrate the robustness of our findings, even when the assumptions are violated.
arXiv Detail & Related papers (2024-02-09T07:18:06Z)
- Forward $\chi^2$ Divergence Based Variational Importance Sampling [2.841087763205822]
We introduce a novel variational importance sampling (VIS) approach that directly estimates and maximizes the log-likelihood.
We apply VIS to various popular latent variable models, including mixture models, variational auto-encoders, and partially observable generalized linear models. (A sketch of the underlying importance-sampling estimator follows this entry.)
arXiv Detail & Related papers (2023-11-04T21:46:28Z)
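As one concrete reading of "directly estimates and maximizes the log-likelihood", here is the importance-sampling estimator of the log-likelihood that such methods build on; the helper names and the toy Gaussian model are our assumptions, not the authors' code. Fitting q by minimizing the forward $\chi^2$ divergence to the posterior reduces the variance of exactly this estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_likelihood_is(x, log_joint, sample_q, log_q, n=1000):
    # log p(x) ~ logsumexp_i[ log p(x, z_i) - log q(z_i | x) ] - log n
    z = sample_q(x, n)
    log_w = log_joint(x, z) - log_q(x, z)
    return np.logaddexp.reduce(log_w) - np.log(n)

# Toy check: z ~ N(0, 1), x | z ~ N(z, 1), so p(x) = N(x; 0, 2) exactly.
log_joint = lambda x, z: -0.5 * z**2 - 0.5 * (x - z)**2 - np.log(2 * np.pi)
# Here q is the exact posterior N(x/2, 1/2), so the weights are constant.
sample_q = lambda x, n: x / 2 + np.sqrt(0.5) * rng.standard_normal(n)
log_q = lambda x, z: -(z - x / 2)**2 - 0.5 * np.log(np.pi)
print(log_likelihood_is(1.3, log_joint, sample_q, log_q))  # ~ -1.69
```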
- PAVI: Plate-Amortized Variational Inference [55.975832957404556]
Inference is challenging for large population studies where millions of measurements are performed over a cohort of hundreds of subjects.
This large cardinality renders off-the-shelf Variational Inference (VI) computationally impractical.
In this work, we design structured VI families that efficiently tackle large population studies.
arXiv Detail & Related papers (2023-08-30T13:22:20Z)
- Harnessing Perceptual Adversarial Patches for Crowd Counting [92.79051296850405]
Crowd counting is vulnerable to adversarial examples in the physical world.
This paper proposes the Perceptual Adversarial Patch (PAP) generation framework to learn the perceptual features shared between models.
arXiv Detail & Related papers (2021-09-16T13:51:39Z)
- Variational Causal Networks: Approximate Bayesian Inference over Causal Structures [132.74509389517203]
We introduce a parametric variational family modelled by an autoregressive distribution over the space of discrete DAGs.
In experiments, we demonstrate that the proposed variational posterior provides a good approximation of the true posterior. (A minimal autoregressive-DAG sketch follows this entry.)
arXiv Detail & Related papers (2021-06-14T17:52:49Z)
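Below is a minimal sketch of an autoregressive distribution over DAGs, under a strong simplifying assumption: a fixed node ordering, so that sampling only strictly lower-triangular adjacency entries guarantees acyclicity. The class and architecture are hypothetical, not the paper's model.

```python
import torch
import torch.nn as nn

class AutoregressiveDagSampler(nn.Module):
    """Sketch of an autoregressive distribution over DAGs (hypothetical class,
    not the paper's model). With a fixed node ordering, sampling only the
    strictly lower-triangular adjacency entries guarantees acyclicity."""

    def __init__(self, n_nodes, hidden=64):
        super().__init__()
        self.n_edges = n_nodes * (n_nodes - 1) // 2
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # logit for the next edge decision

    def sample(self, batch=1):
        edges, log_q = [], 0.0
        x, h = torch.zeros(batch, 1, 1), None   # dummy start token
        for _ in range(self.n_edges):            # one Bernoulli per edge slot
            out, h = self.rnn(x, h)
            dist = torch.distributions.Bernoulli(logits=self.head(out[:, -1]))
            e = dist.sample()
            log_q = log_q + dist.log_prob(e).sum(-1)
            x = e.unsqueeze(1)                   # condition on previous edges
            edges.append(e)
        return torch.cat(edges, dim=-1), log_q   # edge indicators and log q(G)

sampler = AutoregressiveDagSampler(n_nodes=5)
graphs, log_prob = sampler.sample(batch=4)       # 4 DAGs with their log-probs
```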
- Probabilistic Circuits for Variational Inference in Discrete Graphical Models [101.28528515775842]
Variational inference in discrete graphical models is difficult.
Many sampling-based methods have been proposed for estimating the Evidence Lower Bound (ELBO).
We propose a new approach that leverages the tractability of probabilistic circuit models, such as Sum-Product Networks (SPNs).
We show that selective SPNs are suitable as an expressive variational distribution, and prove that when the log-density of the target model is a polynomial, the corresponding ELBO can be computed analytically. (A one-step expansion of this claim follows this entry.)
arXiv Detail & Related papers (2020-10-22T05:04:38Z)
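Spelling out the analytic-ELBO claim above (our one-step expansion, not the paper's derivation), with unnormalized target $\tilde p$ and variational distribution $q$:

```latex
\mathrm{ELBO}(q) = \mathbb{E}_{x \sim q}\!\left[\log \tilde p(x)\right] + \mathcal{H}(q),
\qquad
\log \tilde p(x) = \sum_{\alpha} c_\alpha\, x^{\alpha}
\;\Longrightarrow\;
\mathbb{E}_{q}\!\left[\log \tilde p(x)\right] = \sum_{\alpha} c_\alpha\, \mathbb{E}_{q}\!\left[x^{\alpha}\right].
```

If $q$ is a selective SPN, each moment $\mathbb{E}_q[x^{\alpha}]$ and the entropy $\mathcal{H}(q)$ can be evaluated by circuit passes rather than sampling, which, per the claim above, is what makes the bound analytic.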
- Learning Disentangled Representations with Latent Variation Predictability [102.4163768995288]
This paper defines the variation predictability of latent disentangled representations.
Within an adversarial generation process, we encourage variation predictability by maximizing the mutual information between latent variations and corresponding image pairs.
We develop an evaluation metric that does not rely on the ground-truth generative factors to measure the disentanglement of latent representations.
arXiv Detail & Related papers (2020-07-25T08:54:26Z)
- Learning Invariances for Interpretability using Supervised VAE [0.0]
We learn model invariances as a means of interpreting a model.
We propose a supervised form of variational auto-encoders (VAEs).
We show how, by combining our model with feature attribution methods, it is possible to reach a more fine-grained understanding of the model's decision process.
arXiv Detail & Related papers (2020-07-15T10:14:16Z)