Variational Inference with Locally Enhanced Bounds for Hierarchical
Models
- URL: http://arxiv.org/abs/2203.04432v1
- Date: Tue, 8 Mar 2022 22:53:43 GMT
- Title: Variational Inference with Locally Enhanced Bounds for Hierarchical
Models
- Authors: Tomas Geffner and Justin Domke
- Abstract summary: We propose a new family of variational bounds for hierarchical models based on the application of tightening methods.
We show that our approach naturally allows the use of subsampling to get unbiased gradients, and that it fully leverages the power of methods that build tighter lower bounds.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hierarchical models represent a challenging setting for inference algorithms.
MCMC methods struggle to scale to large models with many local variables and
observations, and variational inference (VI) may fail to provide accurate
approximations due to the use of simple variational families. Some variational
methods (e.g. importance weighted VI) integrate Monte Carlo methods to give
better accuracy, but these tend to be unsuitable for hierarchical models, as
they do not allow for subsampling and their performance tends to degrade for
high dimensional models. We propose a new family of variational bounds for
hierarchical models, based on the application of tightening methods (e.g.
importance weighting) separately for each group of local random variables. We
show that our approach naturally allows the use of subsampling to get unbiased
gradients, and that it fully leverages the power of methods that build tighter
lower bounds by applying them independently in lower dimensional spaces,
leading to better results and more accurate posterior approximations than
relevant baselines.
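The per-group tightening idea in the abstract can be sketched numerically. Below is a minimal toy illustration (not the authors' code): for a hypothetical hierarchical model with prior z_i ~ N(0, 1) and likelihood x_i | z_i ~ N(z_i, 1), it forms an importance-weighted bound separately for each local variable z_i, and supports an unbiased subsampled (minibatch) estimate of the full bound. All names and the specific model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_mean_exp(a, axis):
    """Numerically stable log of the mean of exp(a) along an axis."""
    m = a.max(axis=axis, keepdims=True)
    return np.squeeze(m, axis=axis) + np.log(np.exp(a - m).mean(axis=axis))

def local_iw_bound(x, m, s, K, idx=None):
    """Per-group importance-weighted bound for the toy model
    z_i ~ N(0, 1), x_i | z_i ~ N(z_i, 1), with q(z_i) = N(m_i, s_i^2).

    Each group i gets its own K-sample bound log (1/K) sum_k w_ik,
    computed in the low-dimensional space of z_i alone. If `idx` is
    given, only those groups are evaluated and the sum is rescaled,
    giving an unbiased minibatch estimate of the full bound.
    """
    n = x.shape[0]
    if idx is None:
        idx = np.arange(n)
    xb, mb, sb = x[idx], m[idx], s[idx]
    # K importance samples per group, drawn from q(z_i).
    z = mb[:, None] + sb[:, None] * rng.standard_normal((len(idx), K))
    # log p(z_i) + log p(x_i | z_i) under the toy Gaussian model.
    log_p = -0.5 * (z**2 + (xb[:, None] - z)**2) - np.log(2 * np.pi)
    # log q(z_i) under the Gaussian variational family.
    log_q = (-0.5 * ((z - mb[:, None]) / sb[:, None])**2
             - np.log(sb[:, None]) - 0.5 * np.log(2 * np.pi))
    per_group = log_mean_exp(log_p - log_q, axis=1)  # one bound per group
    return n / len(idx) * per_group.sum()
```

Because the tightening is applied independently per group, each K-sample estimate lives in the (here one-dimensional) space of a single local variable rather than the joint space of all of them, which is the dimensionality advantage the abstract refers to.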
Related papers
- Aggregation Weighting of Federated Learning via Generalization Bound Estimation
Federated Learning (FL) typically aggregates client model parameters using a weighting approach determined by sample proportions.
We replace the aforementioned weighting method with a new strategy that considers the generalization bounds of each local model.
arXiv Detail & Related papers (2023-11-10T08:50:28Z)
- Free-Form Variational Inference for Gaussian Process State-Space Models
We propose a new method for inference in Bayesian GPSSMs.
Our method is based on free-form variational inference via inducing Hamiltonian Monte Carlo.
We show that our approach can learn transition dynamics and latent states more accurately than competing methods.
arXiv Detail & Related papers (2023-02-20T11:34:16Z)
- PAC Generalization via Invariant Representations
We consider the notion of $\epsilon$-approximate invariance in a finite-sample setting.
Inspired by PAC learning, we obtain finite-sample out-of-distribution generalization guarantees.
Our results show bounds that do not scale with the ambient dimension when intervention sites are restricted to lie in a constant-size subset of in-degree-bounded nodes.
arXiv Detail & Related papers (2022-05-30T15:50:14Z)
- Scaling Structured Inference with Randomization
We propose a family of randomized dynamic programming (RDP) algorithms for scaling structured models to tens of thousands of latent states.
Our method is widely applicable to classical DP-based inference.
It is also compatible with automatic differentiation, so it can be integrated seamlessly with neural networks.
arXiv Detail & Related papers (2021-12-07T11:26:41Z)
- A Variational Inference Approach to Inverse Problems with Gamma Hyperpriors
This paper introduces a variational iterative alternating scheme for hierarchical inverse problems with gamma hyperpriors.
The proposed variational inference approach yields accurate reconstruction, provides meaningful uncertainty quantification, and is easy to implement.
arXiv Detail & Related papers (2021-11-26T06:33:29Z)
- Amortized Variational Inference for Simple Hierarchical Models
It is difficult to use subsampling with variational inference in hierarchical models since the number of local latent variables scales with the dataset.
This paper suggests an amortized approach where shared parameters simultaneously represent all local distributions.
It is also dramatically faster than using a structured variational distribution.
arXiv Detail & Related papers (2021-11-04T20:29:12Z)
- Probabilistic Circuits for Variational Inference in Discrete Graphical Models
Inference in discrete graphical models with variational methods is difficult.
Many sampling-based methods have been proposed for estimating the evidence lower bound (ELBO).
We propose a new approach that leverages the tractability of probabilistic circuit models, such as Sum-Product Networks (SPNs).
We show that selective-SPNs are suitable as an expressive variational distribution, and prove that when the log-density of the target model is a polynomial, the corresponding ELBO can be computed analytically.
arXiv Detail & Related papers (2020-10-22T05:04:38Z)
- Generalized Matrix Factorization: efficient algorithms for fitting generalized linear latent variable models to large data arrays
Generalized Linear Latent Variable Models (GLLVMs) generalize classical factor models to non-Gaussian responses.
Current algorithms for estimating model parameters in GLLVMs require intensive computation and do not scale to large datasets.
We propose a new approach for fitting GLLVMs to high-dimensional datasets, based on approximating the model using penalized quasi-likelihood.
arXiv Detail & Related papers (2020-10-06T04:28:19Z)
- Stochastic spectral embedding
We propose a novel sequential adaptive surrogate modeling method based on "stochastic spectral embedding" (SSE).
We show how the method compares favorably against state-of-the-art sparse chaos expansions on a set of models with different complexity and input dimension.
arXiv Detail & Related papers (2020-04-09T11:00:07Z)
- CATVI: Conditional and Adaptively Truncated Variational Inference for Hierarchical Bayesian Nonparametric Models
We propose the conditional and adaptively truncated variational inference method (CATVI).
CATVI enjoys several advantages over traditional methods, including a smaller divergence between variational and true posteriors.
Empirical studies on three large datasets reveal that CATVI applied in Bayesian nonparametric topic models substantially outperforms competing models.
arXiv Detail & Related papers (2020-01-13T19:27:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.