Surrogate Likelihoods for Variational Annealed Importance Sampling
- URL: http://arxiv.org/abs/2112.12194v1
- Date: Wed, 22 Dec 2021 19:49:45 GMT
- Title: Surrogate Likelihoods for Variational Annealed Importance Sampling
- Authors: Martin Jankowiak, Du Phan
- Abstract summary: We introduce a surrogate likelihood that can be learned jointly with other variational parameters.
We show that our method performs well in practice and that it is well-suited for black-box inference in probabilistic programming frameworks.
- Score: 11.144915453864854
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Variational inference is a powerful paradigm for approximate Bayesian
inference with a number of appealing properties, including support for model
learning and data subsampling. By contrast, MCMC methods like Hamiltonian
Monte Carlo do not share these properties but remain attractive since, unlike
parametric methods, MCMC is asymptotically unbiased. For these reasons,
researchers have sought to combine the strengths of both classes of algorithms,
with recent approaches coming closer to realizing this vision in practice.
However, supporting data subsampling in these hybrid methods can be a
challenge, a shortcoming that we address by introducing a surrogate likelihood
that can be learned jointly with other variational parameters. We argue
theoretically that the resulting algorithm permits the user to make an
intuitive trade-off between inference fidelity and computational cost. In an
extensive empirical comparison we show that our method performs well in
practice and that it is well-suited for black-box inference in probabilistic
programming frameworks.
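The core idea is that the expensive full-data log-likelihood inside the annealing path is replaced by a cheap, learnable surrogate, so the MCMC transitions between annealing steps never touch the full dataset. The sketch below is a rough, self-contained illustration of that idea under several illustrative assumptions (a quadratic surrogate parameterization, random-walk Metropolis transitions, and a final minibatch-based correction); it is not the authors' algorithm or their probabilistic-programming implementation, and in the actual method the surrogate parameters would be learned jointly with the variational parameters by optimizing the resulting bound.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: Bayesian logistic regression.
#   prior       p(z)     = N(0, I)
#   likelihood  p(y_n|z) = Bernoulli(sigmoid(x_n . z)),  n = 1..N
N, D = 10_000, 5
X = rng.normal(size=(N, D))
z_true = rng.normal(size=D)
y = (rng.uniform(size=N) < 1.0 / (1.0 + np.exp(-X @ z_true))).astype(float)

def log_prior(z):
    return -0.5 * np.sum(z ** 2)

def minibatch_log_lik(z, idx):
    """Unbiased subsampled estimate of the full-data log-likelihood."""
    logits = X[idx] @ z
    return (N / len(idx)) * np.sum(y[idx] * logits - np.log1p(np.exp(logits)))

# Surrogate log-likelihood (hypothetical quadratic parameterization): its cost
# does not depend on N. In the paper's method the surrogate parameters would be
# learned jointly with the variational parameters; here they are fixed.
psi_a, psi_b = rng.normal(size=D), np.ones(D)

def surrogate_log_lik(z):
    return psi_a @ z - 0.5 * np.sum(psi_b * z ** 2)

def ais_log_weight(K=20, mh_steps=5, step=0.3, batch=256):
    """Annealed importance sampling along pi_k(z) ~ p(z) * exp(beta_k * surrogate)."""
    betas = np.linspace(0.0, 1.0, K + 1)
    z = rng.normal(size=D)               # exact sample from pi_0 = prior
    log_w = 0.0
    for k in range(1, K + 1):
        # Standard AIS weight increment, evaluated with the cheap surrogate.
        log_w += (betas[k] - betas[k - 1]) * surrogate_log_lik(z)
        # Random-walk Metropolis targeting pi_k; no full-data evaluations here.
        for _ in range(mh_steps):
            z_prop = z + step * rng.normal(size=D)
            log_acc = (log_prior(z_prop) + betas[k] * surrogate_log_lik(z_prop)
                       - log_prior(z) - betas[k] * surrogate_log_lik(z))
            if np.log(rng.uniform()) < log_acc:
                z = z_prop
    # Final correction from the surrogate target to (a minibatch estimate of)
    # the true likelihood; illustrative only -- the paper constructs and
    # analyzes its bound more carefully than this.
    idx = rng.choice(N, size=batch, replace=False)
    log_w += minibatch_log_lik(z, idx) - surrogate_log_lik(z)
    return log_w, z

log_w, z_final = ais_log_weight()
print("AIS log-weight:", log_w, "final sample:", z_final)
```

Because every transition uses only the surrogate, the per-step cost is independent of N; roughly speaking, the fidelity/cost trade-off mentioned in the abstract is then governed by how expressive the surrogate is and how many annealing and MCMC steps are used.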
Related papers
- Efficient Fairness-Performance Pareto Front Computation [51.558848491038916]
We show that optimal fair representations possess several useful structural properties.
We then show that these approximation problems can be solved efficiently via concave programming methods.
arXiv Detail & Related papers (2024-09-26T08:46:48Z) - Robust Inference of Dynamic Covariance Using Wishart Processes and Sequential Monte Carlo [2.6347238599620115]
We introduce a Sequential Monte Carlo (SMC) sampler for the Wishart process.
We show that SMC sampling results in the most robust estimates and out-of-sample predictions of dynamic covariance.
We demonstrate the practical applicability of our proposed approach on a dataset of clinical depression.
arXiv Detail & Related papers (2024-06-07T09:48:11Z) - Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC performs both parameter estimation and particle proposal adaptation efficiently and entirely on-the-fly.
arXiv Detail & Related papers (2023-12-19T21:45:38Z) - Robust probabilistic inference via a constrained transport metric [8.85031165304586]
We offer a novel alternative by constructing an exponentially tilted empirical likelihood carefully designed to concentrate near a parametric family of distributions.
The proposed approach finds applications in a wide variety of robust inference problems, where we intend to perform inference on the parameters associated with the centering distribution.
We demonstrate superior performance of our methodology when compared against state-of-the-art robust Bayesian inference methods.
arXiv Detail & Related papers (2023-03-17T16:10:06Z) - Piecewise Deterministic Markov Processes for Bayesian Neural Networks [20.865775626533434]
Inference on modern Bayesian Neural Networks (BNNs) often relies on a variational inference treatment, which imposes frequently violated assumptions about independence and the form of the posterior.
New Piecewise Deterministic Markov Process (PDMP) samplers permit subsampling, though they introduce model-specific inhomogeneous Poisson processes (IPPs) that are difficult to sample from.
This work introduces a new generic and adaptive thinning scheme for sampling from IPPs, and demonstrates how this approach can accelerate the application of PDMPs for inference in BNNs.
arXiv Detail & Related papers (2023-02-17T06:38:16Z) - Evaluating Sensitivity to the Stick-Breaking Prior in Bayesian Nonparametrics [85.31247588089686]
We show that variational Bayesian methods can yield sensitivities with respect to parametric and nonparametric aspects of Bayesian models.
We provide both theoretical and empirical support for our variational approach to Bayesian sensitivity analysis.
arXiv Detail & Related papers (2021-07-08T03:40:18Z) - MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data (a minimal sketch of the underlying ratio-estimation idea appears after this list).
arXiv Detail & Related papers (2021-06-03T12:59:16Z) - Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the difficult nature of the one-class problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z) - Learning the Truth From Only One Side of the Story [58.65439277460011]
We focus on generalized linear models and show that without adjusting for this sampling bias, the model may converge suboptimally or even fail to converge to the optimal solution.
We propose an adaptive approach that comes with theoretical guarantees and show that it outperforms several existing methods empirically.
arXiv Detail & Related papers (2020-06-08T18:20:28Z) - Scaling Bayesian inference of mixed multinomial logit models to very large datasets [9.442139459221785]
We propose an Amortized Variational Inference approach that leverages backpropagation, automatic differentiation and GPU-accelerated computation.
We show how normalizing flows can be used to increase the flexibility of the variational posterior approximations.
arXiv Detail & Related papers (2020-04-11T15:30:47Z)
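As a side note on the MINIMALIST entry above: amortized likelihood-to-evidence ratios of that kind are commonly estimated by training a classifier to distinguish joint samples (theta, x) from pairs drawn from the product of marginals; the classifier's logit then approximates log p(x|theta) - log p(x), and its expectation under the joint is a mutual information. The sketch below illustrates that generic classifier-based construction on a Gaussian toy problem; it is not the MINIMALIST objective itself, and the simulator, features, and sklearn classifier are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy simulator: theta ~ N(0, 1), x | theta ~ N(theta, 1).
def simulate(n):
    theta = rng.normal(size=n)
    x = theta + rng.normal(size=n)
    return theta, x

# Joint samples (label 1) vs. marginal/shuffled samples (label 0).
n = 20_000
theta, x = simulate(n)
theta_marg = rng.permutation(theta)          # breaks the (theta, x) pairing

pairs = np.concatenate([np.stack([theta, x], 1),
                        np.stack([theta_marg, x], 1)])
labels = np.concatenate([np.ones(n), np.zeros(n)])

# A flexible classifier would normally be used; quadratic features plus
# logistic regression suffice for this Gaussian toy problem, where the true
# log-ratio is quadratic in (theta, x).
feats = np.column_stack([pairs, pairs ** 2, pairs[:, 0] * pairs[:, 1]])
clf = LogisticRegression(max_iter=1000).fit(feats, labels)

def log_ratio(theta_val, x_val):
    """Classifier logit ~= log p(x|theta) - log p(x) at the optimum."""
    f = np.array([[theta_val, x_val, theta_val ** 2, x_val ** 2,
                   theta_val * x_val]])
    return clf.decision_function(f)[0]

# Analytic reference for this toy model:
#   log p(x|theta) - log p(x) = -0.5*(x - theta)^2 + 0.25*x^2 + 0.5*log(2)
print("estimated:", log_ratio(0.5, 1.0))
print("analytic: ", -0.5 * (1.0 - 0.5) ** 2 + 0.25 * 1.0 ** 2 + 0.5 * np.log(2.0))
```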
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.