Variational Prediction
- URL: http://arxiv.org/abs/2307.07568v1
- Date: Fri, 14 Jul 2023 18:19:31 GMT
- Title: Variational Prediction
- Authors: Alexander A. Alemi and Ben Poole
- Abstract summary: We present a technique for learning a variational approximation to the posterior predictive distribution using a variational bound.
This approach can provide good predictive distributions without test time marginalization costs.
- Score: 95.00085314353436
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bayesian inference offers benefits over maximum likelihood, but it also comes
with computational costs. Computing the posterior is typically intractable, as
is marginalizing that posterior to form the posterior predictive distribution.
In this paper, we present variational prediction, a technique for directly
learning a variational approximation to the posterior predictive distribution
using a variational bound. This approach can provide good predictive
distributions without test time marginalization costs. We demonstrate
Variational Prediction on an illustrative toy example.
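The trade-off the abstract describes, marginalizing over posterior samples at test time versus fitting a single predictive distribution once, can be sketched on a toy conjugate model. This is not the paper's variational bound: the Bayesian linear regression setup and the moment-matched Gaussian predictive below are illustrative assumptions standing in for the learned variational approximation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Bayesian linear regression: y = w * x + noise, with known noise scale
# and a N(0, 1) prior on w, so the posterior over w is Gaussian in closed form.
sigma = 0.5
x_train = rng.normal(size=20)
w_true = 2.0
y_train = w_true * x_train + sigma * rng.normal(size=20)

prec = 1.0 + x_train @ x_train / sigma**2          # posterior precision
post_mean = (x_train @ y_train / sigma**2) / prec  # posterior mean of w
post_var = 1.0 / prec                              # posterior variance of w

# Test-time marginalization: Monte Carlo average over posterior samples of w.
x_test = 1.5
w_samples = post_mean + np.sqrt(post_var) * rng.normal(size=10_000)
mc_mean = np.mean(w_samples * x_test)
mc_var = np.var(w_samples * x_test) + sigma**2

# A single Gaussian predictive q(y|x) computed once (here by moment matching,
# standing in for a learned variational predictive): no sampling at test time.
q_mean = post_mean * x_test
q_var = post_var * x_test**2 + sigma**2

print(q_mean, mc_mean)  # the two predictive means agree closely
```

In this conjugate toy the exact posterior predictive is itself Gaussian, so the one-shot predictive matches the Monte Carlo marginalization; the paper's contribution is learning such a predictive directly via a variational bound when no closed form exists.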
Related papers
- Progression: an extrapolation principle for regression [0.0]
We propose a novel statistical extrapolation principle.
It assumes a simple relationship between predictors and the response at the boundary of the training predictor samples.
Our semi-parametric method, progression, leverages this extrapolation principle and offers guarantees on the approximation error beyond the training data range.
arXiv Detail & Related papers (2024-10-30T17:29:51Z) - Predictive variational inference: Learn the predictively optimal posterior distribution [1.7648680700685022]
Vanilla variational inference finds an optimal approximation to the Bayesian posterior distribution, but even the exact Bayesian posterior is often not meaningful under model misspecification.
We propose predictive variational inference (PVI): a general inference framework that seeks and samples from an optimal posterior density.
This framework applies to both likelihood-exact and likelihood-free models.
arXiv Detail & Related papers (2024-10-18T19:44:57Z) - Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions.
We propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance.
Our framework is tested empirically over clean and noisy datasets.
arXiv Detail & Related papers (2024-05-29T01:32:17Z) - Invariant Probabilistic Prediction [45.90606906307022]
We show that arbitrary distribution shifts do not, in general, admit invariant and robust probabilistic predictions.
We propose a method to yield invariant probabilistic predictions, called IPP, and study the consistency of the underlying parameters.
arXiv Detail & Related papers (2023-09-18T18:50:24Z) - Predictive Inference with Feature Conformal Prediction [80.77443423828315]
We propose feature conformal prediction, which extends the scope of conformal prediction to semantic feature spaces.
From a theoretical perspective, we demonstrate that feature conformal prediction provably outperforms regular conformal prediction under mild assumptions.
Our approach could be combined with not only vanilla conformal prediction, but also other adaptive conformal prediction methods.
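For background on the baseline this entry extends: vanilla split conformal prediction calibrates residuals of any black-box predictor on held-out data to produce intervals with a finite-sample coverage guarantee. The sketch below is the standard split conformal recipe for regression, not the feature-space variant proposed in the paper; the linear predictor and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic regression data: y = 3x + noise.
n = 200
x = rng.uniform(-2, 2, size=n)
y = 3.0 * x + rng.normal(scale=0.3, size=n)

# Fit any predictor on the first half (a least-squares slope here).
x_fit, y_fit = x[:100], y[:100]
slope = (x_fit @ y_fit) / (x_fit @ x_fit)

def predict(t):
    return slope * t

# Calibration: conformal quantile of absolute residuals on the second half.
resid = np.abs(y[100:] - predict(x[100:]))
alpha = 0.1  # target 90% coverage
level = np.ceil((1 - alpha) * (len(resid) + 1)) / len(resid)
q = np.quantile(resid, level)

# Prediction interval at a new point: the point prediction plus/minus q.
lo, hi = predict(1.0) - q, predict(1.0) + q
```

Feature conformal prediction, per the summary above, moves this calibration step from the output space into a semantic feature space.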
arXiv Detail & Related papers (2022-10-01T02:57:37Z) - Non-Volatile Memory Accelerated Posterior Estimation [3.4256231429537936]
Current machine learning models use only a single learnable parameter combination when making predictions.
We show that through the use of high-capacity persistent storage, models whose posterior distribution was too big to approximate are now feasible.
arXiv Detail & Related papers (2022-02-21T20:25:57Z) - CovarianceNet: Conditional Generative Model for Correct Covariance Prediction in Human Motion Prediction [71.31516599226606]
We present a new method to correctly predict the uncertainty associated with the predicted distribution of future trajectories.
Our approach, CovarianceNet, is based on a Conditional Generative Model with Gaussian latent variables.
arXiv Detail & Related papers (2021-09-07T09:38:24Z) - Short-term prediction of Time Series based on bounding techniques [0.0]
This paper reconsiders the prediction problem in the time series framework using a new non-parametric approach.
The innovation is to consider both deterministic and deterministic-stochastic assumptions in order to obtain the upper bound of the prediction error.
A benchmark is included to illustrate that the proposed predictor can obtain suitable results in a prediction scheme, and can be an interesting alternative method to the classical non-parametric methods.
arXiv Detail & Related papers (2021-01-26T11:27:36Z) - Balance-Subsampled Stable Prediction [55.13512328954456]
We propose a novel balance-subsampled stable prediction (BSSP) algorithm based on the theory of fractional factorial design.
A design-theoretic analysis shows that the proposed method can reduce the confounding effects among predictors induced by the distribution shift.
Numerical experiments on both synthetic and real-world data sets demonstrate that our BSSP algorithm significantly outperforms the baseline methods for stable prediction across unknown test data.
arXiv Detail & Related papers (2020-06-08T07:01:38Z) - Decision-Making with Auto-Encoding Variational Bayes [71.44735417472043]
We show that a posterior approximation distinct from the variational distribution should be used for making decisions.
Motivated by these theoretical results, we propose learning several approximate proposals for the best model.
In addition to toy examples, we present a full-fledged case study of single-cell RNA sequencing.
arXiv Detail & Related papers (2020-02-17T19:23:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences arising from its use.