Posterior Temperature Optimization in Variational Inference
- URL: http://arxiv.org/abs/2106.07533v1
- Date: Fri, 11 Jun 2021 13:01:28 GMT
- Title: Posterior Temperature Optimization in Variational Inference
- Authors: Max-Heinrich Laves, Malte Tölle, Alexander Schlaefer
- Abstract summary: Cold posteriors have been reported to perform better in practice in the context of Bayesian deep learning.
In this work, we first derive the ELBO for a fully tempered posterior in mean-field variational inference.
We then use Bayesian optimization to automatically find the optimal posterior temperature.
- Score: 69.50862982117127
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cold posteriors have been reported to perform better in practice in the
context of Bayesian deep learning (Wenzel et al., 2020). In variational
inference, it is common to employ only a partially tempered posterior by
scaling the complexity term in the log-evidence lower bound (ELBO). In this
work, we first derive the ELBO for a fully tempered posterior in mean-field
variational inference and subsequently use Bayesian optimization to
automatically find the optimal posterior temperature. Choosing an appropriate
posterior temperature leads to better predictive performance and improved
uncertainty calibration, which we demonstrate for the task of denoising medical
X-ray images.
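For orientation, the sketch below shows one consistent way to write such a fully tempered objective; the notation (weights $w$, data $D$, temperature $T$) is assumed here rather than quoted from the paper.

```latex
% Hedged sketch: a fully tempered posterior and its ELBO, under the
% assumption that tempering raises both likelihood and prior to 1/T.
% Tempered posterior over weights w given data D at temperature T:
%   p_T(w \mid D) \propto \bigl( p(D \mid w)\, p(w) \bigr)^{1/T}
% Applying Jensen's inequality with a mean-field q(w) to
% \log Z_T = \log \int ( p(D \mid w)\, p(w) )^{1/T} \, dw yields
\mathcal{L}_T(q)
  = \frac{1}{T}\,\mathbb{E}_{q(w)}\bigl[\log p(D \mid w)\bigr]
  + \frac{1}{T}\,\mathbb{E}_{q(w)}\bigl[\log p(w)\bigr]
  + \mathbb{H}\bigl[q(w)\bigr].
```

At $T = 1$ this collapses to the familiar ELBO $\mathbb{E}_q[\log p(D \mid w)] - \mathrm{KL}(q \,\|\, p)$; scaling only the KL (complexity) term instead gives the partially tempered objective the abstract contrasts against.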
Related papers
- Predictive variational inference: Learn the predictively optimal posterior distribution [1.7648680700685022]
Vanilla variational inference finds an optimal approximation to the Bayesian posterior distribution, but even the exact Bayesian posterior is often not meaningful under model misspecification.
We propose predictive variational inference (PVI): a general inference framework that seeks and samples from an optimal posterior density.
This framework applies to both likelihood-exact and likelihood-free models.
arXiv Detail & Related papers (2024-10-18T19:44:57Z)
- Temperature Optimization for Bayesian Deep Learning [9.610060788662972]
We propose a data-driven approach to select the temperature that maximizes test log-predictive density.
We empirically demonstrate that our method performs comparably to grid search, at a fraction of the cost.
arXiv Detail & Related papers (2024-10-08T07:32:22Z)
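As a rough illustration of that selection criterion, the sketch below grid-searches a temperature that maximizes held-out log-predictive density; `fit_and_predict` is a hypothetical stand-in for training or inferring the tempered model, not an API from the paper.

```python
import numpy as np

def log_predictive_density(probs, labels):
    """Mean held-out log-predictive density of per-class probabilities."""
    return np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def select_temperature(fit_and_predict, val_labels, temperatures):
    """Return the temperature with the best validation log-predictive density.

    fit_and_predict(T) is a placeholder: it should train/infer the tempered
    model at temperature T and return validation class probabilities.
    """
    scores = [log_predictive_density(fit_and_predict(T), val_labels)
              for T in temperatures]
    return temperatures[int(np.argmax(scores))]

# Temperatures are typically searched on a log scale.
grid = np.logspace(-4, 1, num=12)
```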
- Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability [50.44439018155837]
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error we enable end-to-end backpropagation.
It is directly applicable to existing computational pipelines, allowing reliable black-box posterior inference.
arXiv Detail & Related papers (2023-10-20T10:20:45Z)
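To make the idea concrete, here is a minimal sketch of one such relaxation, assuming a one-dimensional posterior whose CDF can be evaluated at the true parameter; the sigmoid width `tau` and the penalty form are illustrative choices, not the paper's exact formulation.

```python
import torch

def soft_coverage_penalty(pit_values, num_levels=16, tau=0.05):
    """Relaxed expected-coverage calibration error (illustrative sketch).

    pit_values: u_i = F(theta_true_i | x_i), the posterior CDF evaluated at
    the true parameter. For a calibrated posterior these are Uniform(0, 1),
    so empirical coverage at each credible level alpha should equal alpha.
    The hard indicator 1[u <= alpha] is relaxed with a sigmoid of width tau,
    which makes the penalty differentiable end to end.
    """
    alphas = torch.linspace(0.05, 0.95, num_levels, device=pit_values.device)
    # soft_cov[k] approximates empirical coverage at credible level alphas[k]
    soft_cov = torch.sigmoid((alphas[None, :] - pit_values[:, None]) / tau).mean(dim=0)
    return ((soft_cov - alphas) ** 2).mean()

# Hypothetical use inside a training loop:
#   loss = nll + lam * soft_coverage_penalty(pit_values)
```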
- The fine print on tempered posteriors [4.503508912578133]
We conduct a detailed investigation of tempered posteriors and uncover a number of crucial and previously unspecified points.
Contrary to previous works, we finally show through a PAC-Bayesian analysis that the temperature $\lambda$ cannot be seen as simply fixing a misspecified prior or likelihood.
arXiv Detail & Related papers (2023-09-11T08:21:42Z)
- Sample-dependent Adaptive Temperature Scaling for Improved Calibration [95.7477042886242]
A popular post-hoc approach to compensating for neural networks that are confidently wrong is to perform temperature scaling.
We propose to predict a different temperature value for each input, allowing us to adjust the mismatch between confidence and accuracy.
We test our method on the ResNet50 and WideResNet28-10 architectures using the CIFAR10/100 and Tiny-ImageNet datasets.
arXiv Detail & Related papers (2022-07-13T14:13:49Z)
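A minimal sketch of the idea follows, with an assumed architecture (a small MLP over the logits) rather than the paper's exact model:

```python
import torch
import torch.nn as nn

class SampleAdaptiveTemperature(nn.Module):
    """Per-input temperature scaling (illustrative sketch).

    A small network maps each logit vector to its own positive temperature,
    which rescales the logits before the softmax. Dividing by a positive
    scalar never changes the argmax, so accuracy is preserved.
    """
    def __init__(self, num_classes, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, logits):
        temperature = nn.functional.softplus(self.net(logits)) + 1e-3  # T_i > 0
        return logits / temperature  # one temperature per sample
```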
- Sample-Efficient Optimisation with Probabilistic Transformer Surrogates [66.98962321504085]
This paper investigates the feasibility of employing state-of-the-art probabilistic transformers in Bayesian optimisation.
We observe two drawbacks stemming from their training procedure and loss definition, hindering their direct deployment as proxies in black-box optimisation.
We introduce two components: 1) a BO-tailored training prior supporting non-uniformly distributed points, and 2) a novel approximate posterior regulariser trading off accuracy and input sensitivity to filter favourable stationary points for improved predictive performance.
arXiv Detail & Related papers (2022-05-27T11:13:17Z)
- Posterior temperature optimized Bayesian models for inverse problems in medical imaging [59.82184400837329]
We present an unsupervised Bayesian approach to inverse problems in medical imaging using mean-field variational inference with a fully tempered posterior.
We show that an optimized posterior temperature leads to improved accuracy and uncertainty estimation.
Our source code is publicly available at github.com/Cardio-AI/mfvi-dip-mia.
arXiv Detail & Related papers (2022-02-02T12:16:33Z)
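For flavour, here is a sketch of Bayesian optimization over the posterior temperature using scikit-optimize's `gp_minimize`; the helper `evaluate_tempered_model` and the search range are hypothetical, standing in for whatever training/evaluation loop and BO tooling the authors actually used.

```python
import numpy as np
from skopt import gp_minimize  # scikit-optimize GP-based Bayesian optimization

def evaluate_tempered_model(temperature):
    """Hypothetical stand-in: train the tempered variational model and
    return a validation score (e.g. PSNR on held-out denoising targets).
    A smooth dummy function is used here so the sketch runs end to end."""
    return -(np.log10(temperature) + 2.0) ** 2

def objective(params):
    # Search in log10-space, since useful temperatures span orders of magnitude.
    temperature = 10.0 ** params[0]
    return -evaluate_tempered_model(temperature)  # gp_minimize minimizes

result = gp_minimize(objective, dimensions=[(-4.0, 1.0)], n_calls=25,
                     random_state=0)
best_temperature = 10.0 ** result.x[0]
```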
- Parameterized Temperature Scaling for Boosting the Expressive Power in Post-Hoc Uncertainty Calibration [57.568461777747515]
We introduce a novel calibration method, Parametrized Temperature Scaling (PTS).
We demonstrate that the performance of accuracy-preserving state-of-the-art post-hoc calibrators is limited by their intrinsic expressive power.
We show with extensive experiments that our novel accuracy-preserving approach consistently outperforms existing algorithms across a large number of model architectures, datasets and metrics.
arXiv Detail & Related papers (2021-02-24T10:18:30Z)
- Cold Posteriors and Aleatoric Uncertainty [32.341379426923105]
Recent work has observed that one can outperform exact inference in Bayesian neural networks by tuning the "temperature" of the posterior on a validation set.
We argue that commonly used priors can significantly overestimate the aleatoric uncertainty in the labels on many classification datasets.
arXiv Detail & Related papers (2020-07-31T18:37:31Z)