Meta-Uncertainty in Bayesian Model Comparison
- URL: http://arxiv.org/abs/2210.07278v1
- Date: Thu, 13 Oct 2022 18:10:29 GMT
- Title: Meta-Uncertainty in Bayesian Model Comparison
- Authors: Marvin Schmitt, Stefan T. Radev and Paul-Christian Bürkner
- Abstract summary: We propose a fully probabilistic framework for quantifying meta-uncertainty.
We demonstrate the utility of the proposed method in the context of conjugate Bayesian regression, likelihood-based inference with Markov chain Monte Carlo, and simulation-based inference with neural networks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Bayesian model comparison (BMC) offers a principled probabilistic approach to
study and rank competing models. In standard BMC, we construct a discrete
probability distribution over the set of possible models, conditional on the
observed data of interest. These posterior model probabilities (PMPs) are
measures of uncertainty, but, when derived from a finite number of
observations, are also uncertain themselves. In this paper, we conceptualize
distinct levels of uncertainty which arise in BMC. We explore a fully
probabilistic framework for quantifying meta-uncertainty, resulting in an
applied method to enhance any BMC workflow. Drawing on both Bayesian and
frequentist techniques, we represent the uncertainty over the uncertain PMPs
via meta-models which combine simulated and observed data into a predictive
distribution for PMPs on new data. We demonstrate the utility of the proposed
method in the context of conjugate Bayesian regression, likelihood-based
inference with Markov chain Monte Carlo, and simulation-based inference with
neural networks.
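As a rough illustration of the idea of a predictive distribution over posterior model probabilities (PMPs), the sketch below simulates PMPs for two conjugate Gaussian models and fits a simple Beta distribution to them as a stand-in meta-model. The model pair, settings, and the Beta choice are illustrative assumptions, not the paper's exact workflow.

```python
# Illustrative sketch (not the authors' exact implementation): simulate posterior
# model probabilities (PMPs) for two conjugate Gaussian models and fit a simple
# Beta "meta-model" to the simulated PMPs. Model names and settings are assumptions.
import numpy as np
from scipy.stats import multivariate_normal, beta
from scipy.special import logsumexp

rng = np.random.default_rng(1)
n = 20                       # observations per simulated dataset
n_sim = 500                  # number of simulated datasets

def log_marginal_m1(y):
    # M1: y_i ~ N(0, 1), no free parameters
    return multivariate_normal.logpdf(y, mean=np.zeros(n), cov=np.eye(n))

def log_marginal_m2(y):
    # M2: y_i ~ N(mu, 1) with conjugate prior mu ~ N(0, 1);
    # the marginal likelihood is multivariate normal with covariance I + 11^T
    return multivariate_normal.logpdf(y, mean=np.zeros(n),
                                      cov=np.eye(n) + np.ones((n, n)))

def pmp_m1(y):
    # posterior model probability of M1 under equal prior model probabilities
    lm = np.array([log_marginal_m1(y), log_marginal_m2(y)])
    return np.exp(lm[0] - logsumexp(lm))

# Simulate datasets under M1 (assumed true here for illustration) and record PMPs
pmps = np.array([pmp_m1(rng.normal(size=n)) for _ in range(n_sim)])
pmps = np.clip(pmps, 1e-6, 1 - 1e-6)   # keep values strictly inside (0, 1)

# A minimal meta-model: a Beta distribution over the uncertain PMPs,
# giving a predictive distribution for the PMP on a new dataset
a, b, loc, scale = beta.fit(pmps, floc=0, fscale=1)
print(f"meta-model Beta(a={a:.2f}, b={b:.2f}); mean PMP ~ {a / (a + b):.2f}")
```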
Related papers
- Bayesian meta learning for trustworthy uncertainty quantification [3.683202928838613]
We propose, Trust-Bayes, a novel optimization framework for Bayesian meta learning.
We characterize the lower bounds of the probabilities of the ground truth being captured by the specified intervals.
We analyze the sample complexity with respect to the feasible probability for trustworthy uncertainty quantification.
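A toy Monte Carlo check of the quantity these bounds concern, namely how often the ground truth is captured by specified intervals. The interval construction below is a placeholder, not Trust-Bayes.

```python
# Toy coverage check: estimate how often ground-truth values fall inside
# nominal 95% prediction intervals. The predictive model is a placeholder.
import numpy as np

rng = np.random.default_rng(0)
truth = rng.normal(size=5000)
# hypothetical predictive means/stds produced by some meta-learned model
pred_mean = truth + rng.normal(scale=0.3, size=truth.shape)
pred_std = np.full_like(truth, 0.3)

z = 1.96                                          # nominal 95% interval
lower, upper = pred_mean - z * pred_std, pred_mean + z * pred_std
coverage = np.mean((truth >= lower) & (truth <= upper))
print(f"empirical coverage: {coverage:.3f} (nominal 0.95)")
```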
arXiv Detail & Related papers (2024-07-27T15:56:12Z)
- Posterior Uncertainty Quantification in Neural Networks using Data Augmentation [3.9860047080844807]
We show that deep ensembling is a fundamentally mis-specified model class, since it assumes that future data are supported on existing observations only.
We propose MixupMP, a method that constructs a more realistic predictive distribution using popular data augmentation techniques.
Our empirical analysis showcases that MixupMP achieves superior predictive performance and uncertainty quantification on various image classification datasets.
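A minimal sketch of the mixup-style augmentation idea this summary refers to; the `mixup_batch` helper and its settings are illustrative and not the MixupMP implementation.

```python
# Minimal sketch of mixup-style data augmentation (illustrative only; not the
# MixupMP implementation). Convex combinations of input pairs and their labels
# broaden the support of the data used to form a predictive distribution.
import numpy as np

def mixup_batch(x, y, alpha=0.4, rng=None):
    """Return convex combinations of randomly paired examples and labels."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha, size=len(x))          # mixing weights
    perm = rng.permutation(len(x))                     # random partner for each row
    lam_x = lam.reshape(-1, *([1] * (x.ndim - 1)))     # broadcast over feature dims
    x_mix = lam_x * x + (1 - lam_x) * x[perm]
    y_mix = lam.reshape(-1, 1) * y + (1 - lam.reshape(-1, 1)) * y[perm]
    return x_mix, y_mix

# Example: mix a batch of 8 flattened images with one-hot labels
x = np.random.rand(8, 32 * 32)
y = np.eye(10)[np.random.randint(0, 10, size=8)]
x_mix, y_mix = mixup_batch(x, y)
```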
arXiv Detail & Related papers (2024-03-18T17:46:07Z)
- Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability [50.44439018155837]
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error we enable end-to-end backpropagation.
It is directly applicable to existing computational pipelines allowing reliable black-box posterior inference.
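A hedged sketch of how a relaxed calibration error could enter a training objective. The rank-based penalty below is a generic, assumed construction, not the paper's formulation; in an autodiff framework the sigmoid relaxation is what makes the term differentiable.

```python
# Illustrative sketch of a differentiable calibration penalty (a generic
# construction, not the paper's exact relaxation). Posterior rank statistics
# should be uniform for a calibrated posterior, so we compare relaxed empirical
# coverage against nominal coverage and penalize the squared gap.
import numpy as np

def soft_calibration_penalty(post_samples, theta_true, temperature=0.05,
                             levels=np.linspace(0.05, 0.95, 19)):
    # rank of the true parameter within its posterior sample (one per dataset)
    ranks = (post_samples < theta_true[:, None]).mean(axis=1)
    penalty = 0.0
    for alpha in levels:
        # relaxed indicator: true value inside the central alpha-credible interval
        inside = 1.0 / (1.0 + np.exp(-(alpha / 2 - np.abs(ranks - 0.5)) / temperature))
        penalty += (inside.mean() - alpha) ** 2
    return penalty / len(levels)

# toy check: an exact conjugate posterior yields a small penalty
rng = np.random.default_rng(0)
theta_true = rng.normal(size=200)                       # draws from the prior N(0, 1)
y = theta_true + rng.normal(size=200)                   # y | theta ~ N(theta, 1)
post_samples = (y / 2)[:, None] + np.sqrt(0.5) * rng.normal(size=(200, 500))
print(soft_calibration_penalty(post_samples, theta_true))
```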
arXiv Detail & Related papers (2023-10-20T10:20:45Z)
- Measuring and Modeling Uncertainty Degree for Monocular Depth Estimation [50.920911532133154]
The intrinsic ill-posedness and ordinal-sensitive nature of monocular depth estimation (MDE) models pose major challenges to the estimation of uncertainty degree.
We propose to model the uncertainty of MDE models from the perspective of the inherent probability distributions.
By introducing only additional training regularization terms, our model, despite its surprisingly simple formulation and without requiring extra modules or multiple inference passes, provides uncertainty estimates with state-of-the-art reliability.
arXiv Detail & Related papers (2023-07-19T12:11:15Z)
- Piecewise Deterministic Markov Processes for Bayesian Neural Networks [20.865775626533434]
Inference in modern Bayesian Neural Networks (BNNs) often relies on a variational treatment, which imposes frequently violated assumptions of independence and a restricted form of the posterior.
New Piecewise Deterministic Markov Process (PDMP) samplers permit subsampling, though they introduce a model-specific inhomogeneous Poisson process (IPP) which is difficult to sample from.
This work introduces a new generic and adaptive thinning scheme for sampling from IPPs, and demonstrates how this approach can accelerate the application of PDMPs for inference in BNNs.
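For context, the classical Lewis-Shedler thinning scheme for sampling an inhomogeneous Poisson process is sketched below; the paper's adaptive scheme refines this kind of baseline, and `rate` and `lam_max` here are illustrative.

```python
# Classical Lewis-Shedler thinning for sampling an inhomogeneous Poisson process
# (IPP). This is the standard baseline, not the paper's adaptive scheme; the
# intensity rate() and its bound lam_max are illustrative choices.
import numpy as np

def sample_ipp_thinning(rate, lam_max, t_end, rng=None):
    """Sample event times of an IPP with intensity rate(t) <= lam_max on [0, t_end]."""
    rng = rng or np.random.default_rng()
    events, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)     # candidate from homogeneous PP(lam_max)
        if t > t_end:
            return np.array(events)
        if rng.uniform() < rate(t) / lam_max:   # accept with prob rate(t)/lam_max
            events.append(t)

# Example: sinusoidal intensity bounded by lam_max = 5
times = sample_ipp_thinning(lambda t: 2.5 * (1 + np.sin(t)), lam_max=5.0, t_end=20.0)
print(len(times), "events")
```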
arXiv Detail & Related papers (2023-02-17T06:38:16Z)
- WeatherBench Probability: A benchmark dataset for probabilistic medium-range weather forecasting along with deep learning baseline models [22.435002906710803]
WeatherBench is a benchmark dataset for medium-range weather forecasting of geopotential, temperature and precipitation.
WeatherBench Probability extends this to probabilistic forecasting by adding a set of established probabilistic verification metrics.
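As an example of an established probabilistic verification metric of this kind, the sketch below computes the empirical CRPS of an ensemble forecast; the exact metric set used by WeatherBench Probability may differ.

```python
# Sketch of the continuous ranked probability score (CRPS) for an ensemble
# forecast, a standard probabilistic verification metric (illustrative example).
import numpy as np

def crps_ensemble(members, obs):
    """Empirical CRPS of an ensemble (shape: [m]) against a scalar observation."""
    members = np.asarray(members, dtype=float)
    m = len(members)
    term1 = np.abs(members - obs).mean()
    term2 = np.abs(members[:, None] - members[None, :]).sum() / (2 * m**2)
    return term1 - term2

# Example: a 10-member temperature forecast against the verifying observation
rng = np.random.default_rng(0)
print(crps_ensemble(rng.normal(15.0, 2.0, size=10), obs=16.3))
```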
arXiv Detail & Related papers (2022-05-02T12:49:05Z)
- Out of Distribution Detection, Generalization, and Robustness Triangle with Maximum Probability Theorem [2.0654955576087084]
MPT uses the probability distribution that the models assume over random variables to provide an upper bound on the probability of the model.
We apply MPT to challenging out-of-distribution (OOD) detection problems in computer vision by incorporating MPT as a regularization scheme in training of CNNs and their energy based variants.
arXiv Detail & Related papers (2022-03-23T02:42:08Z)
- PSD Representations for Effective Probability Models [117.35298398434628]
We show that a recently proposed class of positive semi-definite (PSD) models for non-negative functions is particularly suited to this end.
We characterize both approximation and generalization capabilities of PSD models, showing that they enjoy strong theoretical guarantees.
Our results open the way to applications of PSD models to density estimation, decision theory and inference.
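A minimal sketch of a PSD model of the form f(x) = phi(x)^T A phi(x) with A positive semi-definite, which is non-negative by construction; the kernel features, centers, and the random A below are illustrative choices, not fitted values.

```python
# Minimal sketch of a PSD model f(x) = phi(x)^T A phi(x) with A positive
# semi-definite, hence f(x) >= 0 for all x (illustrative, unfitted parameters).
import numpy as np

def gauss_features(x, centers, width=1.0):
    """Gaussian kernel features phi(x) evaluated at a set of centers."""
    d2 = (x[:, None] - centers[None, :]) ** 2
    return np.exp(-d2 / (2 * width**2))

rng = np.random.default_rng(0)
centers = np.linspace(-3, 3, 15)
B = rng.normal(size=(15, 15))
A = B @ B.T                                  # any PSD matrix, parameterized as B B^T

x = np.linspace(-4, 4, 200)
phi = gauss_features(x, centers)             # shape (200, 15)
f = np.einsum('ni,ij,nj->n', phi, A, phi)    # f(x) = phi(x)^T A phi(x)
print("min f(x):", f.min())                  # non-negative by construction
```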
arXiv Detail & Related papers (2021-06-30T15:13:39Z)
- Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z)
- Evaluating probabilistic classifiers: Reliability diagrams and score decompositions revisited [68.8204255655161]
We introduce the CORP approach, which generates provably statistically Consistent, Optimally binned, and Reproducible reliability diagrams in an automated way.
CORP is based on non-parametric isotonic regression and is implemented via the pool-adjacent-violators (PAV) algorithm.
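A sketch of the CORP recipe on synthetic data: isotonic regression (the PAV algorithm, here via scikit-learn) of binary outcomes on forecast probabilities yields the reliability curve; details of the reference implementation may differ.

```python
# Sketch of the CORP idea: regress binary outcomes on predicted probabilities
# with non-parametric isotonic regression (PAV) and use the fitted values as
# the reliability curve. Data are synthetic and purely illustrative.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
p_pred = rng.uniform(size=2000)                           # forecast probabilities
y = rng.binomial(1, p_pred**1.3)                          # mildly miscalibrated outcomes

order = np.argsort(p_pred)
iso = IsotonicRegression(y_min=0, y_max=1, out_of_bounds="clip")
cep = iso.fit_transform(p_pred[order], y[order])          # conditional event probabilities

# reliability diagram: plot (p_pred[order], cep) against the diagonal;
# here we simply report the mean absolute miscalibration of the fit
print("mean |CEP - forecast| =", np.abs(cep - p_pred[order]).mean())
```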
arXiv Detail & Related papers (2020-08-07T08:22:26Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.