A view on model misspecification in uncertainty quantification
- URL: http://arxiv.org/abs/2210.16938v2
- Date: Wed, 2 Nov 2022 07:51:28 GMT
- Title: A view on model misspecification in uncertainty quantification
- Authors: Yuko Kato, David M.J. Tax and Marco Loog
- Abstract summary: Estimating uncertainty of machine learning models is essential to assess the quality of the predictions that these models provide.
Model misspecification always exists as models are mere simplifications or approximations to reality.
This paper argues that model misspecification should receive more attention, by providing thought experiments and contextualizing these with relevant literature.
- Score: 17.17262672213263
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Estimating uncertainty of machine learning models is essential to assess the quality of the predictions that these models provide. However, several factors influence the quality of uncertainty estimates, one of which is the degree of model misspecification. Model misspecification always exists, as models are mere simplifications or approximations of reality, and this raises the question of whether the estimated uncertainty under model misspecification is reliable. In this paper, we argue that model misspecification should receive more attention, by providing thought experiments and contextualizing these with relevant literature.
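To make the paper's premise concrete, here is an illustrative sketch (not taken from the paper; all data and model choices are hypothetical): a linear-Gaussian model fit to quadratic data produces predictive intervals whose conditional coverage deviates badly from the nominal level, even when marginal coverage looks acceptable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground truth: quadratic signal with Gaussian noise.
true_fn = lambda x: 0.5 * x**2
x_train = rng.uniform(-3, 3, 200)
y_train = true_fn(x_train) + rng.normal(0, 0.3, x_train.size)

# Misspecified model: straight line with constant Gaussian noise.
X = np.column_stack([np.ones_like(x_train), x_train])
beta, *_ = np.linalg.lstsq(X, y_train, rcond=None)
sigma = np.std(y_train - X @ beta)          # plug-in noise estimate

# Check the nominal 95% predictive interval on fresh data.
x_test = rng.uniform(-3, 3, 10_000)
y_test = true_fn(x_test) + rng.normal(0, 0.3, x_test.size)
mu = beta[0] + beta[1] * x_test
covered = np.abs(y_test - mu) <= 1.96 * sigma
print(f"marginal coverage:      {covered.mean():.3f}")
print(f"coverage where |x| < 1: {covered[np.abs(x_test) < 1].mean():.3f}")
print(f"coverage where |x| > 2: {covered[np.abs(x_test) > 2].mean():.3f}")
# Intervals are too wide near x = 0 and too narrow at the edges:
# the misspecified noise term absorbs the unmodelled curvature.
```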
Related papers
- Uncertainty Quantification of Surrogate Models using Conformal Prediction [7.445864392018774]
We formalise a conformal prediction framework that produces valid predictions in a model-agnostic manner, at near-zero computational cost.
The paper looks at providing statistically valid error bars for deterministic models, as well as crafting guarantees for the error bars of probabilistic models.
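As a generic point of reference for this family of methods, here is a minimal split-conformal sketch (hypothetical data and a stand-in surrogate; this is not the cited paper's specific framework):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical surrogate: any fitted deterministic model works here.
surrogate = lambda x: 1.1 * x + 0.2          # stand-in for a trained model
true_fn = lambda x: x + 0.1 * np.sin(5 * x)  # hypothetical ground truth

# Calibration set, held out from surrogate training.
x_cal = rng.uniform(0, 1, 500)
y_cal = true_fn(x_cal) + rng.normal(0, 0.05, x_cal.size)

# Split conformal: nonconformity score = absolute residual.
alpha = 0.1                                  # target 90% coverage
scores = np.abs(y_cal - surrogate(x_cal))
k = int(np.ceil((1 - alpha) * (len(scores) + 1)))
q = np.sort(scores)[k - 1]                   # conformal quantile

# Model-agnostic interval with finite-sample marginal coverage >= 1 - alpha.
x_new = 0.7
print(f"interval: [{surrogate(x_new) - q:.3f}, {surrogate(x_new) + q:.3f}]")
```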
arXiv Detail & Related papers (2024-08-19T10:46:19Z)
- Parameter uncertainties for imperfect surrogate models in the low-noise regime [0.3069335774032178]
We analyze the generalization error of misspecified, near-deterministic surrogate models.
We show that posterior distributions must cover every training point to avoid a divergent generalization error.
This is demonstrated on model problems before being applied to thousand-dimensional datasets in atomistic machine learning.
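The mechanism can be illustrated with a generic Gaussian predictive distribution (a hypothetical toy, not the paper's derivation): the negative log predictive density at a training point diverges as the noise scale shrinks, unless the predictive mean covers that point.

```python
import numpy as np

# Negative log predictive density of N(mu, sigma^2) at observation y.
def nlpd(y, mu, sigma):
    return 0.5 * np.log(2 * np.pi * sigma**2) + (y - mu) ** 2 / (2 * sigma**2)

y_train = 1.0
for sigma in [1e-1, 1e-2, 1e-3]:
    print(sigma,
          nlpd(y_train, mu=1.0, sigma=sigma),   # posterior covers the point
          nlpd(y_train, mu=1.01, sigma=sigma))  # misses by 0.01: blows up
```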
arXiv Detail & Related papers (2024-02-02T11:41:21Z)
- Measuring and Modeling Uncertainty Degree for Monocular Depth Estimation [50.920911532133154]
The intrinsic ill-posedness and ordinal-sensitive nature of monocular depth estimation (MDE) models pose major challenges to the estimation of uncertainty degree.
We propose to model the uncertainty of MDE models from the perspective of the inherent probability distributions.
By simply introducing additional training regularization terms, our model, with a surprisingly simple formulation and without requiring extra modules or multiple inferences, can provide uncertainty estimates with state-of-the-art reliability.
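The paper's exact regularization terms are its own; as a generic stand-in for distribution-aware depth training, the standard heteroscedastic Gaussian negative log-likelihood looks like this (all shapes and numbers are hypothetical):

```python
import numpy as np

# Generic heteroscedastic Gaussian NLL: the network predicts a mean depth
# mu(x) and a log-variance log_var(x) per pixel; the second term acts as a
# regulariser that stops the predicted variance from growing without bound.
def heteroscedastic_nll(y, mu, log_var):
    return np.mean(0.5 * np.exp(-log_var) * (y - mu) ** 2 + 0.5 * log_var)

# Hypothetical per-pixel predictions for a toy 4-pixel "depth map".
y = np.array([1.0, 2.0, 3.0, 4.0])
mu = np.array([1.1, 1.9, 3.2, 3.8])
log_var = np.log(np.array([0.05, 0.05, 0.2, 0.2]))
print(heteroscedastic_nll(y, mu, log_var))
```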
arXiv Detail & Related papers (2023-07-19T12:11:15Z)
- The Interpolating Information Criterion for Overparameterized Models [49.283527214211446]
We show that the Interpolating Information Criterion is a measure of model quality that naturally incorporates the choice of prior into the model selection.
Our new information criterion accounts for prior misspecification, geometric and spectral properties of the model, and is numerically consistent with known empirical and theoretical behavior.
arXiv Detail & Related papers (2023-07-15T12:09:54Z)
- The Implicit Delta Method [61.36121543728134]
In this paper, we propose an alternative, the implicit delta method, which quantifies uncertainty by infinitesimally regularizing the training loss.
We show that the change in the evaluation due to this regularization yields a consistent estimate of the variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference.
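A toy sketch of that idea under strong simplifying assumptions (Gaussian mean estimation with a closed-form refit; an illustration of loss tilting, not the paper's implementation): tilt the training loss by a small multiple of the evaluation functional, refit, and read the variance off a finite difference.

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(loc=2.0, scale=1.0, size=1000)   # hypothetical data
sigma2 = 1.0                                    # known noise variance

# Training loss L(theta) = sum_i (y_i - theta)^2 / (2 * sigma2);
# evaluation functional f(theta) = theta (the estimated mean itself).
theta_hat = y.mean()                            # argmin of L

# Tilt the loss to L(theta) - eps * f(theta) and refit; here the
# tilted minimiser is available in closed form.
eps = 1e-4
theta_eps = y.mean() + eps * sigma2 / len(y)

# Finite-difference estimate of Var(f(theta_hat)).
var_implicit = (theta_eps - theta_hat) / eps
print(var_implicit, sigma2 / len(y))   # agrees with the classical delta method
```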
arXiv Detail & Related papers (2022-11-11T19:34:17Z)
- Estimation and Model Misspecification: Fake and Missing Features [0.0]
We consider estimation under model misspecification where there is a mismatch between the underlying system and the model used during estimation.
We propose a model misspecification framework which enables a joint treatment of two types of model misspecification: fake features and missing features.
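A hedged toy of the two misspecification types (hypothetical data, not the paper's framework): the estimation model includes an irrelevant "fake" feature while a truly active feature is missing, which biases the coefficients that are estimated.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000

# Hypothetical underlying system: y depends on x1 and x2.
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(scale=0.8, size=n)   # correlated with x1
fake = rng.normal(size=n)                       # irrelevant feature
y = 1.0 * x1 + 0.5 * x2 + rng.normal(scale=0.1, size=n)

# Misspecified estimation model: uses x1 and the fake feature; x2 is missing.
X = np.column_stack([x1, fake])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # x1's coefficient absorbs x2's effect (~1.3 instead of 1.0);
             # the fake feature's coefficient is ~0 but inflates variance.
```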
arXiv Detail & Related papers (2022-03-07T13:50:15Z)
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when using them in fully-, semi-, and weakly-supervised frameworks.
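As a minimal reference point for the ensemble-based family (hypothetical polynomial regressors, not this paper's networks): train several models on resampled data and read epistemic uncertainty off their disagreement.

```python
import numpy as np

rng = np.random.default_rng(4)
true_fn = lambda x: np.sin(3 * x)

x_train = rng.uniform(-1, 1, 40)
y_train = true_fn(x_train) + rng.normal(0, 0.1, x_train.size)

# Hypothetical ensemble: M polynomial regressors fit on bootstrap resamples.
M, degree = 10, 5
models = []
for _ in range(M):
    idx = rng.integers(0, len(x_train), len(x_train))
    models.append(np.polyfit(x_train[idx], y_train[idx], degree))

x_test = np.array([-0.5, 0.0, 2.0])         # 2.0 is far outside the data
preds = np.stack([np.polyval(c, x_test) for c in models])
print("mean:", preds.mean(axis=0))
print("epistemic std:", preds.std(axis=0))  # grows off the training range
```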
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- A Tale Of Two Long Tails [4.970364068620608]
We identify examples the model is uncertain about and characterize the source of said uncertainty.
We investigate whether the rate of learning in the presence of additional information differs between atypical and noisy examples.
Our results show that well-designed interventions over the course of training can be an effective way to characterize and distinguish between different sources of uncertainty.
arXiv Detail & Related papers (2021-07-27T22:49:59Z)
- Approaching Neural Network Uncertainty Realism [53.308409014122816]
Quantifying or at least upper-bounding uncertainties is vital for safety-critical systems such as autonomous vehicles.
We evaluate uncertainty realism -- a strict quality criterion -- with a Mahalanobis distance-based statistical test.
We adapt it to the automotive domain and show that it significantly improves uncertainty realism compared to a plain encoder-decoder model.
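A minimal version of such a test under a Gaussian assumption (the cited work's exact statistic and thresholds are not reproduced; all covariances below are hypothetical): if predicted covariances are realistic, the squared Mahalanobis distances of the errors follow a chi-squared distribution with d degrees of freedom, which a goodness-of-fit test can check.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
d, n = 2, 2000

# Hypothetical detections: true errors have covariance S_true, while the
# model reports S_pred. Realistic uncertainty means S_pred matches S_true.
S_true = np.array([[0.04, 0.01], [0.01, 0.09]])
S_pred = 0.5 * S_true                       # overconfident by a factor of 2
errors = rng.multivariate_normal(np.zeros(d), S_true, size=n)

# Squared Mahalanobis distance under the *predicted* covariance.
m2 = np.einsum('ni,ij,nj->n', errors, np.linalg.inv(S_pred), errors)

# Under realistic uncertainty, m2 ~ chi2(d); test that hypothesis.
ks = stats.kstest(m2, stats.chi2(df=d).cdf)
print(f"KS statistic {ks.statistic:.3f}, p-value {ks.pvalue:.1e}")  # rejects
```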
arXiv Detail & Related papers (2021-01-08T11:56:12Z)
- Considering discrepancy when calibrating a mechanistic electrophysiology model [41.77362715012383]
Uncertainty quantification (UQ) is a vital step in using mathematical models and simulations to make decisions.
In this piece we draw attention to an important and under-addressed source of uncertainty in our predictions: uncertainty in the model structure or the equations themselves.
arXiv Detail & Related papers (2020-01-13T13:26:13Z)