Model-free generalized fiducial inference
- URL: http://arxiv.org/abs/2307.12472v1
- Date: Mon, 24 Jul 2023 01:58:48 GMT
- Title: Model-free generalized fiducial inference
- Authors: Jonathan P Williams
- Abstract summary: I propose and develop ideas for a model-free statistical framework for imprecise probabilistic prediction inference.
This framework facilitates uncertainty quantification in the form of prediction sets that offer finite sample control of type 1 errors.
I consider the theoretical and empirical properties of a precise probabilistic approximation to the model-free imprecise framework.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Motivated by the need for the development of safe and reliable methods for
uncertainty quantification in machine learning, I propose and develop ideas for
a model-free statistical framework for imprecise probabilistic prediction
inference. This framework facilitates uncertainty quantification in the form of
prediction sets that offer finite sample control of type 1 errors, a property
shared with conformal prediction sets, but this new approach also offers more
versatile tools for imprecise probabilistic reasoning. Furthermore, I propose
and consider the theoretical and empirical properties of a precise
probabilistic approximation to the model-free imprecise framework.
Approximating a belief/plausibility measure pair by a probability measure in
the credal set that is optimal in some sense is a critical step needed for the
broader adoption of imprecise probabilistic approaches to inference in the
statistical and machine learning communities. More generally, it remains
largely unsettled in the statistical and machine learning literatures how
uncertainty should properly be quantified, in that there is no generally
accepted standard of accountability for stated uncertainties. The research I present in this
manuscript is aimed at motivating a framework for statistical inference with
reliability and accountability as the guiding principles.
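To make the credal-set idea concrete, here is a minimal sketch of a belief/plausibility pair induced by a toy Dempster-Shafer mass function, together with the classical pignistic transform, one standard way to pick a probability measure that is guaranteed to lie in the credal set. The universe and mass function below are hypothetical, and the pignistic transform is only an illustrative stand-in, not the optimal approximation developed in the paper.

```python
from itertools import chain, combinations

# Hypothetical Dempster-Shafer setup: a mass function over subsets of a finite
# universe (this example is illustrative and not taken from the paper).
universe = ("a", "b", "c")
mass = {("a",): 0.5, ("a", "b"): 0.3, ("a", "b", "c"): 0.2}  # masses sum to 1

def nonempty_events(u):
    return chain.from_iterable(combinations(u, r) for r in range(1, len(u) + 1))

def belief(event):
    # Bel(A): total mass of focal sets contained in A (lower probability).
    return sum(m for focal, m in mass.items() if set(focal) <= set(event))

def plausibility(event):
    # Pl(A): total mass of focal sets intersecting A (upper probability).
    return sum(m for focal, m in mass.items() if set(focal) & set(event))

def pignistic():
    # Spread each focal set's mass uniformly over its elements; the result is
    # a probability measure that lies in the credal set of Bel.
    p = {x: 0.0 for x in universe}
    for focal, m in mass.items():
        for x in focal:
            p[x] += m / len(focal)
    return p

betp = pignistic()
for event in nonempty_events(universe):
    prob = sum(betp[x] for x in event)
    # Credal-set containment: Bel(A) <= BetP(A) <= Pl(A) for every event A.
    assert belief(event) - 1e-12 <= prob <= plausibility(event) + 1e-12
    print(event, round(belief(event), 3), round(prob, 3), round(plausibility(event), 3))
```

The asserted inequalities are the credal-set containment property: every event probability assigned by the approximating measure stays between the belief (lower) and plausibility (upper) bounds of the imprecise pair.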
Related papers
- On Information-Theoretic Measures of Predictive Uncertainty [5.8034373350518775]
Despite its significance, a consensus on the correct measurement of predictive uncertainty remains elusive.
Our proposed framework categorizes predictive uncertainty measures according to two factors: (I) the predicting model and (II) the approximation of the true predictive distribution.
We empirically evaluate these measures in typical uncertainty estimation settings, such as misclassification detection, selective prediction, and out-of-distribution detection.
arXiv Detail & Related papers (2024-10-14T17:52:18Z) - Introducing an Improved Information-Theoretic Measure of Predictive Uncertainty [6.3398383724486544]
Predictive uncertainty is commonly measured by the entropy of the Bayesian model average (BMA) predictive distribution (a sketch of this entropy-based measurement appears after this list).
We introduce a theoretically grounded measure that overcomes limitations of this common measure.
We find that our introduced measure behaves more reasonably in controlled synthetic tasks.
arXiv Detail & Related papers (2023-11-14T16:55:12Z) - Model-agnostic variable importance for predictive uncertainty: an entropy-based approach [1.912429179274357]
We show how existing methods in explainability can be extended to uncertainty-aware models.
We demonstrate the utility of these approaches to understand both the sources of uncertainty and their impact on model performance.
arXiv Detail & Related papers (2023-10-19T15:51:23Z) - Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty accounting for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
arXiv Detail & Related papers (2023-08-03T12:43:21Z) - Quantifying Deep Learning Model Uncertainty in Conformal Prediction [1.4685355149711297]
Conformal Prediction is a promising framework for representing model uncertainty.
In this paper, we explore state-of-the-art CP methodologies and their theoretical foundations.
arXiv Detail & Related papers (2023-06-01T16:37:50Z) - Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z) - Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative model-based methods, and explain their pros and cons when using them in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z) - DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
Epistemic uncertainty is the part of out-of-sample prediction error that is due to the learner's lack of knowledge.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty.
arXiv Detail & Related papers (2021-02-16T23:50:35Z) - Approaching Neural Network Uncertainty Realism [53.308409014122816]
Quantifying or at least upper-bounding uncertainties is vital for safety-critical systems such as autonomous vehicles.
We evaluate uncertainty realism -- a strict quality criterion -- with a Mahalanobis distance-based statistical test.
We adapt it to the automotive domain and show that it significantly improves uncertainty realism compared to a plain encoder-decoder model.
arXiv Detail & Related papers (2021-01-08T11:56:12Z) - Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z)
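Since two of the entries above concern entropy-based measures of predictive uncertainty, the sketch below shows the standard information-theoretic decomposition computed from an ensemble or Bayesian model average: total uncertainty is the entropy of the averaged predictive distribution, aleatoric uncertainty is the average entropy of the member predictions, and their difference (a mutual information) is the usual epistemic term. The array of member predictions is an assumed, illustrative input; this is the generic decomposition, not the improved measure proposed in the papers listed above.

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy (in nats) along the class axis."""
    p = np.clip(p, eps, 1.0)
    return -np.sum(p * np.log(p), axis=axis)

def uncertainty_decomposition(member_probs):
    """member_probs: array of shape (n_members, n_inputs, n_classes) holding
    each ensemble member's predictive class probabilities (assumed format)."""
    bma = member_probs.mean(axis=0)                 # BMA predictive distribution
    total = entropy(bma)                            # H[ E_theta p(y | x, theta) ]
    aleatoric = entropy(member_probs).mean(axis=0)  # E_theta H[ p(y | x, theta) ]
    epistemic = total - aleatoric                   # mutual information I(y; theta | x)
    return total, aleatoric, epistemic

# Toy example: 3 ensemble members, 2 inputs, 3 classes.
probs = np.array([
    [[0.7, 0.2, 0.1], [0.4, 0.3, 0.3]],
    [[0.6, 0.3, 0.1], [0.1, 0.6, 0.3]],
    [[0.8, 0.1, 0.1], [0.3, 0.2, 0.5]],
])
total, aleatoric, epistemic = uncertainty_decomposition(probs)
print(total, aleatoric, epistemic)  # epistemic >= 0 by concavity of entropy
```

Disagreement among the members inflates the epistemic term, while the aleatoric term reflects how diffuse each member's own predictive distribution is.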