Model-free generalized fiducial inference
- URL: http://arxiv.org/abs/2307.12472v2
- Date: Mon, 06 Oct 2025 22:25:32 GMT
- Title: Model-free generalized fiducial inference
- Authors: Jonathan P. Williams
- Abstract summary: Conformal prediction (CP) was developed to provide finite-sample probabilistic prediction guarantees. While CP algorithms are a relatively general-purpose approach to uncertainty quantification with finite-sample guarantees, they lack versatility. In this paper, tools are offered from imprecise probability theory to build a formal connection between CP and generalized fiducial (GF) inference.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conformal prediction (CP) was developed to provide finite-sample probabilistic prediction guarantees. While CP algorithms are a relatively general-purpose approach to uncertainty quantification, with finite-sample guarantees, they lack versatility. Namely, the CP approach does not {\em prescribe} how to quantify the degree to which a data set provides evidence in support of (or against) an arbitrary event from a general class of events. In this paper, tools are offered from imprecise probability theory to build a formal connection between CP and generalized fiducial (GF) inference. These new insights establish a more general inferential lens from which CP can be understood, and demonstrate the pragmatism of fiducial ideas. The formal connection establishes a context in which epistemically-derived GF probability matches aleatoric/frequentist probability. Beyond this fact, it is illustrated how tools from imprecise probability theory, namely lower and upper probability functions, can be applied in the context of the imprecise GF distribution to provide posterior-like, prescriptive inference that is not possible within the CP framework alone. In addition to the primary CP generalization that is contributed, fundamental connections are synthesized between this new model-free GF and three other areas of contemporary research: nonparametric predictive inference (NPI), conformal predictive systems/distributions, and inferential models (IMs).
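To make the finite-sample coverage guarantee referenced in the abstract concrete, here is a minimal split-conformal sketch. This is the standard textbook construction under exchangeability, not the paper's GF machinery; the toy model and data are purely illustrative.

```python
import numpy as np

def split_conformal_interval(cal_preds, cal_y, test_pred, alpha=0.1):
    """Split conformal prediction interval for a single test point.

    Uses absolute residuals on a held-out calibration set as
    non-conformity scores; under exchangeability the returned interval
    contains the true label with probability >= 1 - alpha.
    """
    scores = np.abs(cal_y - cal_preds)  # non-conformity scores
    n = len(scores)
    # finite-sample-corrected quantile level
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, q_level, method="higher")
    return test_pred - q, test_pred + q

# toy usage: a "model" that always predicts the mean of noisy data
rng = np.random.default_rng(0)
cal_y = rng.normal(loc=5.0, scale=1.0, size=200)
cal_preds = np.full(200, 5.0)
lo, hi = split_conformal_interval(cal_preds, cal_y, test_pred=5.0, alpha=0.1)
print(lo, hi)
```

The width of the interval is driven entirely by the calibration residuals, which is what makes the procedure model-agnostic.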
Related papers
- eCP: Informative uncertainty quantification via Equivariantized Conformal Prediction with pre-trained models [3.1424353049227727]
We study the effect of group symmetrization of pre-trained models on conformal prediction (CP). We propose infusing CP with geometric information via group-averaging of the pre-trained predictor to distribute the non-conformity mass across the orbits. Our approach provably yields contracted non-conformity scores in increasing convex order, implying improved exponential-tail bounds and sharper conformal prediction sets in expectation.
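The group-averaging idea can be illustrated in a hypothetical minimal setting: averaging a non-invariant predictor over the four 90-degree rotations of a 2-D input makes it invariant to that cyclic group. This toy sketch is not the paper's actual estimator; all names are illustrative.

```python
import numpy as np

def rotation_averaged_predictor(predict, x):
    """Group-average a predictor over the four 90-degree rotations of a
    2-D input, making the averaged prediction rotation-invariant
    (toy sketch, not the paper's construction)."""
    orbit = [np.rot90(x, k) for k in range(4)]
    return np.mean([predict(v) for v in orbit], axis=0)

# toy predictor: sum of the top-left quadrant, which is NOT
# rotation-invariant on its own
def predict(x):
    return x[:2, :2].sum()

x = np.arange(16, dtype=float).reshape(4, 4)
avg = rotation_averaged_predictor(predict, x)
# the averaged predictor returns the same value for any rotation of x
avg_rot = rotation_averaged_predictor(predict, np.rot90(x))
print(avg, avg_rot)
```

Averaging over the orbit is the simplest way to equivariantize a black-box predictor without retraining it.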
arXiv Detail & Related papers (2026-02-03T20:18:59Z)
- Quantifying Epistemic Predictive Uncertainty in Conformal Prediction [11.09458914721516]
We study the problem of quantifying uncertainty faced at prediction time due to the existence of multiple plausible predictive models. We build on recent results showing that, under a mild assumption, any full conformal prediction procedure induces a set of closed and convex predictive distributions. We propose a computationally efficient and analytically tractable uncertainty measure, based on Maximum Mean Imprecision, to quantify the uncertainty.
arXiv Detail & Related papers (2026-02-02T05:38:07Z)
- Credal Prediction based on Relative Likelihood [24.307076055306148]
We propose a theoretically grounded approach to credal prediction based on the statistical notion of relative likelihood. We tackle the problem of approximating credal sets defined in this way by means of suitably modified ensemble learning techniques.
arXiv Detail & Related papers (2025-05-28T13:20:20Z)
- Probabilistic Modeling of Disparity Uncertainty for Robust and Efficient Stereo Matching [61.73532883992135]
We propose a new uncertainty-aware stereo matching framework.
We adopt Bayes risk as the measurement of uncertainty and use it to separately estimate data and model uncertainty.
arXiv Detail & Related papers (2024-12-24T23:28:20Z)
- Generative Conformal Prediction with Vectorized Non-Conformity Scores [6.059745771017814]
Conformal prediction provides model-agnostic uncertainty quantification with guaranteed coverage.
We propose a generative conformal prediction framework with vectorized non-conformity scores.
We construct adaptive uncertainty sets using density-ranked uncertainty balls.
arXiv Detail & Related papers (2024-10-17T16:37:03Z)
- On Information-Theoretic Measures of Predictive Uncertainty [5.8034373350518775]
Despite its significance, a consensus on the correct measurement of predictive uncertainty remains elusive.
Our proposed framework categorizes predictive uncertainty measures according to two factors: (I) the predicting model, and (II) the approximation of the true predictive distribution.
We empirically evaluate these measures in typical uncertainty estimation settings, such as misclassification detection, selective prediction, and out-of-distribution detection.
arXiv Detail & Related papers (2024-10-14T17:52:18Z)
- Probabilistic Conformal Prediction with Approximate Conditional Validity [81.30551968980143]
We develop a new method for generating prediction sets that combines the flexibility of conformal methods with an estimate of the conditional distribution.
Our method consistently outperforms existing approaches in terms of conditional coverage.
arXiv Detail & Related papers (2024-07-01T20:44:48Z)
- An Information Theoretic Perspective on Conformal Prediction [15.194199235970242]
Conformal Prediction (CP) constructs prediction sets guaranteed to contain the true answer with a user-specified probability. In this work, we leverage information theory to connect conformal prediction to other notions of uncertainty.
arXiv Detail & Related papers (2024-05-03T14:43:07Z)
- Efficient Conformal Prediction under Data Heterogeneity [79.35418041861327]
Conformal Prediction (CP) stands out as a robust framework for uncertainty quantification.
Existing approaches for tackling non-exchangeability lead to methods that are not computable beyond the simplest examples.
This work introduces a new efficient approach to CP that produces provably valid confidence sets for fairly general non-exchangeable data distributions.
arXiv Detail & Related papers (2023-12-25T20:02:51Z)
- Introducing an Improved Information-Theoretic Measure of Predictive Uncertainty [6.3398383724486544]
Predictive uncertainty is commonly measured by the entropy of the Bayesian model average (BMA) predictive distribution.
We introduce a theoretically grounded measure to overcome these limitations.
We find that our introduced measure behaves more reasonably in controlled synthetic tasks.
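The BMA entropy measure this entry builds on decomposes in the standard way: total uncertainty (entropy of the averaged predictive) splits into aleatoric (expected member entropy) and epistemic (their difference, a mutual information). A sketch of that baseline decomposition, not the paper's improved measure, with illustrative names:

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy (nats) along the last axis."""
    p = np.clip(p, eps, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def bma_uncertainty(member_probs):
    """Standard entropy-based decomposition for an ensemble / Bayesian
    model average. member_probs: (n_members, n_classes).

    total     = H(mean predictive)        (entropy of the BMA)
    aleatoric = mean of member entropies  (expected data uncertainty)
    epistemic = total - aleatoric         (mutual information, >= 0)
    """
    mean_p = member_probs.mean(axis=0)
    total = entropy(mean_p)
    aleatoric = entropy(member_probs).mean()
    return total, aleatoric, total - aleatoric

# two confident but disagreeing members -> low aleatoric, high epistemic
probs = np.array([[0.99, 0.01], [0.01, 0.99]])
total, alea, epi = bma_uncertainty(probs)
print(total, alea, epi)
```

In the example the averaged predictive is uniform, so the total uncertainty is maximal even though each member is nearly certain; almost all of it is epistemic disagreement.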
arXiv Detail & Related papers (2023-11-14T16:55:12Z)
- Model-agnostic variable importance for predictive uncertainty: an entropy-based approach [1.912429179274357]
We show how existing methods in explainability can be extended to uncertainty-aware models.
We demonstrate the utility of these approaches to understand both the sources of uncertainty and their impact on model performance.
arXiv Detail & Related papers (2023-10-19T15:51:23Z)
- Prototype-based Aleatoric Uncertainty Quantification for Cross-modal Retrieval [139.21955930418815]
Cross-modal Retrieval methods build similarity relations between vision and language modalities by jointly learning a common representation space.
However, the predictions are often unreliable due to aleatoric uncertainty, which is induced by low-quality data, e.g., corrupt images, fast-paced videos, and non-detailed texts.
We propose a novel Prototype-based Aleatoric Uncertainty Quantification (PAU) framework to provide trustworthy predictions by quantifying the uncertainty arising from inherent data ambiguity.
arXiv Detail & Related papers (2023-09-29T09:41:19Z)
- Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty accounting for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
arXiv Detail & Related papers (2023-08-03T12:43:21Z)
- Quantifying Deep Learning Model Uncertainty in Conformal Prediction [1.4685355149711297]
Conformal Prediction is a promising framework for representing the model uncertainty.
In this paper, we explore state-of-the-art CP methodologies and their theoretical foundations.
arXiv Detail & Related papers (2023-06-01T16:37:50Z)
- Federated Conformal Predictors for Distributed Uncertainty Quantification [83.50609351513886]
Conformal prediction is emerging as a popular paradigm for providing rigorous uncertainty quantification in machine learning.
In this paper, we extend conformal prediction to the federated learning setting.
We propose a weaker notion of partial exchangeability, better suited to the FL setting, and use it to develop the Federated Conformal Prediction framework.
arXiv Detail & Related papers (2023-05-27T19:57:27Z)
- A Confidence Machine for Sparse High-Order Interaction Model [16.780058676633914]
Conformal prediction (CP) is a promising approach for obtaining the confidence of prediction results with fewer theoretical assumptions.
We develop a full-CP for the sparse high-order interaction model (SHIM), which is sufficiently flexible to take into account high-order interactions among variables.
arXiv Detail & Related papers (2022-05-28T03:23:56Z)
- Non-Linear Spectral Dimensionality Reduction Under Uncertainty [107.01839211235583]
We propose a new dimensionality reduction framework, called NGEU, which leverages uncertainty information and directly extends several traditional approaches.
We show that the proposed NGEU formulation exhibits a global closed-form solution, and we analyze, based on the Rademacher complexity, how the underlying uncertainties theoretically affect the generalization ability of the framework.
arXiv Detail & Related papers (2022-02-09T19:01:33Z)
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when used in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
Epistemic uncertainty is part of out-of-sample prediction error due to the lack of knowledge of the learner.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty.
arXiv Detail & Related papers (2021-02-16T23:50:35Z)
- Approaching Neural Network Uncertainty Realism [53.308409014122816]
Quantifying or at least upper-bounding uncertainties is vital for safety-critical systems such as autonomous vehicles.
We evaluate uncertainty realism -- a strict quality criterion -- with a Mahalanobis distance-based statistical test.
We adapt it to the automotive domain and show that it significantly improves uncertainty realism compared to a plain encoder-decoder model.
arXiv Detail & Related papers (2021-01-08T11:56:12Z)
- Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z)
- Evaluating probabilistic classifiers: Reliability diagrams and score decompositions revisited [68.8204255655161]
We introduce the CORP approach, which generates provably statistically Consistent, Optimally binned, and Reproducible reliability diagrams in an automated way.
CORP is based on non-parametric isotonic regression and implemented via the pool-adjacent-violators (PAV) algorithm.
arXiv Detail & Related papers (2020-08-07T08:22:26Z)
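The PAV algorithm behind CORP is short enough to sketch: it produces the least-squares non-decreasing fit to a sequence by merging adjacent blocks whenever their means violate monotonicity. A minimal equal-weight version, written for illustration rather than as the authors' implementation:

```python
def pav(y):
    """Pool-adjacent-violators: least-squares isotonic (non-decreasing)
    fit to the sequence y. Equal observation weights assumed."""
    # each block holds (sum of values, count); merge while order is violated
    blocks = []
    for v in y:
        blocks.append((v, 1))
        while (len(blocks) > 1
               and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]):
            s2, c2 = blocks.pop()
            s1, c1 = blocks.pop()
            blocks.append((s1 + s2, c1 + c2))
    # expand each block back to per-observation fitted values
    fit = []
    for s, c in blocks:
        fit.extend([s / c] * c)
    return fit

print(pav([1.0, 3.0, 2.0, 4.0]))  # -> [1.0, 2.5, 2.5, 4.0]
```

Applied to binned forecast/outcome pairs sorted by forecast value, this pooling is what yields the consistent, optimally binned reliability diagrams.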
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.