Uncertainty in Machine Learning
- URL: http://arxiv.org/abs/2510.06007v1
- Date: Tue, 07 Oct 2025 15:07:27 GMT
- Title: Uncertainty in Machine Learning
- Authors: Hans Weytjens, Wouter Verbeke
- Abstract summary: This book chapter introduces the principles and practical applications of uncertainty quantification in machine learning. It explains how to identify and distinguish between different types of uncertainty and presents methods for quantifying uncertainty in predictive models.
- Score: 3.3087439644066876
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This book chapter introduces the principles and practical applications of uncertainty quantification in machine learning. It explains how to identify and distinguish between different types of uncertainty and presents methods for quantifying uncertainty in predictive models, including linear regression, random forests, and neural networks. The chapter also covers conformal prediction as a framework for generating predictions with predefined confidence intervals. Finally, it explores how uncertainty estimation can be leveraged to improve business decision-making, enhance model reliability, and support risk-aware strategies.
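To make the chapter's central tool concrete, here is a minimal sketch of split conformal prediction wrapped around a random forest, assuming scikit-learn and synthetic data (both are illustrative choices, not prescribed by the chapter):

```python
# A minimal sketch of split conformal prediction, assuming scikit-learn and
# synthetic data; the random forest is an illustrative choice of base model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.3, size=2000)   # noisy synthetic target

X_train, X_cal, y_train, y_cal = train_test_split(X, y, test_size=0.5, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Nonconformity scores: absolute residuals on the held-out calibration set.
scores = np.abs(y_cal - model.predict(X_cal))

# Finite-sample-valid quantile for a 90% prediction interval.
alpha, n = 0.1, len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

pred = model.predict(np.array([[0.5]]))[0]
print(f"90% prediction interval: [{pred - q:.3f}, {pred + q:.3f}]")
```

The marginal coverage guarantee holds for any base predictor, provided the calibration and test points are exchangeable.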
Related papers
- An Axiomatic Assessment of Entropy- and Variance-based Uncertainty Quantification in Regression [26.822418248900547]
We introduce a set of axioms to rigorously assess uncertainty measures in supervised regression. We generalize commonly used approaches for uncertainty representation and corresponding uncertainty measures. Our findings offer theoretical insights and practical guidelines for reliable uncertainty assessment.
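To illustrate the two families of measures the paper assesses, here is a minimal sketch computing variance and differential entropy for a Gaussian predictive distribution (the values are made up; the paper's axioms are not reproduced):

```python
# Variance vs. differential entropy of a Gaussian predictive distribution;
# illustrative values only.
import numpy as np

def gaussian_uncertainty(sigma):
    variance = sigma ** 2
    entropy = 0.5 * np.log(2 * np.pi * np.e * sigma ** 2)  # in nats
    return variance, entropy

for sigma in (0.5, 1.0, 2.0):
    var, ent = gaussian_uncertainty(sigma)
    print(f"sigma={sigma}: variance={var:.3f}, entropy={ent:.3f}")
```

For Gaussians the two measures are monotone transforms of each other and rank predictions identically; axiomatic distinctions matter for less well-behaved predictive distributions.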
arXiv Detail & Related papers (2025-04-25T15:44:46Z)
- Probabilistic Modeling of Disparity Uncertainty for Robust and Efficient Stereo Matching [61.73532883992135]
We propose a new uncertainty-aware stereo matching framework. We adopt Bayes risk as the measure of uncertainty and use it to separately estimate data and model uncertainty.
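One common recipe for the data/model split named above is the law of total variance over an ensemble of probabilistic predictors; a hedged sketch with hypothetical member outputs (the paper's Bayes-risk estimator itself is not reproduced):

```python
# Separating data and model uncertainty with the law of total variance over an
# ensemble of probabilistic predictors; the member outputs below are hypothetical.
import numpy as np

means = np.array([1.9, 2.1, 2.0, 2.3, 1.8])           # each member's predicted mean
variances = np.array([0.30, 0.25, 0.35, 0.28, 0.32])  # each member's predicted variance

data_uncertainty = variances.mean()    # aleatoric: expected noise
model_uncertainty = means.var()        # epistemic: disagreement between members
total = data_uncertainty + model_uncertainty

print(f"data: {data_uncertainty:.3f}, model: {model_uncertainty:.3f}, total: {total:.3f}")
```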
arXiv Detail & Related papers (2024-12-24T23:28:20Z)
- Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework [54.40508478482667]
We present a comprehensive framework to disentangle, quantify, and mitigate uncertainty in perception and plan generation. We propose methods tailored to the unique properties of perception and decision-making. We show that our uncertainty disentanglement framework reduces variability by up to 40% and enhances task success rates by 5% compared to baselines.
arXiv Detail & Related papers (2024-11-03T17:32:00Z)
- Model-agnostic variable importance for predictive uncertainty: an entropy-based approach [1.912429179274357]
We show how existing methods in explainability can be extended to uncertainty-aware models.
We demonstrate the utility of these approaches to understand both the sources of uncertainty and their impact on model performance.
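A minimal sketch of one such extension, assuming a permutation-importance protocol applied to the mean predictive entropy of a probabilistic classifier (an illustrative setup, not necessarily the paper's exact method):

```python
# Permutation importance computed on mean predictive entropy rather than on
# accuracy; the classifier and dataset are illustrative assumptions.
import numpy as np
from scipy.stats import entropy
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)

def mean_entropy(features):
    return entropy(clf.predict_proba(features), axis=1).mean()

base = mean_entropy(X)
rng = np.random.default_rng(0)
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])   # break the feature's association
    print(f"feature {j}: change in mean entropy = {mean_entropy(Xp) - base:+.4f}")
```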
arXiv Detail & Related papers (2023-10-19T15:51:23Z)
- Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty accounting for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
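A hedged sketch of inference-time sampling via Monte Carlo dropout, one common post-hoc scheme for generating multiple plausible outputs (not necessarily this paper's exact strategy):

```python
# Inference-time sampling via Monte Carlo dropout: keep dropout active at test
# time and treat repeated forward passes as plausible outputs. The untrained
# toy network is illustrative; training is omitted for brevity.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(64, 1))

x = torch.tensor([[0.5]])
model.train()                       # keeps the Dropout layer stochastic
with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(100)])

print("predictive mean:", samples.mean().item())
print("predictive std :", samples.std().item())
```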
arXiv Detail & Related papers (2023-08-03T12:43:21Z)
- Model-free generalized fiducial inference [0.0]
Conformal prediction (CP) was developed to provide finite-sample probabilistic prediction guarantees. While CP algorithms are a relatively general-purpose approach to uncertainty quantification with finite-sample guarantees, they lack versatility. This paper offers tools from imprecise probability theory to build a formal connection between CP and generalized fiducial (GF) inference.
arXiv Detail & Related papers (2023-07-24T01:58:48Z)
- Quantifying Deep Learning Model Uncertainty in Conformal Prediction [1.4685355149711297]
Conformal Prediction is a promising framework for representing model uncertainty.
In this paper, we explore state-of-the-art CP methodologies and their theoretical foundations.
arXiv Detail & Related papers (2023-06-01T16:37:50Z)
- Integrating Uncertainty into Neural Network-based Speech Enhancement [27.868722093985006]
Supervised masking approaches in the time-frequency domain employ deep neural networks to estimate a multiplicative mask that extracts clean speech.
This leads to a single estimate for each input without any guarantees or measures of reliability.
We study the benefits of modeling uncertainty in clean speech estimation.
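A hedged sketch of one way to add such a reliability measure: predict a mean and a log-variance per time-frequency bin and train with a Gaussian negative log-likelihood (shapes, layers, and data below are illustrative assumptions):

```python
# A mask estimator that outputs a mean and a log-variance per time-frequency
# bin, trained with a Gaussian negative log-likelihood so every estimate
# carries a reliability measure. Shapes, layers, and data are illustrative.
import torch
import torch.nn as nn

class UncertainMaskNet(nn.Module):
    def __init__(self, n_bins=257):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(n_bins, 512), nn.ReLU())
        self.mean_head = nn.Linear(512, n_bins)     # mask / clean-speech estimate
        self.logvar_head = nn.Linear(512, n_bins)   # per-bin uncertainty

    def forward(self, noisy_mag):
        h = self.backbone(noisy_mag)
        return self.mean_head(h), self.logvar_head(h)

def gaussian_nll(mean, logvar, target):
    # Heteroscedastic loss: bins predicted as uncertain are down-weighted.
    return (0.5 * (logvar + (target - mean) ** 2 / logvar.exp())).mean()

net = UncertainMaskNet()
noisy = torch.rand(8, 257)     # batch of noisy magnitude frames (toy data)
clean = torch.rand(8, 257)
mean, logvar = net(noisy)
loss = gaussian_nll(mean, logvar, clean)
loss.backward()                # uncertainty is learned jointly with the mask
```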
arXiv Detail & Related papers (2023-05-15T15:55:12Z)
- Calibrated Regression Against An Adversary Without Regret [10.470326550507117]
We introduce online algorithms guaranteed to achieve these goals on arbitrary streams of data points.
Specifically, our algorithms produce forecasts that are calibrated -- i.e., an 80% confidence interval contains the true outcome 80% of the time.
We implement a post-hoc recalibration strategy that provably achieves these goals in regression.
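A hedged sketch of post-hoc recalibration in this spirit, assuming a Gaussian forecaster on synthetic data (the paper's online, adversarial setting is not reproduced; this shows the batch recalibration idea only):

```python
# Batch post-hoc recalibration of an overconfident Gaussian forecaster;
# the synthetic data and model are illustrative assumptions.
import numpy as np
from scipy.stats import norm
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
mu = rng.normal(0.0, 1.0, 1000)
y = mu + rng.normal(0.0, 1.0, 1000)       # true noise std is 1.0
sigma = 0.5                                # model understates it: overconfident

z_lo, z_hi = norm.ppf(0.1), norm.ppf(0.9)
print("raw 80% coverage:", np.mean((y >= mu + z_lo * sigma) & (y <= mu + z_hi * sigma)))

# Map predicted CDF values to their empirical frequencies (uniform iff calibrated).
p_pred = norm.cdf(y, loc=mu, scale=sigma)
p_emp = (np.argsort(np.argsort(p_pred)) + 1) / len(p_pred)
recal = IsotonicRegression(out_of_bounds="clip").fit(p_pred, p_emp)

# Invert the recalibration map to find nominal levels that achieve 10% / 90%.
grid = np.linspace(0, 1, 1001)
p_lo, p_hi = np.interp([0.1, 0.9], recal.predict(grid), grid)
lo, hi = norm.ppf(p_lo, mu, sigma), norm.ppf(p_hi, mu, sigma)
print("recalibrated coverage:", np.mean((y >= lo) & (y <= hi)))  # ~0.8 on this set
```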
arXiv Detail & Related papers (2023-02-23T17:42:11Z)
- The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
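For context, a minimal sketch of how Deep Evidential Regression converts its four predicted Normal-Inverse-Gamma parameters into point, aleatoric, and epistemic estimates (the parameter values below are hypothetical, not trained outputs):

```python
# Converting Deep Evidential Regression's Normal-Inverse-Gamma outputs
# (gamma, nu, alpha, beta) into estimates; the values are hypothetical.
gamma, nu, alpha, beta = 0.8, 2.0, 1.5, 0.7

prediction = gamma                      # E[mu]
aleatoric = beta / (alpha - 1)          # E[sigma^2], requires alpha > 1
epistemic = beta / (nu * (alpha - 1))   # Var[mu]

print(f"pred={prediction}, aleatoric={aleatoric:.3f}, epistemic={epistemic:.3f}")
```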
arXiv Detail & Related papers (2022-05-20T10:10:32Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative model-based methods, and explain their pros and cons when using them in fully-, semi-, and weakly-supervised frameworks.
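A minimal sketch of the ensemble-based option: train several independently seeded models and read their disagreement as an epistemic uncertainty signal (models and data are illustrative choices):

```python
# Ensemble-based uncertainty: several independently seeded models; their
# disagreement serves as an epistemic uncertainty signal.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.2, size=500)

ensemble = [MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=s)
            .fit(X, y) for s in range(5)]

X_test = np.array([[0.0], [5.0]])    # in-distribution vs. far from training data
preds = np.stack([m.predict(X_test) for m in ensemble])
print("mean:", preds.mean(axis=0))
print("std :", preds.std(axis=0))    # expect larger disagreement at x = 5.0
```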
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
Epistemic uncertainty is the part of out-of-sample prediction error that is due to the learner's lack of knowledge.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty.
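A hedged sketch of the recipe as summarized: fit a secondary model to predict the main model's squared error, then subtract an aleatoric estimate (the known noise variance and model choices below are illustrative assumptions):

```python
# DEUP-style sketch: a secondary model predicts the main model's squared error;
# subtracting an aleatoric estimate leaves the epistemic part.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(1000, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.2, size=1000)

main = GradientBoostingRegressor(random_state=0).fit(X[:500], y[:500])

# Error predictor: learns where the main model generalizes poorly.
sq_residuals = (y[500:] - main.predict(X[500:])) ** 2
error_model = GradientBoostingRegressor(random_state=0).fit(X[500:], sq_residuals)

aleatoric = 0.2 ** 2                  # noise variance, assumed known for the sketch
epistemic = np.maximum(error_model.predict(X[:5]) - aleatoric, 0.0)
print("epistemic estimates:", epistemic)
```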
arXiv Detail & Related papers (2021-02-16T23:50:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.