A framework for benchmarking uncertainty in deep regression
- URL: http://arxiv.org/abs/2109.09048v1
- Date: Fri, 10 Sep 2021 13:22:28 GMT
- Title: A framework for benchmarking uncertainty in deep regression
- Authors: Franko Schmähling, Jörg Martin, Clemens Elster
- Abstract summary: We propose a framework for the assessment of uncertainty quantification in deep regression.
Results of an uncertainty quantification for deep regression are compared against those obtained by a statistical reference method.
- Score: 0.618778092044887
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a framework for the assessment of uncertainty quantification in
deep regression. The framework is based on regression problems where the
regression function is a linear combination of nonlinear functions. Basically,
any level of complexity can be realized through the choice of the nonlinear
functions and the dimensionality of their domain. Results of an uncertainty
quantification for deep regression are compared against those obtained by a
statistical reference method. The reference method utilizes knowledge of the
underlying nonlinear functions and is based on a Bayesian linear regression
using a reference prior. Reliability of uncertainty quantification is assessed
in terms of coverage probabilities, and accuracy through the size of calculated
uncertainties. We illustrate the proposed framework by applying it to current
approaches for uncertainty quantification in deep regression. The flexibility,
together with the availability of a reference solution, makes the framework
suitable for defining benchmark sets for uncertainty quantification.
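As a concrete illustration of the framework, the following Python sketch (the basis functions, coefficients, noise level, and the flat prior with known noise standing in for the paper's reference prior are all illustrative assumptions) generates data from a linear combination of nonlinear functions, computes the Bayesian linear regression reference solution, and evaluates the coverage probability of its credible intervals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Known nonlinear basis functions; the regression function is a linear
# combination of these (the particular choices are illustrative assumptions).
def basis(x):
    return np.column_stack([np.sin(3 * x), np.exp(-x**2), x**3])

theta_true = np.array([1.5, -2.0, 0.7])   # ground-truth coefficients
sigma = 0.3                                # observation noise (assumed known)

# Benchmark data set: y = sum_j theta_j * f_j(x) + noise.
x = rng.uniform(-2, 2, size=200)
Phi = basis(x)
y = Phi @ theta_true + rng.normal(0, sigma, size=x.size)

# Reference method: Bayesian linear regression in the known basis.
# With a flat prior on theta and known sigma (a simplification of the
# reference prior used in the paper) the posterior is Gaussian:
#   theta | y ~ N(theta_hat, sigma^2 (Phi^T Phi)^{-1}).
A = Phi.T @ Phi
theta_hat = np.linalg.solve(A, Phi.T @ y)
cov_theta = sigma**2 * np.linalg.inv(A)

# Pointwise predictive mean and standard uncertainty of the regression function.
x_test = np.linspace(-2, 2, 500)
Phi_test = basis(x_test)
mean_pred = Phi_test @ theta_hat
std_pred = np.sqrt(np.einsum("ij,jk,ik->i", Phi_test, cov_theta, Phi_test))

# Reliability: coverage probability of nominal 95% credible intervals,
# i.e. the fraction of test points whose interval contains the true function.
f_true = Phi_test @ theta_true
covered = np.abs(f_true - mean_pred) <= 1.96 * std_pred
print(f"empirical coverage of 95% intervals: {covered.mean():.3f}")
```

A deep-regression uncertainty method would be benchmarked in the same way: its predicted means and uncertainties take the place of `mean_pred` and `std_pred`, and the resulting coverage and average interval size are compared with this reference solution.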
Related papers
- An Axiomatic Assessment of Entropy- and Variance-based Uncertainty Quantification in Regression [26.822418248900547]
We introduce a set of axioms to rigorously assess measures of aleatoric, epistemic, and total uncertainty in supervised regression.
We analyze the widely used entropy- and variance-based measures regarding limitations and challenges.
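For orientation, the variance-based measures referred to here are usually defined via the law of total variance; a generic sketch (the standard decomposition being analyzed, not the paper's axioms, and the notation is an assumption) reads:

```latex
\[
\underbrace{\operatorname{Var}(Y \mid x, \mathcal{D})}_{\text{total}}
= \underbrace{\mathbb{E}_{\theta \mid \mathcal{D}}\!\bigl[\operatorname{Var}(Y \mid x, \theta)\bigr]}_{\text{aleatoric}}
+ \underbrace{\operatorname{Var}_{\theta \mid \mathcal{D}}\!\bigl(\mathbb{E}[Y \mid x, \theta]\bigr)}_{\text{epistemic}}
\]
```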
arXiv Detail & Related papers (2025-04-25T15:44:46Z)
- RieszBoost: Gradient Boosting for Riesz Regression [49.737777802061984]
We propose a novel gradient boosting algorithm to directly estimate the Riesz representer without requiring its explicit analytical form.
We show that our algorithm performs on par with or better than indirect estimation techniques across a range of functionals.
arXiv Detail & Related papers (2025-01-08T23:04:32Z)
- Uncertainty separation via ensemble quantile regression [23.667247644930708]
This paper introduces a novel and scalable framework for uncertainty estimation and separation.
Our framework is scalable to large datasets and demonstrates superior performance on synthetic benchmarks.
arXiv Detail & Related papers (2024-12-18T11:15:32Z)
- A variational Bayes approach to debiased inference for low-dimensional parameters in high-dimensional linear regression [2.7498981662768536]
We propose a scalable variational Bayes method for statistical inference in sparse linear regression.
Our approach relies on assigning a mean-field approximation to the nuisance coordinates.
This requires only a preprocessing step and preserves the computational advantages of mean-field variational Bayes.
arXiv Detail & Related papers (2024-06-18T14:27:44Z)
- Beyond the Norms: Detecting Prediction Errors in Regression Models [26.178065248948773]
This paper tackles the challenge of detecting unreliable behavior in regression algorithms.
We introduce the notion of unreliability in regression, i.e., when the output of the regressor exceeds a specified discrepancy (or error).
We show empirical improvements in error detection for multiple regression tasks, consistently outperforming popular baseline approaches.
arXiv Detail & Related papers (2024-06-11T05:51:44Z)
- Robust Regression over Averaged Uncertainty [7.4489490661717355]
We show that this averaged-uncertainty formulation recovers ridge regression exactly, establishing the missing link between robust optimization and mean-squared-error approaches to existing regression problems.
We provide exact closed-form analytical solutions in some cases for the equivalent regularization strength under uncertainty sets induced by the $\ell_p$ norm, Schatten $p$-norm, and general polytopes.
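One standard way to see a link of this kind (a sketch under the simplifying assumption of zero-mean design perturbations with $\mathbb{E}[\Delta^\top \Delta] = \lambda I$; not necessarily the paper's exact formulation) is that averaging the squared error over the uncertainty yields an exact ridge objective:

```latex
\[
\mathbb{E}_{\Delta}\!\bigl[\lVert y - (X + \Delta)\beta \rVert_2^2\bigr]
= \lVert y - X\beta \rVert_2^2 + \beta^\top \mathbb{E}[\Delta^\top \Delta]\,\beta
= \lVert y - X\beta \rVert_2^2 + \lambda \lVert \beta \rVert_2^2 .
\]
```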
arXiv Detail & Related papers (2023-11-12T20:57:30Z)
- Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
arXiv Detail & Related papers (2023-11-08T00:10:21Z)
- Selective Nonparametric Regression via Testing [54.20569354303575]
We develop an abstention procedure via testing the hypothesis on the value of the conditional variance at a given point.
Unlike existing methods, the proposed one accounts not only for the value of the variance itself but also for the uncertainty of the corresponding variance predictor.
arXiv Detail & Related papers (2023-09-28T13:04:11Z)
- Beta quantile regression for robust estimation of uncertainty in the presence of outliers [1.6377726761463862]
Quantile regression can be used to estimate aleatoric uncertainty in deep neural networks.
We propose a robust solution for quantile regression that incorporates concepts from robust divergence.
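For reference, the standard (non-robust) objective that quantile regression minimizes is the pinball loss; a minimal sketch follows (this is the conventional loss, not the robust divergence-based variant proposed in the paper):

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Quantile (pinball) loss for quantile level tau in (0, 1).

    Minimizing this loss drives y_pred toward the tau-quantile of the
    conditional distribution of y_true; pairs such as tau = 0.05 and 0.95
    give aleatoric uncertainty intervals around a network's predictions.
    """
    diff = y_true - y_pred
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))
```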
arXiv Detail & Related papers (2023-09-14T01:18:57Z)
- The Implicit Delta Method [61.36121543728134]
In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss of uncertainty.
We show that the change in the evaluation due to regularization is consistent for the variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference.
arXiv Detail & Related papers (2022-11-11T19:34:17Z)
- Benign overfitting and adaptive nonparametric regression [71.70323672531606]
We construct an estimator which is a continuous function interpolating the data points with high probability.
We attain minimax optimal rates under mean squared risk on the scale of Hölder classes adaptively to the unknown smoothness.
arXiv Detail & Related papers (2022-06-27T14:50:14Z)
- Learning Probabilistic Ordinal Embeddings for Uncertainty-Aware Regression [91.3373131262391]
Uncertainty is the only certainty there is.
Traditionally, the direct regression formulation is considered and the uncertainty is modeled by modifying the output space to a certain family of probabilistic distributions.
How to model the uncertainty within the present-day technologies for regression remains an open issue.
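A common instance of "modifying the output space to a probabilistic distribution" is a network that predicts a Gaussian mean and variance per input and is trained with the Gaussian negative log-likelihood; the sketch below shows that conventional baseline (an assumption for illustration, not this paper's ordinal-embedding approach):

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    """Heteroscedastic Gaussian negative log-likelihood (up to a constant).

    A regression network with two outputs per input -- a mean mu and a
    log-variance log_var -- trained with this loss represents each prediction
    as a Gaussian, so exp(log_var) expresses the aleatoric uncertainty.
    """
    return np.mean(0.5 * (log_var + (y - mu) ** 2 / np.exp(log_var)))
```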
arXiv Detail & Related papers (2021-03-25T06:56:09Z)
- Confounding-Robust Policy Evaluation in Infinite-Horizon Reinforcement Learning [70.01650994156797]
Off-policy evaluation of sequential decision policies from observational data is necessary in batch reinforcement learning applications such as education and healthcare.
We develop an approach that estimates bounds on the value of a given policy.
We prove convergence to the sharp bounds as we collect more confounded data.
arXiv Detail & Related papers (2020-02-11T16:18:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.