Beta quantile regression for robust estimation of uncertainty in the
presence of outliers
- URL: http://arxiv.org/abs/2309.07374v1
- Date: Thu, 14 Sep 2023 01:18:57 GMT
- Title: Beta quantile regression for robust estimation of uncertainty in the
presence of outliers
- Authors: Haleh Akrami, Omar Zamzam, Anand Joshi, Sergul Aydore, Richard Leahy
- Abstract summary: Quantile Regression can be used to estimate aleatoric uncertainty in deep neural networks.
We propose a robust solution for quantile regression that incorporates concepts from robust divergence.
- Score: 1.6377726761463862
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantile Regression (QR) can be used to estimate aleatoric uncertainty in
deep neural networks and can generate prediction intervals. Quantifying
uncertainty is particularly important in critical applications such as clinical
diagnosis, where a realistic assessment of uncertainty is essential in
determining disease status and planning the appropriate treatment. The most
common application of quantile regression models is in cases where the
parametric likelihood cannot be specified. Although quantile regression is
quite robust to outlier response observations, it can be sensitive to outlier
covariate observations (features). Outlier features can compromise the
performance of deep learning regression problems such as style translation,
image reconstruction, and deep anomaly detection, potentially leading to
misleading conclusions. To address this problem, we propose a robust solution
for quantile regression that incorporates concepts from robust divergence. We
compare the performance of our proposed method with (i) least trimmed quantile
regression and (ii) robust regression based on the regularization of
case-specific parameters on a simple real dataset in the presence of outliers.
These methods have not been applied in a deep learning framework. We also
demonstrate the applicability of the proposed method by applying it to a
medical imaging translation task using diffusion models.
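The abstract does not spell out the loss function, but the general recipe behind robust-divergence losses can be sketched: start from the standard pinball (quantile) loss and down-weight high-loss samples with a density-power-style factor so that outliers contribute less to the objective. The function names and the exponential weighting below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def pinball_loss(y, q, tau):
    """Standard quantile (pinball) loss for target quantile level tau."""
    diff = y - q
    return np.maximum(tau * diff, (tau - 1.0) * diff)

def beta_weighted_pinball(y, q, tau, beta=0.5):
    """Illustrative robust variant (assumption, not the paper's exact
    loss): down-weight samples with a large pinball loss via an
    exponential, density-power-style factor, so gross outliers
    contribute little to the objective."""
    loss = pinball_loss(y, q, tau)
    return np.exp(-beta * loss) * loss

# Toy data: four inliers near 1.0 and one gross outlier.
y = np.array([1.0, 1.2, 0.9, 1.1, 10.0])
plain = pinball_loss(y, q=1.0, tau=0.5).mean()
robust = beta_weighted_pinball(y, q=1.0, tau=0.5).mean()
# The outlier dominates `plain` but is strongly down-weighted in `robust`.
```

In a deep network, the same weighted loss would simply replace the pinball term in the training objective, which is what makes this family of robustifications easy to drop into existing quantile-regression heads.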
Related papers
- Beyond the Norms: Detecting Prediction Errors in Regression Models [26.178065248948773]
This paper tackles the challenge of detecting unreliable behavior in regression algorithms.
We introduce the notion of unreliability in regression: the case where the output of the regressor exceeds a specified discrepancy (or error).
We show empirical improvements in error detection for multiple regression tasks, consistently outperforming popular baseline approaches.
arXiv Detail & Related papers (2024-06-11T05:51:44Z)
- Relaxed Quantile Regression: Prediction Intervals for Asymmetric Noise [51.87307904567702]
Quantile regression is a leading approach for obtaining such intervals via the empirical estimation of quantiles in the distribution of outputs.
We propose Relaxed Quantile Regression (RQR), a direct alternative to quantile regression based interval construction that removes this arbitrary constraint.
We demonstrate that this added flexibility results in intervals with an improvement in desirable qualities.
arXiv Detail & Related papers (2024-06-05T13:36:38Z)
- Selective Nonparametric Regression via Testing [54.20569354303575]
We develop an abstention procedure via testing the hypothesis on the value of the conditional variance at a given point.
Unlike existing methods, the proposed one accounts not only for the value of the variance itself but also for the uncertainty of the corresponding variance predictor.
arXiv Detail & Related papers (2023-09-28T13:04:11Z)
- Quantifying predictive uncertainty of aphasia severity in stroke patients with sparse heteroscedastic Bayesian high-dimensional regression [47.1405366895538]
Sparse linear regression methods for high-dimensional data commonly assume that residuals have constant variance, which can be violated in practice.
This paper proposes estimating high-dimensional heteroscedastic linear regression models using a heteroscedastic partitioned empirical Bayes Expectation Conditional Maximization algorithm.
arXiv Detail & Related papers (2023-09-15T22:06:29Z)
- Likelihood Annealing: Fast Calibrated Uncertainty for Regression [39.382916579076344]
This work presents a fast calibrated uncertainty estimation method for regression tasks called Likelihood Annealing.
Unlike previous methods for calibrated uncertainty in regression that focus only on low-dimensional regression problems, our method works well on a broad spectrum of regression problems, including high-dimensional regression.
arXiv Detail & Related papers (2023-02-21T21:24:35Z)
- Deep Quantile Regression for Uncertainty Estimation in Unsupervised and Supervised Lesion Detection [0.0]
Uncertainty is important in critical applications such as anomaly or lesion detection and clinical diagnosis.
In this work, we focus on using quantile regression to estimate aleatoric uncertainty and use it for estimating uncertainty in both supervised and unsupervised lesion detection problems.
We show how quantile regression can be used to characterize expert disagreement in the location of lesion boundaries.
arXiv Detail & Related papers (2021-09-20T08:50:21Z)
- Recalibration of Aleatoric and Epistemic Regression Uncertainty in Medical Imaging [2.126171264016785]
Well-calibrated uncertainty in regression allows robust rejection of unreliable predictions or detection of out-of-distribution samples.
$\sigma$ scaling is able to reliably recalibrate predictive uncertainty.
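Sigma scaling rescales a network's predicted standard deviations by a single scalar fit on held-out data. Under a Gaussian predictive model the optimal scalar has a closed form obtained by minimizing the negative log-likelihood; the sketch below follows that common recipe and is an assumption about the method, not a transcription of the paper.

```python
import numpy as np

def fit_sigma_scale(y, mu, sigma):
    """Closed-form sigma scaling under a Gaussian predictive model:
    the scalar s minimizing the NLL of y ~ N(mu, (s * sigma)^2) is
    s = sqrt(mean(((y - mu) / sigma)^2)). Fit s on a held-out set,
    then report s * sigma as the recalibrated uncertainty."""
    z = (y - mu) / sigma
    return np.sqrt(np.mean(z ** 2))

# If predicted sigmas are systematically half the true spread,
# the fitted scale comes out as 2.
mu = np.zeros(4)
sigma = np.ones(4)
y = np.array([2.0, -2.0, 2.0, -2.0])
s = fit_sigma_scale(y, mu, sigma)  # s == 2.0
```

Because only one scalar is fit, the method cannot overfit the calibration set in the way per-sample recalibrators can, which is part of why it recalibrates reliably.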
arXiv Detail & Related papers (2021-04-26T07:18:58Z)
- Learning Probabilistic Ordinal Embeddings for Uncertainty-Aware Regression [91.3373131262391]
Uncertainty is the only certainty there is.
Traditionally, the direct regression formulation is considered and the uncertainty is modeled by modifying the output space to a certain family of probabilistic distributions.
How to model the uncertainty within the present-day technologies for regression remains an open issue.
arXiv Detail & Related papers (2021-03-25T06:56:09Z)
- Flexible Model Aggregation for Quantile Regression [92.63075261170302]
Quantile regression is a fundamental problem in statistical learning motivated by a need to quantify uncertainty in predictions.
We investigate methods for aggregating any number of conditional quantile models.
All of the models we consider in this paper can be fit using modern deep learning toolkits.
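As a hedged sketch of the simplest such aggregation (plain averaging with a monotonicity fix, not any of the paper's specific schemes): average the member models' quantile predictions, then sort along the quantile axis so the aggregate cannot exhibit quantile crossing.

```python
import numpy as np

def aggregate_quantile_models(preds):
    """Average predictions from several conditional quantile models,
    then sort along the quantile axis to enforce monotonicity (avoids
    quantile crossing). preds has shape (n_models, n_quantiles).
    Simple mean aggregation, for illustration only."""
    return np.sort(np.mean(preds, axis=0))

# Two models predicting the (0.1, 0.5, 0.9) quantiles; the raw average
# [0.4, 0.2, 0.8] crosses at the median, and sorting repairs it.
preds = np.array([[0.5, 0.2, 0.9],
                  [0.3, 0.2, 0.7]])
agg = aggregate_quantile_models(preds)  # ~ [0.2, 0.4, 0.8]
```

The sorting step is a standard post-hoc rearrangement trick; richer aggregators would learn the combination weights instead of averaging.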
arXiv Detail & Related papers (2021-02-26T23:21:16Z)
- Censored Quantile Regression Forest [81.9098291337097]
We develop a new estimating equation that adapts to censoring and leads to the quantile score whenever the data do not exhibit censoring.
The proposed procedure, named censored quantile regression forest, allows us to estimate quantiles of time-to-event data without any parametric modeling assumption.
arXiv Detail & Related papers (2020-01-08T23:20:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.