Model Agnostic Explainable Selective Regression via Uncertainty
Estimation
- URL: http://arxiv.org/abs/2311.09145v1
- Date: Wed, 15 Nov 2023 17:40:48 GMT
- Title: Model Agnostic Explainable Selective Regression via Uncertainty
Estimation
- Authors: Andrea Pugnana, Carlos Mougan, Dan Saattrup Nielsen
- Abstract summary: This paper presents a novel approach to selective regression that utilizes model-agnostic non-parametric uncertainty estimation.
Our proposed framework showcases superior performance compared to state-of-the-art selective regressors.
We implement our selective regression method in the open-source Python package doubt and release the code used to reproduce our experiments.
- Score: 15.331332191290727
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the wide adoption of machine learning techniques, requirements have
evolved beyond sheer high performance, often requiring models to be
trustworthy. A common approach to increase the trustworthiness of such systems
is to allow them to refrain from predicting. Such a framework is known as
selective prediction. While selective prediction for classification tasks has
been widely analyzed, the problem of selective regression is understudied. This
paper presents a novel approach to selective regression that utilizes
model-agnostic non-parametric uncertainty estimation. Our proposed framework
showcases superior performance compared to state-of-the-art selective
regressors, as demonstrated through comprehensive benchmarking on 69 datasets.
Finally, we use explainable AI techniques to gain an understanding of the
drivers behind selective regression. We implement our selective regression
method in the open-source Python package doubt and release the code used to
reproduce our experiments.
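The core idea of the abstract, estimating uncertainty non-parametrically and abstaining where it is high, can be sketched with plain scikit-learn. This is an illustrative sketch using a bootstrap ensemble, not the actual implementation in the doubt package; all names and thresholds here are chosen for the example.

```python
# Selective regression sketch: bootstrap-ensemble uncertainty + abstention.
# Illustrative only; not the doubt package's API.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.2, size=500)

# Fit an ensemble of regressors on bootstrap resamples of the training data.
models = []
for _ in range(50):
    idx = rng.integers(0, len(X), size=len(X))
    models.append(LinearRegression().fit(X[idx], y[idx]))

X_test = rng.uniform(-3, 3, size=(100, 1))
preds = np.stack([m.predict(X_test) for m in models])  # shape (50, 100)
mean = preds.mean(axis=0)           # point prediction
uncertainty = preds.std(axis=0)     # non-parametric uncertainty estimate

# Abstain on the 20% of test points with the highest uncertainty.
coverage = 0.8
threshold = np.quantile(uncertainty, coverage)
accept = uncertainty <= threshold
print(f"predicting on {accept.mean():.0%} of points")
```

The selection rule is the standard one in selective prediction: rank points by an uncertainty score and reject those above a threshold chosen to hit a target coverage.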
Related papers
- Beyond the Norms: Detecting Prediction Errors in Regression Models [26.178065248948773]
This paper tackles the challenge of detecting unreliable behavior in regression algorithms.
We introduce the notion of unreliability in regression, which arises when the regressor's error exceeds a specified discrepancy level.
We show empirical improvements in error detection for multiple regression tasks, consistently outperforming popular baseline approaches.
arXiv Detail & Related papers (2024-06-11T05:51:44Z) - Conformalized Selective Regression [2.3964255330849356]
We propose a novel approach to selective regression by leveraging conformal prediction.
We show how our proposed approach, conformalized selective regression, demonstrates an advantage over multiple state-of-the-art baselines.
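The conformal prediction machinery this entry builds on can be sketched as split conformal regression: a held-out calibration set turns any regressor's residuals into intervals with finite-sample marginal coverage. This is an illustrative sketch of the underlying building block, not the paper's selective procedure.

```python
# Split conformal prediction for regression (illustrative sketch).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 1.0, size=1000)

# Split into a proper training set and a calibration set.
X_tr, y_tr = X[:600], y[:600]
X_cal, y_cal = X[600:], y[600:]
model = LinearRegression().fit(X_tr, y_tr)

# Conformity scores: absolute residuals on the calibration set.
scores = np.abs(y_cal - model.predict(X_cal))
alpha = 0.1
n = len(scores)
# Finite-sample-corrected quantile gives >= 1 - alpha marginal coverage.
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n)

X_new = rng.normal(size=(200, 3))
pred = model.predict(X_new)
lower, upper = pred - q, pred + q   # conformal prediction intervals
```

In the selective setting, interval width is a natural selection score: wide intervals signal points where the model should abstain.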
arXiv Detail & Related papers (2024-02-26T04:43:50Z) - Uncertainty-aware Language Modeling for Selective Question Answering [107.47864420630923]
We present an automatic large language model (LLM) conversion approach that produces uncertainty-aware LLMs.
Our approach is model- and data-agnostic, is computationally-efficient, and does not rely on external models or systems.
arXiv Detail & Related papers (2023-11-26T22:47:54Z) - Leveraging Uncertainty Estimates To Improve Classifier Performance [4.4951754159063295]
Binary classification involves predicting the label of an instance based on whether the model score for the positive class exceeds a threshold chosen according to the application requirements.
However, model scores are often not aligned with the true positivity rate.
This is especially true when the training involves a differential sampling across classes or there is distributional drift between train and test settings.
arXiv Detail & Related papers (2023-11-20T12:40:25Z) - Selective Nonparametric Regression via Testing [54.20569354303575]
We develop an abstention procedure by testing a hypothesis on the value of the conditional variance at a given point.
Unlike existing methods, the proposed approach accounts not only for the value of the variance itself but also for the uncertainty of the corresponding variance predictor.
arXiv Detail & Related papers (2023-09-28T13:04:11Z) - ResMem: Learn what you can and memorize the rest [79.19649788662511]
We propose the residual-memorization (ResMem) algorithm to augment an existing prediction model.
By construction, ResMem can explicitly memorize the training labels.
We show that ResMem consistently improves the test set generalization of the original prediction model.
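The residual-memorization idea can be sketched with a simple two-stage regressor: fit a base model, then memorize its training residuals with a nearest-neighbour component and add them back at prediction time. This is a rough illustration of the concept; the paper's actual construction (soft-label kNN for classification) differs in its details.

```python
# ResMem-style sketch: base model + kNN memorization of training residuals.
# Illustrative only; details differ from the paper's construction.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=400)  # nonlinear target

base = LinearRegression().fit(X, y)       # underfits the sine wave
residuals = y - base.predict(X)

# Memorize what the base model could not learn.
memory = KNeighborsRegressor(n_neighbors=3).fit(X, residuals)

def resmem_predict(X_new):
    return base.predict(X_new) + memory.predict(X_new)

train_mse_base = np.mean((y - base.predict(X)) ** 2)
train_mse_resmem = np.mean((y - resmem_predict(X)) ** 2)
```

By construction the combined predictor can drive training error far below that of the base model, which is the "memorize the rest" part of the title.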
arXiv Detail & Related papers (2023-02-03T07:12:55Z) - Gumbel-Softmax Selective Networks [10.074545631396385]
This paper presents a general method for training selective networks that enables selection within an end-to-end differentiable training framework.
Experiments on public datasets demonstrate the potential of Gumbel-softmax selective networks for selective regression and classification.
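The differentiable selection mechanism named in the title rests on the Gumbel-softmax trick: a relaxation of categorical sampling that admits gradients. A minimal NumPy sketch of the sampling step (the network and training loop are omitted, and all names here are illustrative):

```python
# Gumbel-softmax sampling sketch: a differentiable relaxation of drawing
# from a categorical "predict vs. abstain" gate. Illustrative only.
import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=None):
    """Draw a soft one-hot sample; lower tau -> closer to discrete."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(1e-9, 1.0, size=logits.shape)
    gumbel = -np.log(-np.log(u))        # Gumbel(0, 1) noise
    z = (logits + gumbel) / tau
    e = np.exp(z - z.max())             # numerically stable softmax
    return e / e.sum()

# Two logits: index 0 = predict, index 1 = abstain.
sample = gumbel_softmax(np.array([2.0, 0.0]), tau=0.5,
                        rng=np.random.default_rng(3))
```

Because the sample is a smooth function of the logits, the selection gate can be trained end-to-end with the rest of the network.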
arXiv Detail & Related papers (2022-11-19T02:20:14Z) - Rethinking Missing Data: Aleatoric Uncertainty-Aware Recommendation [59.500347564280204]
We propose a new Aleatoric Uncertainty-aware Recommendation (AUR) framework.
AUR consists of a new uncertainty estimator along with a normal recommender model.
Since the chance of mislabeling reflects the potential of a user-item pair, AUR makes recommendations according to the estimated uncertainty.
arXiv Detail & Related papers (2022-09-22T04:32:51Z) - Learning Probabilistic Ordinal Embeddings for Uncertainty-Aware
Regression [91.3373131262391]
Uncertainty is the only certainty there is.
Traditionally, the direct regression formulation is considered and the uncertainty is modeled by modifying the output space to a certain family of probabilistic distributions.
How to model the uncertainty within the present-day technologies for regression remains an open issue.
arXiv Detail & Related papers (2021-03-25T06:56:09Z) - Ridge Regression Revisited: Debiasing, Thresholding and Bootstrap [4.142720557665472]
Ridge regression may be worth another look since -- after debiasing and thresholding -- it may offer some advantages over the Lasso.
In this paper, we define a debiased and thresholded ridge regression method, and prove a consistency result and a Gaussian approximation theorem.
In addition to estimation, we consider the problem of prediction, and present a novel, hybrid bootstrap algorithm tailored for prediction intervals.
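The prediction-interval part of this entry can be sketched with a plain residual bootstrap around a ridge fit. This is an illustrative simplification: the paper's hybrid bootstrap, together with its debiasing and thresholding steps, is considerably more involved.

```python
# Residual-bootstrap prediction interval around a ridge fit.
# Illustrative sketch; not the paper's hybrid bootstrap algorithm.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 5))
beta = np.array([2.0, 0.0, -1.0, 0.0, 0.5])   # sparse ground truth
y = X @ beta + rng.normal(0, 1.0, size=300)

model = Ridge(alpha=1.0).fit(X, y)
residuals = y - model.predict(X)

# Resample the residuals to form a 90% prediction interval at a new point.
x_new = rng.normal(size=(1, 5))
point = model.predict(x_new)[0]
draws = point + rng.choice(residuals, size=2000, replace=True)
lower, upper = np.quantile(draws, [0.05, 0.95])
```

The interval width here is driven entirely by the empirical residual distribution, which is what makes the approach attractive when no parametric noise model is assumed.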
arXiv Detail & Related papers (2020-09-17T05:04:10Z) - Censored Quantile Regression Forest [81.9098291337097]
We develop a new estimating equation that adapts to censoring and leads to the quantile score whenever the data do not exhibit censoring.
The proposed procedure, named censored quantile regression forest, allows us to estimate quantiles of time-to-event data without any parametric modeling assumption.
arXiv Detail & Related papers (2020-01-08T23:20:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.