Detecting and Mitigating Test-time Failure Risks via Model-agnostic Uncertainty Learning
- URL: http://arxiv.org/abs/2109.04432v1
- Date: Thu, 9 Sep 2021 17:23:31 GMT
- Title: Detecting and Mitigating Test-time Failure Risks via Model-agnostic Uncertainty Learning
- Authors: Preethi Lahoti, Krishna P. Gummadi, and Gerhard Weikum
- Abstract summary: This paper introduces Risk Advisor, a novel post-hoc meta-learner for estimating failure risks and predictive uncertainties of any already-trained black-box classification model.
In addition to providing a risk score, the Risk Advisor decomposes the uncertainty estimates into aleatoric and epistemic uncertainty components.
Experiments on various families of black-box classification models and on real-world and synthetic datasets show that the Risk Advisor reliably predicts deployment-time failure risks.
- Score: 30.86992077157326
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reliably predicting potential failure risks of machine learning (ML) systems
when deployed with production data is a crucial aspect of trustworthy AI. This
paper introduces Risk Advisor, a novel post-hoc meta-learner for estimating
failure risks and predictive uncertainties of any already-trained black-box
classification model. In addition to providing a risk score, the Risk Advisor
decomposes the uncertainty estimates into aleatoric and epistemic uncertainty
components, thus giving informative insights into the sources of uncertainty
inducing the failures. Consequently, Risk Advisor can distinguish between
failures caused by data variability, data shifts and model limitations and
advise on mitigation actions (e.g., collecting more data to counter data
shift). Extensive experiments on various families of black-box classification
models and on real-world and synthetic datasets covering common ML failure
scenarios show that the Risk Advisor reliably predicts deployment-time failure
risks in all the scenarios, and outperforms strong baselines.
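The post-hoc scheme described above can be sketched with off-the-shelf tools. The following is an illustrative sketch, not the paper's exact recipe: the base model, the bootstrap ensemble of gradient-boosted meta-learners, and all hyperparameters are placeholder choices; only the overall idea (meta-target = "did the black-box model err?", entropy-based aleatoric/epistemic split of the meta-ensemble's predictions) follows the abstract.

```python
# Sketch of a model-agnostic post-hoc meta-learner for failure risk
# (illustrative; assumes scikit-learn-style estimators, not the paper's exact setup).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_meta, y_train, y_meta = train_test_split(X, y, test_size=0.5, random_state=0)

# 1) The already-trained "black-box" base model (any classifier would do).
base = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# 2) Meta-target: did the base model misclassify this held-out point?
z = (base.predict(X_meta) != y_meta).astype(int)

# 3) Ensemble of meta-learners fit on bootstrap resamples of (X_meta, z).
meta_probs = []
for seed in range(10):
    idx = rng.integers(0, len(X_meta), len(X_meta))
    m = GradientBoostingClassifier(random_state=seed).fit(X_meta[idx], z[idx])
    meta_probs.append(m.predict_proba(X_meta)[:, 1])
P = np.stack(meta_probs)  # shape: (ensemble_size, n_points)

def entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

risk_score = P.mean(axis=0)          # per-point failure-risk estimate
total = entropy(P.mean(axis=0))      # total predictive uncertainty
aleatoric = entropy(P).mean(axis=0)  # irreducible data noise (mean entropy)
epistemic = total - aleatoric        # model uncertainty (mutual information)
```

High epistemic uncertainty flags points where collecting more data could help (e.g., under data shift), while high aleatoric uncertainty flags inherent class overlap that more data will not remove.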
Related papers
- Enhancing Data Quality through Self-learning on Imbalanced Financial Risk Data [11.910955398918444]
This study investigates data pre-processing techniques to enhance existing financial risk datasets.
We introduce TriEnhance, a straightforward technique that entails: (1) generating synthetic samples specifically tailored to the minority class, (2) filtering using binary feedback to refine samples, and (3) self-learning with pseudo-labels.
Our experiments reveal the efficacy of TriEnhance, with a notable focus on improving minority class calibration, a key factor for developing more robust financial risk prediction systems.
arXiv Detail & Related papers (2024-09-15T16:59:15Z)
- Data-Adaptive Tradeoffs among Multiple Risks in Distribution-Free Prediction [55.77015419028725]
We develop methods that permit valid control of risk when threshold and tradeoff parameters are chosen adaptively.
Our methodology supports monotone and nearly-monotone risks, but otherwise makes no distributional assumptions.
arXiv Detail & Related papers (2024-03-28T17:28:06Z)
- Predictive Uncertainty Quantification via Risk Decompositions for Strictly Proper Scoring Rules [7.0549244915538765]
Uncertainty quantification in predictive modeling often relies on ad hoc methods.
This paper introduces a theoretical approach to understanding uncertainty through statistical risks.
We show how to split pointwise risk into Bayes risk and excess risk.
arXiv Detail & Related papers (2024-02-16T14:40:22Z)
- Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling [69.83976050879318]
In large language models (LLMs), identifying sources of uncertainty is an important step toward improving reliability, trustworthiness, and interpretability.
In this paper, we introduce an uncertainty decomposition framework for LLMs, called input clarification ensembling.
Our approach generates a set of clarifications for the input, feeds them into an LLM, and ensembles the corresponding predictions.
arXiv Detail & Related papers (2023-11-15T05:58:35Z)
- Distribution-free risk assessment of regression-based machine learning algorithms [6.507711025292814]
We focus on regression algorithms and the risk-assessment task of computing the probability of the true label lying inside an interval defined around the model's prediction.
We solve the risk-assessment problem using the conformal prediction approach, which provides prediction intervals that are guaranteed to contain the true label with a given probability.
arXiv Detail & Related papers (2023-10-05T13:57:24Z)
- Mind the Uncertainty: Risk-Aware and Actively Exploring Model-Based Reinforcement Learning [26.497229327357935]
We introduce a simple but effective method for managing risk in model-based reinforcement learning with trajectory sampling.
Experiments indicate that the separation of uncertainties is essential to performing well with data-driven approaches in uncertain and safety-critical control environments.
arXiv Detail & Related papers (2023-09-11T16:10:58Z)
- A Generalized Unbiased Risk Estimator for Learning with Augmented Classes [70.20752731393938]
Given unlabeled data, an unbiased risk estimator (URE) can be derived, which can be minimized for learning with augmented classes (LAC) with theoretical guarantees.
We propose a generalized URE that can be equipped with arbitrary loss functions while maintaining the theoretical guarantees.
arXiv Detail & Related papers (2023-06-12T06:52:04Z)
- Safe Deployment for Counterfactual Learning to Rank with Exposure-Based Risk Minimization [63.93275508300137]
We introduce a novel risk-aware Counterfactual Learning To Rank method with theoretical guarantees for safe deployment.
Our experimental results demonstrate the efficacy of our proposed method, which is effective at avoiding initial periods of bad performance when little data is available.
arXiv Detail & Related papers (2023-04-26T15:54:23Z)
- Risk Sensitive Model-Based Reinforcement Learning using Uncertainty Guided Planning [0.0]
This paper promotes risk sensitivity in a model-based reinforcement learning algorithm.
We propose uncertainty guided cross-entropy method planning, which penalises action sequences that result in high variance state predictions.
Experiments demonstrate the agent's ability to identify uncertain regions of the state space during planning and to take actions that keep it within high-confidence areas.
arXiv Detail & Related papers (2021-11-09T07:28:00Z)
- DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
Epistemic uncertainty is the part of out-of-sample prediction error that is due to the learner's lack of knowledge.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty.
arXiv Detail & Related papers (2021-02-16T23:50:35Z)
- Learning Bounds for Risk-sensitive Learning [86.50262971918276]
In risk-sensitive learning, one aims to find a hypothesis that minimizes a risk-averse (or risk-seeking) measure of loss.
We study the generalization properties of risk-sensitive learning schemes whose optimand is described via optimized certainty equivalents.
arXiv Detail & Related papers (2020-06-15T05:25:02Z)
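The optimized certainty equivalents named in the last entry include, as a special case, the conditional value-at-risk (CVaR). A minimal numerical sketch of its Rockafellar–Uryasev form, with synthetic exponential losses standing in for a real model's loss distribution (all sample sizes and distributions here are illustrative choices):

```python
# CVaR_alpha(Z) = inf_lambda { lambda + E[(Z - lambda)_+] / (1 - alpha) }
# (Rockafellar-Uryasev form). A risk-averse learner minimizes this
# tail-focused measure instead of the plain expected loss E[Z].
import numpy as np

rng = np.random.default_rng(1)
losses = rng.exponential(scale=1.0, size=100_000)  # per-sample losses Z

def cvar(z, alpha):
    # Empirical CVaR: the minimizing lambda is the alpha-quantile (VaR).
    lam = np.quantile(z, alpha)
    return lam + np.maximum(z - lam, 0.0).mean() / (1.0 - alpha)

mean_risk = losses.mean()       # risk-neutral objective
tail_risk = cvar(losses, 0.95)  # risk-averse objective: average of the worst 5%
```

Since CVaR averages only the worst tail of the loss distribution, `tail_risk` always dominates `mean_risk`; minimizing it trades average performance for protection against rare, large losses.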
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.