Identifying Causal-Effect Inference Failure with Uncertainty-Aware
Models
- URL: http://arxiv.org/abs/2007.00163v2
- Date: Thu, 22 Oct 2020 20:52:39 GMT
- Title: Identifying Causal-Effect Inference Failure with Uncertainty-Aware
Models
- Authors: Andrew Jesson, Sören Mindermann, Uri Shalit and Yarin Gal
- Abstract summary: We introduce a practical approach for integrating uncertainty estimation into a class of state-of-the-art neural network methods.
We show that our methods enable us to deal gracefully with situations of "no-overlap", common in high-dimensional data.
We show that correctly modeling uncertainty can keep us from giving overconfident and potentially harmful recommendations.
- Score: 41.53326337725239
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recommending the best course of action for an individual is a major
application of individual-level causal effect estimation. This application is
often needed in safety-critical domains such as healthcare, where estimating
and communicating uncertainty to decision-makers is crucial. We introduce a
practical approach for integrating uncertainty estimation into a class of
state-of-the-art neural network methods used for individual-level causal
estimates. We show that our methods enable us to deal gracefully with
situations of "no-overlap", common in high-dimensional data, where standard
applications of causal effect approaches fail. Further, our methods allow us to
handle covariate shift, where the test distribution differs from the training
distribution, common when systems are deployed in practice. We show that when such a
covariate shift occurs, correctly modeling uncertainty can keep us from giving
overconfident and potentially harmful recommendations. We demonstrate our
methodology with a range of state-of-the-art models. Under both covariate shift
and lack of overlap, our uncertainty-equipped methods can alert
decision-makers when predictions are not to be trusted while outperforming their
uncertainty-oblivious counterparts.
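The core idea of the abstract can be illustrated with a minimal sketch: fit an ensemble of per-arm outcome models, use disagreement across the ensemble as an epistemic-uncertainty signal on the individual treatment effect, and defer recommendations where that signal is large. This is a generic bootstrap/T-learner illustration, not the paper's exact method; all names, the synthetic data, and the deferral threshold are assumptions.

```python
# Illustrative sketch (NOT the paper's method): flag untrustworthy
# individual-treatment-effect recommendations via bootstrap-ensemble
# disagreement. Synthetic data violates overlap for x < 0, where the
# ensemble should disagree more and trigger deferral.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: treatment assignment depends on x, so treated units
# with x << 0 are essentially unobserved (no overlap there).
n = 400
x = rng.normal(size=(n, 1))
t = (x[:, 0] + rng.normal(scale=0.3, size=n) > 0).astype(int)
y = 2.0 * x[:, 0] * t + x[:, 0] + rng.normal(scale=0.1, size=n)

def fit_linear(X, y):
    """Least-squares fit with an intercept column."""
    A = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def predict(w, X):
    return np.hstack([X, np.ones((len(X), 1))]) @ w

# Bootstrap ensemble of per-arm outcome models (T-learner style).
B = 30
x_test = np.linspace(-3, 3, 7).reshape(-1, 1)
cate_draws = []
for _ in range(B):
    idx = rng.integers(0, n, size=n)
    xb, tb, yb = x[idx], t[idx], y[idx]
    w1 = fit_linear(xb[tb == 1], yb[tb == 1])   # treated-arm model
    w0 = fit_linear(xb[tb == 0], yb[tb == 0])   # control-arm model
    cate_draws.append(predict(w1, x_test) - predict(w0, x_test))
cate_draws = np.array(cate_draws)               # shape (B, n_test)

cate_mean = cate_draws.mean(axis=0)
cate_std = cate_draws.std(axis=0)               # epistemic spread proxy

# Recommend only where the ensemble agrees; otherwise defer to a human.
threshold = 2 * np.median(cate_std)             # illustrative cutoff
for xv, m, s in zip(x_test[:, 0], cate_mean, cate_std):
    verdict = "defer" if s > threshold else "recommend"
    print(f"x={xv:+.1f}  CATE={m:+.2f} (std {s:.2f})  -> {verdict}")
```

Points in regions without overlap force the per-arm models to extrapolate, so the ensemble spread grows there; gating recommendations on that spread is the "alert decision-makers" behavior the abstract describes, in toy form.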
Related papers
- One step closer to unbiased aleatoric uncertainty estimation [71.55174353766289]
We propose a new estimation method by actively de-noising the observed data.
By conducting a broad range of experiments, we demonstrate that our proposed approach provides a much closer approximation to the actual data uncertainty than the standard method.
arXiv Detail & Related papers (2023-12-16T14:59:11Z)
- Benchmarking common uncertainty estimation methods with histopathological
images under domain shift and label noise [62.997667081978825]
In high-risk environments, deep learning models need to be able to judge their uncertainty and reject inputs when there is a significant chance of misclassification.
We conduct a rigorous evaluation of the most commonly used uncertainty and robustness methods for the classification of Whole Slide Images.
We observe that ensembles of methods generally lead to better uncertainty estimates as well as an increased robustness towards domain shifts and label noise.
arXiv Detail & Related papers (2023-01-03T11:34:36Z)
- The Implicit Delta Method [61.36121543728134]
In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss of uncertainty.
We show that the change in the evaluation due to regularization is consistent for the variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference.
arXiv Detail & Related papers (2022-11-11T19:34:17Z)
- Adversarial Attack for Uncertainty Estimation: Identifying Critical
Regions in Neural Networks [0.0]
We propose a novel method to capture data points near the decision boundary of a neural network, which are often attributed to a specific type of uncertainty.
Uncertainty estimates are derived from the input perturbations, unlike previous studies that provide perturbations on the model's parameters.
We show that the proposed method significantly outperforms other methods and captures model uncertainty with less risk.
arXiv Detail & Related papers (2021-07-15T21:30:26Z)
- Accounting for Model Uncertainty in Algorithmic Discrimination [16.654676310264705]
We argue that fairness approaches should instead focus only on equalizing errors arising from model uncertainty.
We draw a connection between predictive multiplicity and model uncertainty and argue that the techniques from predictive multiplicity could be used to identify errors made due to model uncertainty.
arXiv Detail & Related papers (2021-05-10T10:34:12Z)
- DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
Epistemic uncertainty is part of out-of-sample prediction error due to the lack of knowledge of the learner.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty.
arXiv Detail & Related papers (2021-02-16T23:50:35Z)
- Accurate and Robust Feature Importance Estimation under Distribution
Shifts [49.58991359544005]
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z)
- MissDeepCausal: Causal Inference from Incomplete Data Using Deep Latent
Variable Models [14.173184309520453]
State-of-the-art methods for causal inference do not account for missing values.
Missing data require an adapted unconfoundedness hypothesis.
The method considers latent confounders whose distribution is learned through variational autoencoders adapted to missing values.
arXiv Detail & Related papers (2020-02-25T12:58:07Z)
- Uncertainty-Based Out-of-Distribution Classification in Deep
Reinforcement Learning [17.10036674236381]
Wrong predictions for out-of-distribution data can cause safety critical situations in machine learning systems.
We propose a framework for uncertainty-based OOD classification: UBOOD.
We show that UBOOD produces reliable classification results when combined with ensemble-based estimators.
arXiv Detail & Related papers (2019-12-31T09:52:49Z)
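Several of the papers above (UBOOD, the histopathology benchmark) rely on the same ensemble-disagreement idea: member predictions agree in-distribution and diverge off-distribution, so their spread serves as an out-of-distribution score. A minimal sketch under assumed names and data, not any paper's implementation:

```python
# Illustrative sketch: ensemble disagreement as an epistemic-uncertainty
# OOD score. Models agree on the training range x in [-1, 1] and diverge
# far outside it. All names and thresholds are hypothetical.
import numpy as np

rng = np.random.default_rng(1)

# In-distribution training data on x in [-1, 1].
x_train = rng.uniform(-1, 1, size=(200, 1))
y_train = np.sin(3 * x_train[:, 0]) + rng.normal(scale=0.05, size=200)

def features(x):
    # Polynomial features: small weight differences between ensemble
    # members get amplified far from the training range.
    return np.hstack([x ** k for k in range(1, 6)] + [np.ones((len(x), 1))])

# Bootstrap ensemble of ridge regressors.
ensemble = []
for _ in range(10):
    idx = rng.integers(0, len(x_train), size=len(x_train))
    F = features(x_train[idx])
    w = np.linalg.solve(F.T @ F + 1e-3 * np.eye(F.shape[1]),
                        F.T @ y_train[idx])
    ensemble.append(w)

def ood_score(x):
    """Std of member predictions: a proxy for epistemic uncertainty."""
    preds = np.array([features(x) @ w for w in ensemble])
    return preds.std(axis=0)

in_dist = ood_score(np.array([[0.2]]))[0]   # inside the training range
far_ood = ood_score(np.array([[3.0]]))[0]   # far outside it
print(f"in-distribution score: {in_dist:.4f}, OOD score: {far_ood:.4f}")
```

Thresholding such a score is what lets a deployed system reject inputs it is likely to misclassify, the safety behavior these abstracts argue for.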
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.