To Predict or to Reject: Causal Effect Estimation with Uncertainty on
Networked Data
- URL: http://arxiv.org/abs/2309.08165v1
- Date: Fri, 15 Sep 2023 05:25:43 GMT
- Title: To Predict or to Reject: Causal Effect Estimation with Uncertainty on
Networked Data
- Authors: Hechuan Wen, Tong Chen, Li Kheng Chai, Shazia Sadiq, Kai Zheng,
Hongzhi Yin
- Abstract summary: GraphDKL is the first framework to tackle the violation of the positivity assumption when performing causal effect estimation with graphs.
With extensive experiments, we demonstrate the superiority of our proposed method in uncertainty-aware causal effect estimation on networked data.
- Score: 36.31936265985164
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Due to the imbalanced nature of networked observational data, the causal
effect predictions for some individuals can severely violate the
positivity/overlap assumption, rendering the estimations unreliable. Nevertheless,
this potential risk of individual-level treatment effect estimation on
networked data has been largely under-explored. To create a more trustworthy
causal effect estimator, we propose the uncertainty-aware graph deep kernel
learning (GraphDKL) framework with a Lipschitz constraint, which models the
prediction uncertainty with a Gaussian process and identifies unreliable
estimations. To the best of our knowledge, GraphDKL is the first framework to
tackle the violation of the positivity assumption when performing causal effect
estimation with graphs. With extensive experiments, we demonstrate the
superiority of the proposed method in uncertainty-aware causal effect
estimation on networked data.
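To make the predict-or-reject mechanism concrete, here is a minimal sketch of the idea under stated assumptions: a plain Gaussian process on synthetic embeddings stands in for the paper's Lipschitz-constrained graph deep kernel, and the rejection cutoff is an arbitrary illustrative choice. This is not the authors' GraphDKL implementation.

```python
# Minimal sketch of "predict or reject": fit a Gaussian process on treated
# units, then reject effect estimates whose predictive uncertainty is high.
# A plain GP on synthetic features stands in for the paper's graph deep
# kernel with Lipschitz constraint; data and threshold are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic treated-group "node embeddings" and outcomes.
X_treated = rng.normal(loc=1.0, size=(100, 5))
y_treated = X_treated.sum(axis=1) + rng.normal(scale=0.1, size=100)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_treated, y_treated)

# Query units: half overlap the treated support, half sit far outside it,
# mimicking control units for which positivity is violated.
X_query = np.vstack([rng.normal(loc=1.0, size=(50, 5)),
                     rng.normal(loc=-3.0, size=(50, 5))])
mu, std = gp.predict(X_query, return_std=True)  # mu = effect prediction

# Reject the most uncertain counterfactual predictions; the 80th-percentile
# cutoff is an arbitrary illustrative choice, not the paper's criterion.
keep = std <= np.quantile(std, 0.80)
print(f"kept {keep.sum()}/{len(keep)} estimates | "
      f"mean std kept={std[keep].mean():.3f}, rejected={std[~keep].mean():.3f}")
```

Units whose embeddings lie far from the treated support receive a large predictive standard deviation, so the rejection rule discards exactly the estimates for which overlap is poor.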
Related papers
- Uncertainty for Active Learning on Graphs [70.44714133412592]
Uncertainty Sampling is an Active Learning strategy that aims to improve the data efficiency of machine learning models.
We benchmark Uncertainty Sampling beyond predictive uncertainty and highlight a significant performance gap to other Active Learning strategies.
We develop ground-truth Bayesian uncertainty estimates in terms of the data generating process and prove their effectiveness in guiding Uncertainty Sampling toward optimal queries.
(arXiv, 2024-05-02)
- On the Impact of Uncertainty and Calibration on Likelihood-Ratio Membership Inference Attacks [42.18575921329484]
We analyze the performance of the state-of-the-art likelihood ratio attack (LiRA) within an information-theoretical framework.
We derive bounds on the advantage of an MIA adversary with the aim of offering insights into the impact of uncertainty and calibration on the effectiveness of MIAs.
(arXiv, 2024-02-16)
- Task-Driven Causal Feature Distillation: Towards Trustworthy Risk Prediction [19.475933293993076]
We propose a Task-Driven Causal Feature Distillation model (TDCFD) to transform original feature values into causal feature attributions.
After the causal feature distillation, a deep neural network is applied to produce trustworthy prediction results.
We evaluate the performance of our TDCFD method on several synthetic and real datasets.
(arXiv, 2023-12-20)
- Adaptive Uncertainty Estimation via High-Dimensional Testing on Latent Representations [28.875819909902244]
Uncertainty estimation aims to evaluate the confidence of a trained deep neural network.
Existing uncertainty estimation approaches rely on low-dimensional distributional assumptions.
We propose a new framework using data-adaptive high-dimensional hypothesis testing for uncertainty estimation.
(arXiv, 2023-10-25)
- On Calibrated Model Uncertainty in Deep Learning [0.0]
We extend the approximate inference for the loss-calibrated Bayesian framework to dropweights-based Bayesian neural networks.
We show that decisions informed by loss-calibrated uncertainty can improve diagnostic performance to a greater extent than straightforward alternatives.
(arXiv, 2022-06-15)
- Uncertainty-Aware Training for Cardiac Resynchronisation Therapy Response Prediction [3.090173647095682]
Quantifying the uncertainty of a prediction is one way to provide interpretability and promote trust.
We quantify the data (aleatoric) and model (epistemic) uncertainty of a DL model for Cardiac Resynchronisation Therapy response prediction from cardiac magnetic resonance images.
We perform a preliminary investigation of an uncertainty-aware loss function that can be used to retrain an existing DL image-based classification model to encourage confidence in correct predictions.
(arXiv, 2021-09-22)
- DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
Epistemic uncertainty is the part of out-of-sample prediction error that is due to the learner's lack of knowledge.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty; a toy sketch of this decomposition appears after this list.
(arXiv, 2021-02-16)
- The Aleatoric Uncertainty Estimation Using a Separate Formulation with Virtual Residuals [51.71066839337174]
Existing methods can quantify the error in the target estimation, but they tend to underestimate it.
We propose a new separable formulation for the estimation of a signal and of its uncertainty, avoiding the effect of overfitting.
We demonstrate that the proposed method outperforms a state-of-the-art technique for signal and uncertainty estimation.
(arXiv, 2020-11-03)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
(arXiv, 2020-06-26)
- Learning to Predict Error for MRI Reconstruction [67.76632988696943]
We demonstrate that the predictive uncertainty estimated by current methods does not correlate strongly with the prediction error.
We propose a novel method that estimates the target labels and magnitude of the prediction error in two steps.
(arXiv, 2020-02-13)
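As referenced in the DEUP entry above, the following toy sketch illustrates the decomposition that entry describes: fit a main model, train a second model to predict its held-out squared error (total uncertainty), and subtract an aleatoric estimate. The models, the synthetic data, and the shortcut of treating the noise variance as known are all illustrative assumptions, not the paper's actual procedure.

```python
# Toy sketch of a DEUP-style decomposition: epistemic uncertainty is
# approximated as predicted generalization error minus an aleatoric estimate.
# Models, data, and the "known noise variance" shortcut are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-3.0, 3.0, size=(500, 1))
noise_std = 0.1 + 0.2 * (X[:, 0] > 0)              # heteroscedastic noise
y = np.sin(X[:, 0]) + rng.normal(0.0, noise_std)

main = RandomForestRegressor(random_state=0).fit(X[:250], y[:250])

# Error predictor: trained on the main model's held-out squared errors,
# so it approximates the total out-of-sample error at any input.
sq_err = (y[250:] - main.predict(X[250:])) ** 2
err_model = RandomForestRegressor(random_state=0).fit(X[250:], sq_err)

X_new = np.array([[0.5], [2.9], [-2.9]])
total = err_model.predict(X_new)                   # predicted total error
aleatoric = (0.1 + 0.2 * (X_new[:, 0] > 0)) ** 2   # assumed-known noise variance
epistemic = np.clip(total - aleatoric, 0.0, None)  # floor at zero
print(epistemic)
```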
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.