Can input reconstruction be used to directly estimate uncertainty of a
regression U-Net model? -- Application to proton therapy dose prediction for
head and neck cancer patients
- URL: http://arxiv.org/abs/2310.19686v1
- Date: Mon, 30 Oct 2023 16:04:34 GMT
- Title: Can input reconstruction be used to directly estimate uncertainty of a
regression U-Net model? -- Application to proton therapy dose prediction for
head and neck cancer patients
- Authors: Margerie Huet-Dastarac, Dan Nguyen, Steve Jiang, John Lee, Ana
Barragan Montero
- Abstract summary: We present an alternative direct uncertainty estimation method and apply it to a regression U-Net architecture.
For the proof-of-concept, our method is applied to proton therapy dose prediction in head and neck cancer patients.
- Score: 0.8343441027226364
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Estimating the uncertainty of deep learning models in a reliable and
efficient way remains an open problem, for which many different solutions have
been proposed in the literature. The most common methods are based on Bayesian
approximations, such as Monte Carlo dropout (MCDO) or deep ensembling (DE), but
they have a high inference time (i.e. they require multiple inference passes)
and may fail at out-of-distribution (OOD) detection (i.e. they assign similar
uncertainty to in-distribution (ID) and OOD data). In safety-critical
environments, such as medical applications, uncertainty estimation methods that
are accurate, fast, and able to detect OOD data are crucial, since wrong
predictions can jeopardize patient safety. In this study, we present an
alternative direct uncertainty estimation method and apply it to a regression
U-Net architecture. The method consists of adding a branch from the bottleneck
that reconstructs the input; the input reconstruction error can then be used as
a surrogate for the model uncertainty. As a proof of concept, our method is
applied to proton therapy dose prediction in head and neck cancer patients.
Accuracy, time gain, and OOD detection are analyzed for our method in this
application and compared with the popular MCDO and DE. The input reconstruction
method showed a higher Pearson correlation coefficient with the prediction
error (0.620) than DE and MCDO (between 0.447 and 0.612). Moreover, our method
allows easier identification of OOD data (Z-score of 34.05). It estimates the
uncertainty simultaneously with the regression task, and therefore requires
less time and fewer computational resources.
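The abstract describes the architecture only at a high level. The sketch below, in PyTorch, illustrates one plausible way to attach a reconstruction branch to a regression U-Net bottleneck and read the reconstruction error back as an uncertainty surrogate. All names, the 2D (rather than volumetric) layout, channel counts, depths, the choice of L1 losses, and the 0.5 reconstruction weight are illustrative assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, as in a standard U-Net stage."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class ReconstructionUNet(nn.Module):
    """Regression U-Net with an extra decoder branch from the bottleneck
    that reconstructs the input. The reconstruction error acts as an
    uncertainty surrogate; all hyper-parameters here are placeholders."""

    def __init__(self, in_ch=2, out_ch=1, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.bottleneck = conv_block(base * 2, base * 4)

        # Main regression branch (e.g. dose prediction), with skip connections.
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, out_ch, 1)

        # Reconstruction branch from the bottleneck (no skip connections, so
        # it has to rely on the bottleneck representation alone).
        self.rec_up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.rec_dec2 = conv_block(base * 2, base * 2)
        self.rec_up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.rec_dec1 = conv_block(base, base)
        self.rec_head = nn.Conv2d(base, in_ch, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(F.max_pool2d(e1, 2))
        b = self.bottleneck(F.max_pool2d(e2, 2))

        # Regression decoder.
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        pred = self.head(d1)

        # Reconstruction decoder.
        recon = self.rec_head(self.rec_dec1(self.rec_up1(
            self.rec_dec2(self.rec_up2(b)))))
        return pred, recon
```

Because the uncertainty surrogate is produced in the same forward pass as the regression output, no repeated sampling is needed at inference time, which is where the reported time gain over MCDO and DE comes from. A minimal training and inference sketch under the same assumptions (joint L1 losses, hypothetical tensors standing in for the CT/contour inputs and the dose target) could look like this:

```python
model = ReconstructionUNet(in_ch=2, out_ch=1)
inputs = torch.rand(4, 2, 64, 64)        # hypothetical CT/contour channels
dose_target = torch.rand(4, 1, 64, 64)   # hypothetical dose distributions

# Joint training: regression loss plus weighted reconstruction loss.
pred, recon = model(inputs)
loss = F.l1_loss(pred, dose_target) + 0.5 * F.l1_loss(recon, inputs)
loss.backward()

# Single-pass inference: prediction and per-sample uncertainty surrogate.
with torch.no_grad():
    pred, recon = model(inputs)
    uncertainty = (recon - inputs).abs().mean(dim=(1, 2, 3))

    # OOD check (illustrative): express the reconstruction error of an
    # unusual input as a Z-score w.r.t. the in-distribution errors.
    ood_input = torch.rand(1, 2, 64, 64) * 3.0
    _, ood_recon = model(ood_input)
    ood_error = (ood_recon - ood_input).abs().mean()
    z_score = (ood_error - uncertainty.mean()) / uncertainty.std()
```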
Related papers
- SepsisLab: Early Sepsis Prediction with Uncertainty Quantification and Active Sensing [67.8991481023825]
Sepsis is the leading cause of in-hospital mortality in the USA.
Existing predictive models are usually trained on high-quality data with little missing information.
For potential high-risk patients whose predictions have low confidence due to limited observations, we propose a robust active sensing algorithm.
arXiv Detail & Related papers (2024-07-24T04:47:36Z) - Achieving Well-Informed Decision-Making in Drug Discovery: A Comprehensive Calibration Study using Neural Network-Based Structure-Activity Models [4.619907534483781]
Computational models that predict drug-target interactions are valuable tools to accelerate the development of new therapeutic agents.
However, such models can be poorly calibrated, which results in unreliable uncertainty estimates.
We show that combining post hoc calibration methods with well-performing uncertainty quantification approaches can boost model accuracy and calibration.
arXiv Detail & Related papers (2024-07-19T10:29:00Z) - Inadequacy of common stochastic neural networks for reliable clinical
decision support [0.4262974002462632]
Widespread adoption of AI for medical decision making is still hindered by ethical and safety-related concerns.
Common deep learning approaches, however, tend towards overconfidence under data shift.
This study investigates their actual reliability in clinical applications.
arXiv Detail & Related papers (2024-01-24T18:49:30Z) - The Implicit Delta Method [61.36121543728134]
In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss of uncertainty.
We show that the change in the evaluation due to regularization is consistent for the variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference.
arXiv Detail & Related papers (2022-11-11T19:34:17Z) - Improving Out-of-Distribution Detection via Epistemic Uncertainty
Adversarial Training [29.4569172720654]
We develop a simple adversarial training scheme that incorporates an attack of the uncertainty predicted by the dropout ensemble.
We demonstrate that this method improves OOD detection performance on standard data (i.e., not adversarially crafted), and improves the standardized partial AUC from near-random guessing performance to $\geq 0.75$.
arXiv Detail & Related papers (2022-09-05T14:32:19Z) - Scalable Marginal Likelihood Estimation for Model Selection in Deep
Learning [78.83598532168256]
Marginal-likelihood-based model selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z) - Improving Deterministic Uncertainty Estimation in Deep Learning for
Classification and Regression [30.112634874443494]
We propose a new model that estimates uncertainty in a single forward pass.
Our approach combines a bi-Lipschitz feature extractor with an inducing point approximate Gaussian process, offering robust and principled uncertainty estimation.
arXiv Detail & Related papers (2021-02-22T23:29:12Z) - Increasing the efficiency of randomized trial estimates via linear
adjustment for a prognostic score [59.75318183140857]
Estimating causal effects from randomized experiments is central to clinical research.
Most methods for historical borrowing achieve reductions in variance by sacrificing strict type-I error rate control.
arXiv Detail & Related papers (2020-12-17T21:10:10Z) - UNITE: Uncertainty-based Health Risk Prediction Leveraging Multi-sourced
Data [81.00385374948125]
We present the UNcertaInTy-based hEalth risk prediction (UNITE) model.
UNITE provides accurate disease risk prediction and uncertainty estimation leveraging multi-sourced health data.
We evaluate UNITE on real-world disease risk prediction tasks: nonalcoholic fatty liver disease (NASH) and Alzheimer's disease (AD).
UNITE achieves up to 0.841 in F1 score for AD detection, up to 0.609 in PR-AUC for NASH detection, and outperforms various state-of-the-art baselines by up to 19% over the best baseline.
arXiv Detail & Related papers (2020-10-22T02:28:11Z) - Learning to Predict Error for MRI Reconstruction [67.76632988696943]
We demonstrate that the predictive uncertainty estimated by current methods does not correlate strongly with the prediction error.
We propose a novel method that estimates the target labels and magnitude of the prediction error in two steps.
arXiv Detail & Related papers (2020-02-13T15:55:32Z) - Uncertainty-Based Out-of-Distribution Classification in Deep
Reinforcement Learning [17.10036674236381]
Wrong predictions for out-of-distribution data can cause safety critical situations in machine learning systems.
We propose a framework for uncertainty-based OOD classification: UBOOD.
We show that UBOOD produces reliable classification results when combined with ensemble-based estimators.
arXiv Detail & Related papers (2019-12-31T09:52:49Z)