Uncertainty-Aware Training for Cardiac Resynchronisation Therapy
Response Prediction
- URL: http://arxiv.org/abs/2109.10641v1
- Date: Wed, 22 Sep 2021 10:37:50 GMT
- Title: Uncertainty-Aware Training for Cardiac Resynchronisation Therapy
Response Prediction
- Authors: Tareen Dawood, Chen Chen, Robin Andlauer, Baldeep S. Sidhu, Bram
Ruijsink, Justin Gould, Bradley Porter, Mark Elliott, Vishal Mehta, C. Aldo
Rinaldi, Esther Puyol-Antón, Reza Razavi, Andrew P. King
- Abstract summary: Quantifying uncertainty of a prediction is one way to provide such interpretability and promote trust.
We quantify the data (aleatoric) and model (epistemic) uncertainty of a DL model for Cardiac Resynchronisation Therapy response prediction from cardiac magnetic resonance images.
We perform a preliminary investigation of an uncertainty-aware loss function that can be used to retrain an existing DL image-based classification model to encourage confidence in correct predictions.
- Score: 3.090173647095682
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Evaluation of predictive deep learning (DL) models beyond conventional
performance metrics has become increasingly important for applications in
sensitive environments like healthcare. Such models might have the capability
to encode and analyse large sets of data but they often lack comprehensive
interpretability methods, preventing clinical trust in predictive outcomes.
Quantifying uncertainty of a prediction is one way to provide such
interpretability and promote trust. However, relatively little attention has
been paid to how to include such requirements into the training of the model.
In this paper we: (i) quantify the data (aleatoric) and model (epistemic)
uncertainty of a DL model for Cardiac Resynchronisation Therapy response
prediction from cardiac magnetic resonance images, and (ii) propose and perform
a preliminary investigation of an uncertainty-aware loss function that can be
used to retrain an existing DL image-based classification model to encourage
confidence in correct predictions and reduce confidence in incorrect
predictions. Our initial results are promising, showing a significant increase
in the (epistemic) confidence of true positive predictions, with some evidence
of a reduction in false negative confidence.
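
As a concrete illustration of the two ingredients described above, the sketch below pairs Monte Carlo dropout (one common estimator of epistemic uncertainty) with a hypothetical confidence-weighting term. This is a minimal sketch under assumed forms: `ConfidenceWeightedLoss` and its `lam` weight are illustrative, not the paper's exact loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mc_dropout_predict(model, x, n_samples=20):
    """Epistemic uncertainty via Monte Carlo dropout: keep dropout active
    at inference time and average softmax outputs over stochastic passes.
    (Note: model.train() also affects batch norm; a real implementation
    would enable only the dropout layers.)"""
    model.train()  # leave dropout layers stochastic
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(x), dim=-1)
                             for _ in range(n_samples)])
    mean_prob = probs.mean(dim=0)               # predictive probability
    epistemic = probs.var(dim=0).sum(dim=-1)    # disagreement across passes
    return mean_prob, epistemic

class ConfidenceWeightedLoss(nn.Module):
    """Illustrative uncertainty-aware objective (assumed form, not the
    authors' formulation): cross-entropy plus a term that rewards
    confidence on correct predictions and penalises it on incorrect ones."""
    def __init__(self, lam=0.5):
        super().__init__()
        self.lam = lam

    def forward(self, logits, targets):
        ce = F.cross_entropy(logits, targets)
        probs = F.softmax(logits, dim=-1)
        conf, preds = probs.max(dim=-1)
        correct = (preds == targets).float()
        # push confidence up when correct, down when incorrect
        penalty = correct * (1.0 - conf) + (1.0 - correct) * conf
        return ce + self.lam * penalty.mean()
```

A term of this kind changes only the objective, so an existing classifier can be retrained in place, which matches the retraining setup the abstract describes.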
Related papers
- SepsisLab: Early Sepsis Prediction with Uncertainty Quantification and Active Sensing [67.8991481023825]
Sepsis is the leading cause of in-hospital mortality in the USA.
Existing predictive models are usually trained on high-quality data with little missing information.
For potential high-risk patients whose predictions have low confidence due to limited observations, we propose a robust active sensing algorithm.
arXiv Detail & Related papers (2024-07-24T04:47:36Z) - Deep Evidential Learning for Radiotherapy Dose Prediction [0.0]
We present a novel application of an uncertainty-quantification framework called Deep Evidential Learning in the domain of radiotherapy dose prediction.
We found that this model can be effectively harnessed to yield uncertainty estimates that correlate with prediction errors once network training is complete.
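
For orientation, here is a minimal sketch of one common parameterisation of deep evidential regression (Amini et al.-style Normal-Inverse-Gamma outputs); whether this paper uses exactly this form is an assumption, and the layer names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialHead(nn.Module):
    """Sketch of an evidential regression head: predict the four
    Normal-Inverse-Gamma parameters (gamma, nu, alpha, beta), from which
    aleatoric and epistemic uncertainty follow in closed form."""
    def __init__(self, in_features):
        super().__init__()
        self.fc = nn.Linear(in_features, 4)

    def forward(self, h):
        gamma, log_nu, log_alpha, log_beta = self.fc(h).chunk(4, dim=-1)
        nu = F.softplus(log_nu) + 1e-6           # nu > 0
        alpha = F.softplus(log_alpha) + 1.0 + 1e-6  # alpha > 1
        beta = F.softplus(log_beta)              # beta > 0
        aleatoric = beta / (alpha - 1.0)         # E[sigma^2], data noise
        epistemic = beta / (nu * (alpha - 1.0))  # Var[mu], model uncertainty
        return gamma, aleatoric, epistemic
```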
arXiv Detail & Related papers (2024-04-26T02:43:45Z) - HypUC: Hyperfine Uncertainty Calibration with Gradient-boosted
Corrections for Reliable Regression on Imbalanced Electrocardiograms [3.482894964998886]
We propose HypUC, a framework for imbalanced probabilistic regression in medical time series.
HypUC is evaluated on a large, diverse, real-world dataset of ECGs collected from millions of patients.
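
In the spirit of the title, not HypUC's actual recipe, a post-hoc gradient-boosted correction might look like the sketch below; the function names, the residual target, and the choice of regressor are all assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def fit_gb_correction(pred_means, pred_stds, true_values):
    """Hypothetical post-hoc correction: learn the residuals of a
    probabilistic model's mean from its own outputs, then add the
    predicted residual back at test time."""
    feats = np.column_stack([pred_means, pred_stds])
    corrector = GradientBoostingRegressor().fit(feats, true_values - pred_means)

    def corrected(mean, std):
        return mean + corrector.predict(np.column_stack([mean, std]))

    return corrected
```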
arXiv Detail & Related papers (2023-11-23T06:17:31Z) - Towards Reliable Medical Image Segmentation by utilizing Evidential Calibrated Uncertainty [52.03490691733464]
We introduce DEviS, an easily implementable foundational model that seamlessly integrates into various medical image segmentation networks.
By leveraging subjective logic theory, we explicitly model probability and uncertainty for the problem of medical image segmentation.
DEviS incorporates an uncertainty-aware filtering module, which uses the metric of uncertainty-calibrated error to filter reliable data.
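
Subjective logic maps per-class evidence to Dirichlet parameters, which yield belief masses and a scalar vacuity-style uncertainty. A minimal sketch of that mapping follows (the filtering module itself is not reproduced):

```python
import torch
import torch.nn.functional as F

def subjective_logic_outputs(logits):
    """Map per-class logits to subjective-logic quantities: evidence
    e_k >= 0, Dirichlet parameters alpha_k = e_k + 1, belief masses
    b_k = e_k / S and scalar uncertainty u = K / S, with S = sum(alpha)."""
    evidence = F.softplus(logits)             # non-negative evidence
    alpha = evidence + 1.0                    # Dirichlet concentration
    S = alpha.sum(dim=-1, keepdim=True)       # Dirichlet strength
    belief = evidence / S                     # per-class belief mass
    uncertainty = logits.shape[-1] / S        # vacuity: high when evidence is low
    prob = alpha / S                          # expected class probabilities
    return belief, uncertainty, prob
```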
arXiv Detail & Related papers (2023-01-01T05:02:46Z) - Improving Trustworthiness of AI Disease Severity Rating in Medical
Imaging with Ordinal Conformal Prediction Sets [0.7734726150561088]
A lack of statistically rigorous uncertainty quantification is a significant factor undermining trust in AI results.
Recent developments in distribution-free uncertainty quantification present practical solutions for these issues.
We demonstrate a technique for forming ordinal prediction sets that are guaranteed to contain the correct stenosis severity.
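
A minimal split-conformal sketch of contiguous ordinal sets, assuming softmax probabilities over ordered severity grades; the greedy growth rule and helper names are illustrative, not the paper's exact procedure.

```python
import numpy as np

def greedy_mass_to_include(prob, label):
    """Mass accumulated while growing a contiguous set around the mode,
    stopping once `label` falls inside the set."""
    lo = hi = int(np.argmax(prob))
    mass = prob[lo]
    while not (lo <= label <= hi):
        left = prob[lo - 1] if lo > 0 else -1.0
        right = prob[hi + 1] if hi < len(prob) - 1 else -1.0
        if right >= left:
            hi += 1; mass += prob[hi]
        else:
            lo -= 1; mass += prob[lo]
    return mass

def ordinal_conformal_set(cal_probs, cal_labels, test_prob, alpha=0.1):
    """Split-conformal sketch for ordinal grades: calibrate the cumulative
    mass needed to cover the true label, then grow the test-time set to
    that threshold. Coverage ~ 1 - alpha under exchangeability."""
    scores = np.array([greedy_mass_to_include(p, y)
                       for p, y in zip(cal_probs, cal_labels)])
    n = len(scores)
    qhat = np.quantile(scores, min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0))
    lo = hi = int(np.argmax(test_prob))
    mass = test_prob[lo]
    while mass < qhat and (lo > 0 or hi < len(test_prob) - 1):
        left = test_prob[lo - 1] if lo > 0 else -1.0
        right = test_prob[hi + 1] if hi < len(test_prob) - 1 else -1.0
        if right >= left:
            hi += 1; mass += test_prob[hi]
        else:
            lo -= 1; mass += test_prob[lo]
    return list(range(lo, hi + 1))
```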
arXiv Detail & Related papers (2022-07-05T18:01:20Z) - Loss Estimators Improve Model Generalization [36.520569284970456]
We propose to train a loss estimator alongside the predictive model, using a contrastive training objective, to directly estimate the prediction uncertainties.
We show the impact of loss estimators on model generalization, in terms of both the model's fidelity on in-distribution data and its ability to detect out-of-distribution samples or new classes unseen during training.
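
A minimal sketch of the auxiliary-head idea, assuming the main model's penultimate features are available; the paper's contrastive objective is not reproduced, and a plain regression target is used instead.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LossEstimatorHead(nn.Module):
    """Sketch of an auxiliary loss estimator: a small head that predicts
    the main model's per-sample loss from its penultimate features; the
    predicted loss then serves as an uncertainty proxy."""
    def __init__(self, in_features):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_features, 64), nn.ReLU(),
                                 nn.Linear(64, 1))

    def forward(self, features):
        return self.net(features).squeeze(-1)

def auxiliary_loss(head, features, logits, targets):
    """Regress the head onto the detached true per-sample loss, so the
    head learns to anticipate where the main model errs."""
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    return F.mse_loss(head(features.detach()), per_sample.detach())
```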
arXiv Detail & Related papers (2021-03-05T16:35:10Z) - DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
Epistemic uncertainty is the part of out-of-sample prediction error that is due to the learner's lack of knowledge.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty.
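
A minimal sketch of the subtraction, assuming held-out (feature, observed loss) pairs and an external aleatoric estimate; the choice of GradientBoostingRegressor is arbitrary, not DEUP's prescribed learner.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def fit_deup_error_predictor(features, observed_losses):
    """DEUP-style sketch: regress observed out-of-sample losses on input
    features, then report epistemic uncertainty as predicted total error
    minus an aleatoric estimate, floored at zero."""
    error_predictor = GradientBoostingRegressor().fit(features, observed_losses)

    def epistemic(x, aleatoric_estimate):
        total_error = error_predictor.predict(x)  # predicted generalization error
        return np.maximum(total_error - aleatoric_estimate, 0.0)

    return epistemic
```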
arXiv Detail & Related papers (2021-02-16T23:50:35Z) - Trust but Verify: Assigning Prediction Credibility by Counterfactual
Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z) - Discriminative Jackknife: Quantifying Uncertainty in Deep Learning via
Higher-Order Influence Functions [121.10450359856242]
We develop a frequentist procedure that utilizes influence functions of a model's loss functional to construct a jackknife (or leave-one-out) estimator of predictive confidence intervals.
The DJ satisfies both desiderata (valid coverage and discrimination between certain and uncertain predictions), is applicable to a wide range of deep learning models, is easy to implement, and can be applied in a post-hoc fashion without interfering with model training or compromising its accuracy.
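
As a reference point, the naive leave-one-out construction that the DJ approximates without retraining is sketched below; `train_fn` is a hypothetical helper that fits a model and returns a prediction function.

```python
import numpy as np

def naive_jackknife_interval(train_fn, X, y, x_test, alpha=0.1):
    """Naive leave-one-out jackknife prediction interval; the DJ replaces
    the n refits below with influence-function approximations, post hoc."""
    n = len(X)
    residuals = []
    for i in range(n):
        mask = np.arange(n) != i
        f_minus_i = train_fn(X[mask], y[mask])     # refit without point i
        residuals.append(abs(y[i] - f_minus_i(X[i])))
    q = np.quantile(residuals, 1 - alpha)          # residual quantile
    center = train_fn(X, y)(x_test)                # full-model prediction
    return center - q, center + q
```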
arXiv Detail & Related papers (2020-06-29T13:36:52Z) - Learning to Predict Error for MRI Reconstruction [67.76632988696943]
We demonstrate that predictive uncertainty estimated by current methods does not correlate strongly with prediction error.
We propose a novel method that estimates the target labels and magnitude of the prediction error in two steps.
arXiv Detail & Related papers (2020-02-13T15:55:32Z) - Bayesian Modelling in Practice: Using Uncertainty to Improve Trustworthiness in Medical Applications [2.446672595462589]
The Intensive Care Unit (ICU) is a hospital department where machine learning has the potential to provide valuable assistance in clinical decision making.
In practice, uncertain predictions should be presented to doctors with extra care in order to prevent potentially catastrophic treatment decisions.
We show how Bayesian modelling and the predictive uncertainty that it provides can be used to mitigate risk of misguided prediction.
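
A minimal sketch of such an uncertainty-gated triage rule, assuming mean class probabilities from a Bayesian model (e.g. averaged posterior samples); the entropy threshold is an arbitrary placeholder.

```python
import numpy as np

def triage_by_entropy(mean_probs, threshold=0.5):
    """Uncertainty-gated triage: compute predictive entropy from mean
    class probabilities and refer high-entropy cases to a clinician
    instead of acting on them automatically."""
    eps = 1e-12
    entropy = -np.sum(mean_probs * np.log(mean_probs + eps), axis=-1)
    refer = entropy > threshold   # True -> send to human review
    return refer, entropy
```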
arXiv Detail & Related papers (2019-06-20T13:51:07Z)