Improving Deep Learning Model Calibration for Cardiac Applications using Deterministic Uncertainty Networks and Uncertainty-aware Training
- URL: http://arxiv.org/abs/2405.06487v1
- Date: Fri, 10 May 2024 14:07:58 GMT
- Title: Improving Deep Learning Model Calibration for Cardiac Applications using Deterministic Uncertainty Networks and Uncertainty-aware Training
- Authors: Tareen Dawood, Bram Ruijsink, Reza Razavi, Andrew P. King, Esther Puyol-Antón
- Abstract summary: We evaluate the impact on accuracy and calibration of two types of approach that aim to improve deep learning (DL) classification model calibration.
Specifically, we test the performance of three DUMs and two uncertainty-aware training approaches as well as their combinations.
Our results indicate that both DUMs and uncertainty-aware training can improve both accuracy and calibration in both of our applications.
- Score: 2.0006125576503617
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Improving calibration performance in deep learning (DL) classification models is important when planning the use of DL in a decision-support setting. In such a scenario, a confident wrong prediction could lead to a lack of trust and/or harm in a high-risk application. We evaluate the impact on accuracy and calibration of two types of approach that aim to improve DL classification model calibration: deterministic uncertainty methods (DUM) and uncertainty-aware training. Specifically, we test the performance of three DUMs and two uncertainty-aware training approaches as well as their combinations. To evaluate their utility, we use two realistic clinical applications from the field of cardiac imaging: artefact detection from phase contrast cardiac magnetic resonance (CMR) and disease diagnosis from the public ACDC CMR dataset. Our results indicate that both DUMs and uncertainty-aware training can improve both accuracy and calibration in both of our applications, with DUMs generally offering the best improvements. We also investigate the combination of the two approaches, resulting in a novel deterministic uncertainty-aware training approach. This provides further improvements for some combinations of DUMs and uncertainty-aware training approaches.
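The abstract centers on calibration: a model is well calibrated when its predicted confidence matches its empirical accuracy. The standard summary metric is Expected Calibration Error (ECE). As a generic illustration (not code from the paper), a minimal ECE computation over equally spaced confidence bins looks like this:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected Calibration Error (ECE): the bin-weighted average gap
    between mean predicted confidence and empirical accuracy."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return float(ece)

# Example: three correct predictions with varying confidence.
print(expected_calibration_error([0.9, 0.8, 0.95], [1, 1, 1]))
```

A perfectly calibrated model (confidence equals accuracy in every bin) has zero ECE; a confidently wrong model, the failure mode the abstract warns about, has a large one.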
Related papers
- Achieving Well-Informed Decision-Making in Drug Discovery: A Comprehensive Calibration Study using Neural Network-Based Structure-Activity Models [4.619907534483781]
Computational models that predict drug-target interactions are valuable tools to accelerate the development of new therapeutic agents.
However, such models can be poorly calibrated, which results in unreliable uncertainty estimates.
We show that combining a post hoc calibration method with well-performing uncertainty quantification approaches can boost model accuracy and calibration.
arXiv Detail & Related papers (2024-07-19T10:29:00Z) - EDUE: Expert Disagreement-Guided One-Pass Uncertainty Estimation for Medical Image Segmentation [1.757276115858037]
This paper proposes an Expert Disagreement-Guided Uncertainty Estimation (EDUE) for medical image segmentation.
By leveraging variability in ground-truth annotations from multiple raters, we guide the model during training and incorporate random sampling-based strategies to enhance calibration confidence.
arXiv Detail & Related papers (2024-03-25T10:13:52Z) - Unified Uncertainty Estimation for Cognitive Diagnosis Models [70.46998436898205]
We propose a unified uncertainty estimation approach for a wide range of cognitive diagnosis models.
We decompose the uncertainty of diagnostic parameters into data aspect and model aspect.
Our method is effective and can provide useful insights into the uncertainty of cognitive diagnosis.
arXiv Detail & Related papers (2024-03-09T13:48:20Z) - Improving Multiple Sclerosis Lesion Segmentation Across Clinical Sites: A Federated Learning Approach with Noise-Resilient Training [75.40980802817349]
Deep learning models have shown promise for automatically segmenting MS lesions, but the scarcity of accurately annotated data hinders progress in this area.
We introduce a Decoupled Hard Label Correction (DHLC) strategy that considers the imbalanced distribution and fuzzy boundaries of MS lesions.
We also introduce a Centrally Enhanced Label Correction (CELC) strategy, which leverages the aggregated central model as a correction teacher for all sites.
arXiv Detail & Related papers (2023-08-31T00:36:10Z) - Uncertainty Aware Training to Improve Deep Learning Model Calibration for Classification of Cardiac MR Images [3.9402047771122812]
Quantifying uncertainty of predictions has been identified as one way to develop more trustworthy AI models.
We evaluate three novel uncertainty-aware training strategies comparing against two state-of-the-art approaches.
arXiv Detail & Related papers (2023-08-29T09:19:49Z) - Towards Reliable Medical Image Segmentation by utilizing Evidential Calibrated Uncertainty [52.03490691733464]
We introduce DEviS, an easily implementable foundational model that seamlessly integrates into various medical image segmentation networks.
By leveraging subjective logic theory, we explicitly model probability and uncertainty for the problem of medical image segmentation.
DEviS incorporates an uncertainty-aware filtering module, which utilizes the metric of uncertainty-calibrated error to filter reliable data.
arXiv Detail & Related papers (2023-01-01T05:02:46Z) - A Close Look into the Calibration of Pre-trained Language Models [56.998539510508515]
Pre-trained language models (PLMs) may fail in giving reliable estimates of their predictive uncertainty.
We study the dynamic change in PLMs' calibration performance in training.
We extend two recently proposed learnable methods that directly collect data to train models to produce reasonable confidence estimates.
arXiv Detail & Related papers (2022-10-31T21:31:07Z) - Density-Aware Personalized Training for Risk Prediction in Imbalanced Medical Data [89.79617468457393]
Training models on data with a high imbalance rate (class density discrepancy) may lead to suboptimal predictions.
We propose a framework for training models that addresses this imbalance issue.
We demonstrate our model's improved performance in real-world medical datasets.
arXiv Detail & Related papers (2022-07-23T00:39:53Z) - BSM loss: A superior way in modeling aleatory uncertainty of fine-grained classification [0.0]
We propose a modified Bootstrapping loss (BS loss) function with a Mixup data augmentation strategy.
Our experiments indicated that the BS loss with Mixup (BSM) model can halve the Expected Calibration Error (ECE) compared to standard data augmentation.
BSM model is able to perceive the semantic distance of out-of-domain data, demonstrating high potential in real-world clinical practice.
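The BSM entry combines two standard ingredients: a soft bootstrapping loss (which blends the one-hot target with the model's own prediction before taking cross-entropy) and Mixup (which convexly combines pairs of inputs and labels). The sketch below is a generic illustration of these two ideas, not the paper's exact formulation; the blending weights `beta` and `lam` are illustrative defaults:

```python
import numpy as np

def soft_bootstrap_loss(probs, targets, beta=0.95):
    """Soft bootstrapping cross-entropy: the target is a blend of the
    one-hot label and the model's own predicted distribution.
    With beta=1 this reduces to ordinary cross-entropy."""
    probs = np.clip(probs, 1e-12, 1.0)
    blended = beta * targets + (1.0 - beta) * probs
    return float(-(blended * np.log(probs)).sum(axis=-1).mean())

def mixup(x1, y1, x2, y2, lam=0.7):
    """Mixup: convex combination of two examples and their labels."""
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```

Both pieces regularize the targets rather than the weights, which is why they tend to reduce overconfidence and, with it, calibration error.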
arXiv Detail & Related papers (2022-06-09T13:06:51Z) - Can uncertainty boost the reliability of AI-based diagnostic methods in
digital pathology? [3.8424737607413157]
We evaluate whether adding uncertainty estimates to DL predictions in digital pathology could add value in clinical applications.
We compare the effectiveness of model-integrated methods (MC dropout and Deep ensembles) with a model-agnostic approach.
Our results show that uncertainty estimates can add some reliability and reduce sensitivity to classification threshold selection.
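MC dropout, one of the model-integrated methods this entry compares, keeps dropout active at test time and averages several stochastic forward passes; the spread across passes provides an uncertainty signal. A minimal, framework-agnostic sketch (the `stochastic_forward` callable is a placeholder for a network run in dropout-enabled mode, not an API from the paper):

```python
import numpy as np

def mc_dropout_predict(stochastic_forward, x, n_passes=20):
    """Monte Carlo dropout at inference: run several stochastic
    forward passes and average the softmax outputs. Returns the
    averaged class probabilities and the predictive entropy of that
    average (higher entropy = more uncertain)."""
    probs = np.stack([stochastic_forward(x) for _ in range(n_passes)])
    mean_probs = probs.mean(axis=0)
    entropy = -(mean_probs * np.log(np.clip(mean_probs, 1e-12, 1.0))).sum(axis=-1)
    return mean_probs, entropy
```

Deep ensembles follow the same averaging recipe, but the passes come from independently trained models instead of dropout masks.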
arXiv Detail & Related papers (2021-12-17T10:10:00Z) - Learning to Predict Error for MRI Reconstruction [67.76632988696943]
We demonstrate that predictive uncertainty estimated by current methods does not correlate strongly with prediction error.
We propose a novel method that estimates the target labels and magnitude of the prediction error in two steps.
arXiv Detail & Related papers (2020-02-13T15:55:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.