Uncertainty-Aware Deep Ensembles for Reliable and Explainable
Predictions of Clinical Time Series
- URL: http://arxiv.org/abs/2010.11310v1
- Date: Fri, 16 Oct 2020 10:32:06 GMT
- Title: Uncertainty-Aware Deep Ensembles for Reliable and Explainable
Predictions of Clinical Time Series
- Authors: Kristoffer Wickstrøm, Karl Øyvind Mikalsen, Michael Kampffmeyer, Arthur Revhaug, Robert Jenssen
- Abstract summary: We propose a deep ensemble approach for explaining deep learning-based time series predictions.
A measure of uncertainty in the relevance scores is computed by taking the standard deviation across the relevance scores produced by each model.
Results demonstrate that the proposed ensemble is more accurate in locating relevant time steps.
- Score: 21.11327248500246
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning-based support systems have demonstrated encouraging results in
numerous clinical applications involving the processing of time series data.
While such systems are often very accurate, they have no inherent mechanism for
explaining what influenced the predictions, which is critical for clinical
tasks. However, existing explainability techniques lack an important component
for trustworthy and reliable decision support, namely a notion of uncertainty.
In this paper, we address this lack of uncertainty by proposing a deep ensemble
approach where a collection of DNNs are trained independently. A measure of
uncertainty in the relevance scores is computed by taking the standard
deviation across the relevance scores produced by each model in the ensemble,
which in turn is used to make the explanations more reliable. The class
activation mapping method is used to assign a relevance score for each time
step in the time series. Results demonstrate that the proposed ensemble is more
accurate in locating relevant time steps and is more consistent across random
initializations, thus making the model more trustworthy. The proposed
methodology paves the way for constructing trustworthy and dependable support
systems for processing clinical time series in healthcare-related tasks.
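To make the mechanism concrete, here is a minimal sketch (ours, not the authors' released code) of the computation the abstract describes, assuming each ensemble member is a 1D-CNN ending in global average pooling plus a linear classifier, the architecture class activation mapping requires; all class and function names are illustrative.

# Minimal sketch: ensemble CAM relevance for clinical time series,
# with uncertainty as the standard deviation across ensemble members.
import torch
import torch.nn as nn

class CAM1D(nn.Module):
    """Toy 1D-CNN whose last layers (GAP + linear) admit class activation maps."""
    def __init__(self, in_channels: int, n_classes: int, n_filters: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, n_filters, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(n_filters, n_filters, kernel_size=7, padding=3), nn.ReLU(),
        )
        self.classifier = nn.Linear(n_filters, n_classes)

    def forward(self, x):                          # x: (batch, channels, time)
        f = self.features(x)                       # (batch, filters, time)
        return self.classifier(f.mean(dim=-1)), f  # logits via global avg pooling

    def cam(self, x, target_class: int):
        """Relevance score per time step: classifier weights dotted with features."""
        _, f = self.forward(x)
        w = self.classifier.weight[target_class]   # (filters,)
        return torch.einsum("k,bkt->bt", w, f)     # (batch, time)

def ensemble_relevance(models, x, target_class: int):
    """Mean relevance and its uncertainty (std across the ensemble)."""
    with torch.no_grad():
        cams = torch.stack([m.cam(x, target_class) for m in models])  # (M, b, t)
    return cams.mean(dim=0), cams.std(dim=0)

Usage, in the spirit of the paper: train M copies with different random initializations, then call ensemble_relevance(models, x, target_class); time steps with a large standard deviation are those the ensemble disagrees on, which is the spread the abstract proposes for flagging unreliable explanations.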
Related papers
- Uncertainty-Aware Optimal Treatment Selection for Clinical Time Series [4.656302602746229]
This paper introduces a novel method integrating counterfactual estimation techniques and uncertainty quantification.
We validate our method using two simulated datasets, one focused on the cardiovascular system and the other on COVID-19.
Our findings indicate that our method has robust performance across different counterfactual estimation baselines.
arXiv Detail & Related papers (2024-10-11T13:56:25Z)
- On the Identification of Temporally Causal Representation with Instantaneous Dependence [50.14432597910128]
Temporally causal representation learning aims to identify the latent causal process from time series observations.
Most methods require the assumption that the latent causal processes do not have instantaneous relations.
We propose an textbfIDentification framework for instantanetextbfOus textbfLatent dynamics.
arXiv Detail & Related papers (2024-05-24T08:08:05Z)
- Inadequacy of common stochastic neural networks for reliable clinical decision support [0.4262974002462632]
Widespread adoption of AI for medical decision making is still hindered by ethical and safety-related concerns.
Common deep learning approaches, however, tend towards overconfidence under data shift.
This study investigates their actual reliability in clinical applications.
arXiv Detail & Related papers (2024-01-24T18:49:30Z)
- Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
arXiv Detail & Related papers (2023-11-08T00:10:21Z)
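For intuition on the entry above, here is a hedged sketch (ours, not that paper's algorithm) of an anytime-valid likelihood-ratio confidence sequence for a Bernoulli mean: the numerator is a prequential plug-in likelihood whose factors use only past data, so the ratio is a test martingale and, by Ville's inequality, excluding a candidate once the ratio exceeds 1/delta keeps coverage at level 1 - delta at all times.

# Illustrative likelihood-ratio confidence sequence for a Bernoulli mean.
import numpy as np

def bernoulli_confidence_sequence(xs, delta=0.05, grid=None):
    """Return the candidate means still plausible after observing xs (0/1 values)."""
    grid = np.linspace(0.001, 0.999, 999) if grid is None else grid
    log_num = 0.0                    # prequential plug-in log-likelihood
    log_den = np.zeros_like(grid)    # log-likelihood under each candidate theta
    n, s = 0, 0                      # count and running sum of observations
    for x in xs:
        p = (s + 0.5) / (n + 1.0)    # plug-in estimate built from the past only
        log_num += np.log(p if x else 1.0 - p)
        log_den += np.log(np.where(x, grid, 1.0 - grid))
        n, s = n + 1, s + x
    # Keep theta while the likelihood ratio stays below 1/delta (Ville's inequality).
    return grid[log_num - log_den < np.log(1.0 / delta)]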
- Score Matching-based Pseudolikelihood Estimation of Neural Marked Spatio-Temporal Point Process with Uncertainty Quantification [59.81904428056924]
We introduce SMASH: a Score MAtching estimator for learning marked spatio-temporal point processes (STPPs) with uncertainty quantification.
Specifically, our framework adopts a normalization-free objective by estimating the pseudolikelihood of marked STPPs through score matching.
The superior performance of our proposed framework is demonstrated through extensive experiments in both event prediction and uncertainty quantification.
arXiv Detail & Related papers (2023-10-25T02:37:51Z)
- Benchmarking Scalable Epistemic Uncertainty Quantification in Organ Segmentation [7.313010190714819]
Deep learning-based methods for automatic organ segmentation have shown promise in aiding diagnosis and treatment planning.
Quantifying the uncertainty associated with model predictions is crucial in such critical clinical applications.
It remains unclear which method is preferred in the medical image analysis setting.
arXiv Detail & Related papers (2023-08-15T00:09:33Z)
- Towards Reliable Medical Image Segmentation by utilizing Evidential Calibrated Uncertainty [52.03490691733464]
We introduce DEviS, an easily implementable foundational model that seamlessly integrates into various medical image segmentation networks.
By leveraging subjective logic theory, we explicitly model probability and uncertainty for the problem of medical image segmentation.
DEviS incorporates an uncertainty-aware filtering module, which utilizes the metric of uncertainty-calibrated error to filter reliable data.
arXiv Detail & Related papers (2023-01-01T05:02:46Z)
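As background for the subjective-logic modeling mentioned in the entry above, here is a small illustrative sketch (ours, not DEviS itself) of the standard evidential formulation such methods build on: logits are mapped to non-negative evidence for a Dirichlet distribution, which yields both class probabilities and an explicit uncertainty mass u = K / S.

# Illustrative subjective-logic (evidential) outputs from classifier logits.
import torch
import torch.nn.functional as F

def evidential_outputs(logits):
    """Map K class logits to (probabilities, uncertainty) via subjective logic."""
    evidence = F.softplus(logits)                 # non-negative evidence per class
    alpha = evidence + 1.0                        # Dirichlet concentration parameters
    strength = alpha.sum(dim=-1, keepdim=True)    # S = sum of concentrations
    probs = alpha / strength                      # expected class probabilities
    uncertainty = logits.shape[-1] / strength.squeeze(-1)  # u = K / S
    return probs, uncertainty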
- TACTiS: Transformer-Attentional Copulas for Time Series [76.71406465526454]
Estimation of time-varying quantities is a fundamental component of decision making in fields such as healthcare and finance.
We propose a versatile method that estimates joint distributions using an attention-based decoder.
We show that our model produces state-of-the-art predictions on several real-world datasets.
arXiv Detail & Related papers (2022-02-07T21:37:29Z)
- Uncertainty-Aware Multiple Instance Learning from Large-Scale Long Time Series Data [20.2087807816461]
This paper proposes an uncertainty-aware multiple instance learning (MIL) framework to identify the most relevant period automatically.
We further incorporate another modality to accommodate unreliable predictions by training a separate model and conducting uncertainty-aware fusion.
Empirical results demonstrate that the proposed method can effectively detect the types of vessels based on the trajectory.
arXiv Detail & Related papers (2021-11-16T17:09:02Z)
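A brief sketch (ours, hypothetical, not that paper's model) of how attention-based MIL pooling can surface the most relevant period: each instance (e.g., a time window embedding) receives a learned relevance weight, and the weights double as an indicator of which period drove the bag-level prediction.

# Illustrative attention pooling for multiple instance learning.
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    """Scores each instance (time window) and aggregates a bag embedding."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, instances):            # instances: (bag_size, dim)
        a = torch.softmax(self.score(instances), dim=0)   # relevance per instance
        return (a * instances).sum(dim=0), a.squeeze(-1)  # bag embedding, weights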
- On the Practicality of Deterministic Epistemic Uncertainty [106.06571981780591]
Deterministic uncertainty methods (DUMs) achieve strong performance on detecting out-of-distribution data.
It remains unclear whether DUMs are well calibrated and can seamlessly scale to real-world applications.
arXiv Detail & Related papers (2021-07-01T17:59:07Z)
- Benchmarking Deep Learning Interpretability in Time Series Predictions [41.13847656750174]
Saliency methods are used extensively to highlight the importance of input features in model predictions.
We set out to extensively compare the performance of various saliency-based interpretability methods across diverse neural architectures.
arXiv Detail & Related papers (2020-10-26T22:07:53Z)
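To ground the comparison in the last entry, here is a minimal sketch (ours, not from the benchmark) of one of the simplest saliency baselines such studies include, gradient-times-input, assuming a model that maps (batch, channels, time) tensors to class logits.

# Illustrative gradient-times-input saliency for a time-series classifier.
import torch

def grad_times_input_saliency(model, x, target_class: int):
    """Per-time-step importance via gradient * input, aggregated over channels."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)                            # (batch, n_classes)
    logits[:, target_class].sum().backward()     # gradients w.r.t. the input
    return (x.grad * x).abs().sum(dim=1).detach()  # (batch, time)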