Improving Image-Based Precision Medicine with Uncertainty-Aware Causal
Models
- URL: http://arxiv.org/abs/2305.03829v4
- Date: Thu, 10 Aug 2023 15:51:03 GMT
- Title: Improving Image-Based Precision Medicine with Uncertainty-Aware Causal
Models
- Authors: Joshua Durso-Finley, Jean-Pierre Falet, Raghav Mehta, Douglas L.
Arnold, Nick Pawlowski, Tal Arbel
- Abstract summary: We use Bayesian deep learning for estimating the posterior distribution over factual and counterfactual outcomes on several treatments.
We train and evaluate this model to predict future new and enlarging T2 lesion counts on a large, multi-center dataset of MR brain images of patients with multiple sclerosis.
- Score: 3.5770353345663053
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image-based precision medicine aims to personalize treatment decisions based
on an individual's unique imaging features so as to improve their clinical
outcome. Machine learning frameworks that integrate uncertainty estimation as
part of their treatment recommendations would be safer and more reliable.
However, little work has been done in adapting uncertainty estimation
techniques and validation metrics for precision medicine. In this paper, we use
Bayesian deep learning for estimating the posterior distribution over factual
and counterfactual outcomes on several treatments. This allows for estimating
the uncertainty for each treatment option and for the individual treatment
effects (ITE) between any two treatments. We train and evaluate this model to
predict future new and enlarging T2 lesion counts on a large, multi-center
dataset of MR brain images of patients with multiple sclerosis, exposed to
several treatments during randomized controlled trials. We evaluate the
correlation of the uncertainty estimate with the factual error, and, given the
lack of ground truth counterfactual outcomes, demonstrate how uncertainty for
the ITE prediction relates to bounds on the ITE error. Lastly, we demonstrate
how knowledge of uncertainty could modify clinical decision-making to improve
individual patient and clinical trial outcomes.
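The abstract does not specify an implementation, so the following is a minimal sketch of the general recipe it describes: a multi-head outcome model with Monte Carlo dropout as an approximate posterior, yielding samples over per-treatment outcomes from which per-treatment uncertainty and the individual treatment effect (ITE) distribution can be read off. The architecture, feature dimensionality, and dropout-based posterior are illustrative assumptions, not the authors' model.

```python
# A rough sketch, not the authors' implementation: Monte Carlo dropout over a
# multi-head outcome model. Architecture, feature size, and treatment count
# are illustrative assumptions.
import torch
import torch.nn as nn


class MultiTreatmentOutcomeModel(nn.Module):
    """Predicts a future lesion-count outcome for each candidate treatment."""

    def __init__(self, n_features: int, n_treatments: int, p_drop: float = 0.2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p_drop),
        )
        # One regression head per treatment arm.
        self.heads = nn.ModuleList([nn.Linear(64, 1) for _ in range(n_treatments)])

    def forward(self, x):
        h = self.backbone(x)
        return torch.cat([head(h) for head in self.heads], dim=-1)  # (batch, n_treatments)


@torch.no_grad()
def mc_dropout_posterior(model, x, n_samples: int = 100):
    """Approximate posterior samples over all treatment outcomes via MC dropout."""
    model.train()  # keep dropout active at inference time
    return torch.stack([model(x) for _ in range(n_samples)])  # (n_samples, batch, n_treatments)


# Toy usage with random image-derived features for one patient.
model = MultiTreatmentOutcomeModel(n_features=32, n_treatments=3)
x = torch.randn(1, 32)
samples = mc_dropout_posterior(model, x)   # (100, 1, 3)

outcome_mean = samples.mean(dim=0)         # expected outcome under each treatment
outcome_std = samples.std(dim=0)           # per-treatment predictive uncertainty

# Individual treatment effect (ITE) between treatments 1 and 0, with its own
# uncertainty, obtained from the same posterior samples.
ite_samples = samples[..., 1] - samples[..., 0]
ite_mean, ite_std = ite_samples.mean(dim=0), ite_samples.std(dim=0)
```

Because every treatment head shares the same posterior samples, the spread of `ite_samples` directly quantifies the uncertainty on a pairwise treatment comparison, which is the quantity the abstract relates to bounds on the ITE error.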
Related papers
- Quantifying Aleatoric Uncertainty of the Treatment Effect: A Novel Orthogonal Learner [72.20769640318969]
Estimating causal quantities from observational data is crucial for understanding the safety and effectiveness of medical treatments.
Medical practitioners require not only estimates of average causal quantities, but also an understanding of the randomness of the treatment effect as a random variable.
This randomness is referred to as aleatoric uncertainty and is necessary for understanding the probability of benefit from treatment or quantiles of the treatment effect.
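As a small, generic illustration (not the paper's orthogonal learner), the quantities mentioned above, probability of benefit and quantiles of the treatment effect, are simple functionals of the treatment-effect distribution once samples from it are available; the numbers below are synthetic.

```python
# Synthetic illustration: probability of benefit and effect quantiles computed
# from hypothetical draws of one patient's treatment-effect distribution
# (negative values mean fewer lesions under treatment, i.e. benefit).
import numpy as np

rng = np.random.default_rng(0)
effect_samples = rng.normal(loc=-1.5, scale=2.0, size=10_000)

prob_benefit = np.mean(effect_samples < 0.0)                  # P(effect favours treatment)
q10, q50, q90 = np.quantile(effect_samples, [0.1, 0.5, 0.9])  # spread of plausible effects
print(f"P(benefit) = {prob_benefit:.2f}; 10/50/90% quantiles: {q10:.2f}, {q50:.2f}, {q90:.2f}")
```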
arXiv Detail & Related papers (2024-11-05T18:14:49Z)
- Uncertainty-Aware Optimal Treatment Selection for Clinical Time Series [4.656302602746229]
This paper introduces a novel method integrating counterfactual estimation techniques and uncertainty quantification.
We validate our method using two simulated datasets, one focused on the cardiovascular system and the other on COVID-19.
Our findings indicate that our method has robust performance across different counterfactual estimation baselines.
arXiv Detail & Related papers (2024-10-11T13:56:25Z)
- Improving Robustness and Reliability in Medical Image Classification with Latent-Guided Diffusion and Nested-Ensembles [4.249986624493547]
Ensemble deep learning has been shown to achieve high predictive accuracy and uncertainty estimation; however, perturbations in the input images at test time can still lead to significant performance degradation.
LaDiNE is a novel and robust probabilistic method capable of inferring informative and invariant latent variables from the input images.
arXiv Detail & Related papers (2023-10-24T15:53:07Z)
- Towards Reliable Medical Image Segmentation by utilizing Evidential Calibrated Uncertainty [52.03490691733464]
We introduce DEviS, an easily implementable foundational model that seamlessly integrates into various medical image segmentation networks.
By leveraging subjective logic theory, we explicitly model probability and uncertainty for the problem of medical image segmentation.
DEviS incorporates an uncertainty-aware filtering module, which uses an uncertainty-calibrated error metric to select reliable data.
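For readers unfamiliar with the subjective-logic formulation, here is a hedged sketch of the standard evidential recipe it builds on (per-pixel evidence mapped to a Dirichlet distribution), not DEviS's exact architecture; the softplus evidence mapping and tensor shapes are assumptions.

```python
# Standard evidential formulation (an assumption here, not DEviS's exact design):
# non-negative per-pixel evidence defines a Dirichlet distribution, giving both
# class probabilities and an explicit uncertainty (vacuity) map.
import torch
import torch.nn.functional as F


def evidential_probs_and_uncertainty(logits: torch.Tensor):
    """logits: (batch, n_classes, H, W) raw segmentation network outputs."""
    evidence = F.softplus(logits)                 # non-negative evidence
    alpha = evidence + 1.0                        # Dirichlet concentration parameters
    strength = alpha.sum(dim=1, keepdim=True)     # Dirichlet strength S
    probs = alpha / strength                      # expected class probabilities
    uncertainty = logits.shape[1] / strength      # vacuity: high where evidence is low
    return probs, uncertainty.squeeze(1)

# Pixels or cases with high vacuity could then be filtered out, in the spirit
# of the uncertainty-aware filtering module described above.
```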
arXiv Detail & Related papers (2023-01-01T05:02:46Z)
- Uncertainty estimations methods for a deep learning model to aid in clinical decision-making -- a clinician's perspective [0.0]
There are several deep learning-inspired uncertainty estimation techniques, but few are implemented on medical datasets.
We compared dropout variational inference (DO), test-time augmentation (TTA), conformal predictions, and single deterministic methods for estimating uncertainty.
It may be important to evaluate multiple estimation techniques before incorporating a model into clinical practice.
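To make one of the compared techniques concrete, below is a minimal sketch of split conformal prediction for a classifier; the nonconformity score and variable names are illustrative choices rather than details taken from the paper.

```python
# Illustrative split conformal prediction for a classifier; the nonconformity
# score (1 - true-class probability) is a common default, assumed here.
import numpy as np


def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """cal_probs/test_probs: (n, n_classes) softmax outputs; cal_labels: (n,) ints."""
    # Nonconformity score: one minus the probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)  # finite-sample correction
    q = np.quantile(scores, level, method="higher")
    # Prediction set: every class whose probability clears the threshold.
    return test_probs >= 1.0 - q  # boolean mask of shape (n_test, n_classes)
```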
arXiv Detail & Related papers (2022-10-02T17:54:54Z)
- Benchmarking Heterogeneous Treatment Effect Models through the Lens of Interpretability [82.29775890542967]
Estimating personalized effects of treatments is a complex, yet pervasive problem.
Recent developments in the machine learning literature on heterogeneous treatment effect estimation gave rise to many sophisticated, but opaque, tools.
We use post-hoc feature importance methods to identify features that influence the model's predictions.
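A hedged sketch of what such a post-hoc analysis can look like: permutation importance applied to a fitted treatment-effect model. The `predict_effect` callable and the use of mean absolute change as the importance score are assumptions for illustration, not the benchmark's protocol.

```python
# Permutation importance for a fitted treatment-effect model; `predict_effect`
# is a placeholder callable, and mean absolute change is an assumed score.
import numpy as np


def permutation_importance_for_effects(predict_effect, X, n_repeats=10, seed=0):
    rng = np.random.default_rng(seed)
    baseline = predict_effect(X)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        deltas = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X[:, j])  # break feature j's association
            # How much do the effect predictions move when feature j is scrambled?
            deltas.append(np.mean(np.abs(predict_effect(X_perm) - baseline)))
        importances[j] = np.mean(deltas)
    return importances
```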
arXiv Detail & Related papers (2022-06-16T17:59:05Z)
- To Impute or not to Impute? -- Missing Data in Treatment Effect Estimation [84.76186111434818]
We identify a new missingness mechanism, which we term mixed confounded missingness (MCM), where some missingness determines treatment selection and other missingness is determined by treatment selection.
We show that naively imputing all data leads to poor performing treatment effects models, as the act of imputation effectively removes information necessary to provide unbiased estimates.
Our solution is selective imputation, where we use insights from MCM to inform precisely which variables should be imputed and which should not.
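An illustrative sketch of the selective-imputation idea (the column split and the mean/zero-fill choices are made-up examples, not the paper's MCM-derived rule): impute only the covariates whose missingness is considered ignorable, and keep explicit missingness indicators for those tied to treatment selection.

```python
# Made-up column split for illustration; the paper's MCM analysis, not this
# heuristic, determines which variables are safe to impute.
import pandas as pd


def selectively_impute(df: pd.DataFrame, impute_cols, keep_missing_cols):
    out = df.copy()
    # Covariates whose missingness is considered ignorable: simple mean imputation.
    out[impute_cols] = out[impute_cols].fillna(out[impute_cols].mean())
    # Covariates whose missingness is tied to treatment selection: keep the
    # missingness visible as an indicator instead of imputing it away.
    for col in keep_missing_cols:
        out[f"{col}_missing"] = df[col].isna().astype(int)
    out[keep_missing_cols] = out[keep_missing_cols].fillna(0.0)  # indicator carries the signal
    return out
```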
arXiv Detail & Related papers (2022-02-04T12:08:31Z)
- Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration [55.88616573143478]
Outcome prediction from clinical text can prevent doctors from overlooking possible risks.
Diagnoses at discharge, procedures performed, in-hospital mortality and length-of-stay prediction are four common outcome prediction targets.
We propose clinical outcome pre-training to integrate knowledge about patient outcomes from multiple public sources.
arXiv Detail & Related papers (2021-02-08T10:26:44Z)
- Integrating uncertainty in deep neural networks for MRI based stroke analysis [0.0]
We present a Bayesian Convolutional Neural Network (CNN) yielding a probability for a stroke lesion on 2D Magnetic Resonance (MR) images.
In a cohort of 511 patients, our CNN achieved an accuracy of 95.33% at the image level, a significant improvement of 2% over a non-Bayesian counterpart.
arXiv Detail & Related papers (2020-08-13T09:50:17Z)
- Uncertainty estimation for classification and risk prediction on medical tabular data [0.0]
This work advances the understanding of uncertainty estimation for classification and risk prediction on medical data.
In a data-scarce field such as healthcare, the ability to measure the uncertainty of a model's prediction could potentially lead to improved effectiveness of decision support tools.
arXiv Detail & Related papers (2020-04-13T08:46:41Z)
- Learning to Predict Error for MRI Reconstruction [67.76632988696943]
We demonstrate that predictive uncertainty estimated by the current methods does not highly correlate with prediction error.
We propose a novel method that estimates the target labels and magnitude of the prediction error in two steps.
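A minimal sketch of the two-step idea described above, using generic regressors rather than the paper's reconstruction networks; the data and model choices are placeholders.

```python
# Generic regressors as placeholders for the paper's models; in practice the
# residuals for step 2 should come from held-out data to avoid optimism.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = X[:, 0] + 0.5 * rng.normal(size=500)

# Step 1: predict the target.
target_model = GradientBoostingRegressor().fit(X, y)
residual_magnitude = np.abs(y - target_model.predict(X))

# Step 2: predict how wrong the first model is expected to be on each case.
error_model = GradientBoostingRegressor().fit(X, residual_magnitude)
expected_error = error_model.predict(X)  # usable as a per-case uncertainty proxy
```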
arXiv Detail & Related papers (2020-02-13T15:55:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.