Informative Priors Improve the Reliability of Multimodal Clinical Data
Classification
- URL: http://arxiv.org/abs/2312.00794v1
- Date: Fri, 17 Nov 2023 03:44:15 GMT
- Title: Informative Priors Improve the Reliability of Multimodal Clinical Data
Classification
- Authors: L. Julian Lechuga Lopez and Tim G. J. Rudner and Farah E. Shamout
- Abstract summary: We consider stochastic neural networks and design a tailor-made multimodal data-driven (M2D2) prior distribution over network parameters.
We use simple and scalable mean-field variational inference to train a Bayesian neural network using the M2D2 prior.
Our empirical results show that the proposed method produces a more reliable predictive model compared to deterministic and Bayesian neural network baselines.
- Score: 7.474271086307501
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning-aided clinical decision support has the potential to
significantly improve patient care. However, existing efforts in this domain
for principled quantification of uncertainty have largely been limited to
applications of ad-hoc solutions that do not consistently improve reliability.
In this work, we consider stochastic neural networks and design a tailor-made
multimodal data-driven (M2D2) prior distribution over network parameters. We
use simple and scalable Gaussian mean-field variational inference to train a
Bayesian neural network using the M2D2 prior. We train and evaluate the
proposed approach using clinical time-series data in MIMIC-IV and corresponding
chest X-ray images in MIMIC-CXR for the classification of acute care
conditions. Our empirical results show that the proposed method produces a more
reliable predictive model compared to deterministic and Bayesian neural network
baselines.
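The method described in the abstract combines two standard ingredients: a Gaussian mean-field variational posterior over the network weights, and a Gaussian prior whose parameters are informed by the data rather than fixed at zero mean. The sketch below is a minimal, generic illustration of that recipe, not the authors' M2D2 prior or code: the informative prior mean comes from a quick deterministic warm-up fit, the toy features stand in for concatenated time-series and chest X-ray embeddings, and all names and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): Gaussian mean-field variational
# inference for a Bayesian linear classifier with an informative Gaussian
# prior over the weights. The prior mean is taken from a deterministic
# warm-up fit, standing in for a multimodal data-driven prior.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MeanFieldLinear(nn.Module):
    def __init__(self, prior_mean, prior_std=0.1):
        super().__init__()
        # Variational parameters: one Gaussian per weight (mean-field).
        self.mu = nn.Parameter(prior_mean.clone())
        self.rho = nn.Parameter(torch.full_like(prior_mean, -3.0))  # softplus(rho) = std
        self.register_buffer("prior_mean", prior_mean.clone())
        self.prior_std = prior_std

    def forward(self, x):
        std = F.softplus(self.rho)
        # Reparameterised sample of the weight matrix.
        w = self.mu + std * torch.randn_like(std)
        return x @ w

    def kl(self):
        # KL(q || p) with an informative Gaussian prior instead of a
        # zero-mean isotropic one.
        std = F.softplus(self.rho)
        q = torch.distributions.Normal(self.mu, std)
        p = torch.distributions.Normal(self.prior_mean, self.prior_std)
        return torch.distributions.kl_divergence(q, p).sum()


# Toy data: two "modalities" concatenated into one feature vector.
torch.manual_seed(0)
x = torch.randn(256, 20)          # e.g. time-series features + image features
y = (x[:, 0] + x[:, 10] > 0).long()

# Informative prior mean from a quick deterministic fit (a stand-in only).
det = nn.Linear(20, 2, bias=False)
opt = torch.optim.Adam(det.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    F.cross_entropy(det(x), y).backward()
    opt.step()

# Mean-field VI: minimise the negative ELBO = expected NLL + KL / N.
layer = MeanFieldLinear(prior_mean=det.weight.detach().T)
opt = torch.optim.Adam(layer.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = F.cross_entropy(layer(x), y) + layer.kl() / x.shape[0]
    loss.backward()
    opt.step()

# Predict by averaging over posterior samples (Monte Carlo estimate).
with torch.no_grad():
    probs = torch.stack([F.softmax(layer(x), dim=-1) for _ in range(32)]).mean(0)
```

The only change relative to a standard mean-field Bayesian layer is the KL term, which regularises the variational posterior towards the informative prior rather than a zero-mean isotropic Gaussian.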
Related papers
- Bayesian Uncertainty Estimation by Hamiltonian Monte Carlo: Applications to Cardiac MRI Segmentation [3.0665936758208447]
Deep learning methods have achieved state-of-the-art performance for many medical image segmentation tasks.
Recent studies show that deep neural networks (DNNs) can be miscalibrated and overconfident, leading to "silent failures".
We propose a Bayesian learning framework using Hamiltonian Monte Carlo (HMC), tempered by a cold posterior (CP) to accommodate medical data augmentation (the tempered-posterior form is sketched after this list).
arXiv Detail & Related papers (2024-03-04T18:47:56Z) - Estimating Epistemic and Aleatoric Uncertainty with a Single Model [5.871583927216653]
We introduce a new approach to ensembling, hyper-diffusion models (HyperDM).
HyperDM offers prediction accuracy on par with, and in some cases superior to, multi-model ensembles.
We validate our method on two distinct real-world tasks: X-ray computed tomography reconstruction and weather temperature forecasting.
arXiv Detail & Related papers (2024-02-05T19:39:52Z) - Inadequacy of common stochastic neural networks for reliable clinical
decision support [0.4262974002462632]
Widespread adoption of AI for medical decision making is still hindered by ethical and safety-related concerns.
Common deep learning approaches, however, tend towards overconfidence under data shift.
This study investigates their actual reliability in clinical applications.
arXiv Detail & Related papers (2024-01-24T18:49:30Z) - XAI for In-hospital Mortality Prediction via Multimodal ICU Data [57.73357047856416]
We propose an efficient, explainable AI solution for predicting in-hospital mortality via multimodal ICU data.
We employ multimodal learning in our framework, which can receive heterogeneous inputs from clinical data and make decisions.
Our framework can be easily transferred to other clinical tasks, which facilitates the discovery of crucial factors in healthcare research.
arXiv Detail & Related papers (2023-12-29T14:28:04Z) - Tractable Function-Space Variational Inference in Bayesian Neural
Networks [72.97620734290139]
A popular approach for estimating the predictive uncertainty of neural networks is to define a prior distribution over the network parameters.
We propose a scalable function-space variational inference method that allows incorporating prior information.
We show that the proposed method leads to state-of-the-art uncertainty estimation and predictive performance on a range of prediction tasks.
arXiv Detail & Related papers (2023-12-28T18:33:26Z) - Neural parameter calibration and uncertainty quantification for epidemic
forecasting [0.0]
We apply a novel and powerful computational method to the problem of learning probability densities on contagion parameters.
Using a neural network, we calibrate an ODE model to data of the spread of COVID-19 in Berlin in 2020.
We show convergence of our method to the true posterior on a simplified SIR model of epidemics, and also demonstrate our method's learning capabilities on a reduced dataset.
arXiv Detail & Related papers (2023-12-05T21:34:59Z) - Differentially private training of neural networks with Langevin
dynamics for calibrated predictive uncertainty [58.730520380312676]
We show that differentially private stochastic gradient descent (DP-SGD) can yield poorly calibrated, overconfident deep learning models.
This represents a serious issue for safety-critical applications, e.g. in medical diagnosis.
arXiv Detail & Related papers (2021-07-09T08:14:45Z) - Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for
Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z) - An Uncertainty-Driven GCN Refinement Strategy for Organ Segmentation [53.425900196763756]
We propose a segmentation refinement method based on uncertainty analysis and graph convolutional networks.
We employ the uncertainty levels of the convolutional network in a particular input volume to formulate a semi-supervised graph learning problem.
We show that our method outperforms the state-of-the-art CRF refinement method by improving the Dice score by 1% for the pancreas and 2% for the spleen.
arXiv Detail & Related papers (2020-12-06T18:55:07Z) - An Efficient Confidence Measure-Based Evaluation Metric for Breast
Cancer Screening Using Bayesian Neural Networks [3.834509400202395]
We propose a confidence measure-based evaluation metric for breast cancer screening.
We show that our confidence tuning results in increased accuracy on a reduced set of high-confidence images when compared to the baseline transfer learning approach.
arXiv Detail & Related papers (2020-08-12T20:34:14Z) - Unlabelled Data Improves Bayesian Uncertainty Calibration under
Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
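As background on the cold-posterior tempering mentioned in the Hamiltonian Monte Carlo entry above: a common formulation (general background, not a detail taken from that paper's abstract) sharpens the posterior with a temperature T < 1; some variants temper only the likelihood term.

```latex
% Cold-posterior tempering: T < 1 sharpens the posterior relative to T = 1.
p_T(\theta \mid \mathcal{D}) \;\propto\; \bigl[\, p(\mathcal{D} \mid \theta)\, p(\theta) \,\bigr]^{1/T}, \qquad T < 1.
```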
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.