Beyond Voxel Prediction Uncertainty: Identifying brain lesions you can trust
- URL: http://arxiv.org/abs/2209.10877v1
- Date: Thu, 22 Sep 2022 09:20:05 GMT
- Title: Beyond Voxel Prediction Uncertainty: Identifying brain lesions you can trust
- Authors: Benjamin Lambert, Florence Forbes, Senan Doyle, Alan Tucholka and Michel Dojat
- Abstract summary: Deep neural networks have become the gold-standard approach for the automated segmentation of 3D medical images.
In this work, we propose to go beyond voxel-wise assessment using an innovative Graph Neural Network approach.
This network allows the fusion of three estimators of voxel uncertainty: entropy, variance, and the model's confidence.
- Score: 1.1199585259018459
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks have become the gold-standard approach for the automated segmentation of 3D medical images. Their full acceptance by clinicians, however, remains hampered by the lack of intelligible uncertainty assessment of the provided results. Most approaches to quantifying their uncertainty, such as the popular Monte Carlo dropout, are restricted to some measure of uncertainty in the prediction at the voxel level. Besides not being clearly related to genuine medical uncertainty, this is not clinically satisfying, as most objects of interest (e.g. brain lesions) are made of groups of voxels whose overall relevance may not simply reduce to the sum or mean of their individual uncertainties. In this work, we propose to go beyond voxel-wise assessment using an innovative Graph Neural Network approach, trained from the outputs of a Monte Carlo dropout model. This network allows the fusion of three estimators of voxel uncertainty: entropy, variance, and the model's confidence; and it can be applied to any lesion, regardless of its shape or size. We demonstrate the superiority of our approach for uncertainty estimation on a Multiple Sclerosis lesion segmentation task.
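To make the three fused estimators concrete, here is a minimal NumPy sketch (the array layout and function name are hypothetical; the paper's actual pipeline is not reproduced) of how entropy, variance, and confidence maps can be derived from Monte Carlo dropout samples:

```python
import numpy as np

def mc_dropout_voxel_uncertainty(probs):
    """Voxel-wise uncertainty estimators from Monte Carlo dropout samples.

    probs: softmax outputs of shape (T, C, D, H, W) -- T stochastic forward
    passes over C classes for one 3D volume (a hypothetical layout).
    Returns entropy, variance, and confidence maps of shape (D, H, W).
    """
    mean_p = probs.mean(axis=0)                               # (C, D, H, W)
    # Entropy of the mean predictive distribution.
    entropy = -(mean_p * np.log(mean_p + 1e-12)).sum(axis=0)
    # Variance across the T stochastic samples, averaged over classes.
    variance = probs.var(axis=0).mean(axis=0)
    # Model's confidence: probability of the most likely class.
    confidence = mean_p.max(axis=0)
    return entropy, variance, confidence
```

In the paper, these three maps, restricted to each candidate lesion's voxels, are what the Graph Neural Network fuses into a lesion-level assessment; the graph construction itself is not described in the abstract.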
Related papers
- SepsisLab: Early Sepsis Prediction with Uncertainty Quantification and Active Sensing [67.8991481023825]
Sepsis is the leading cause of in-hospital mortality in the USA.
Existing predictive models are usually trained on high-quality data with little missing information.
For potential high-risk patients whose predictions carry low confidence due to limited observations, we propose a robust active sensing algorithm.
arXiv Detail & Related papers (2024-07-24T04:47:36Z)
- Expert-aware uncertainty estimation for quality control of neural-based blood typing [0.0]
In medical diagnostics, accurate uncertainty estimation for neural-based models is essential for complementing second-opinion systems.
A major difficulty here is the lack of labels on the hardness of examples, making the uncertainty estimation problem almost unsupervised.
Our novel approach integrates expert assessments of case complexity into the neural network's learning process, utilizing both definitive target labels and supplementary complexity ratings.
Experiments demonstrate that our approach enhances uncertainty prediction, achieving a 2.5-fold improvement with expert labels and a 35% performance gain with estimates from a neural-based expert consensus.
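The summary does not specify the training objective; purely as an assumption, one plausible reading is a standard classification loss plus a term tying the model's predictive entropy to the expert's complexity rating, sketched below:

```python
import numpy as np

def expert_aware_loss(p, y_onehot, complexity, lam=0.5):
    """Hypothetical joint objective (not the paper's actual loss):
    cross-entropy on the definitive target label, plus a penalty pulling
    the normalized predictive entropy toward the expert's complexity
    rating in [0, 1].

    p: predicted class probabilities, shape (K,);
    y_onehot: one-hot target, shape (K,);
    complexity: expert complexity rating in [0, 1].
    """
    ce = -(y_onehot * np.log(p + 1e-12)).sum()
    pred_entropy = -(p * np.log(p + 1e-12)).sum() / np.log(len(p))
    return ce + lam * (pred_entropy - complexity) ** 2
```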
arXiv Detail & Related papers (2024-07-15T19:07:02Z)
- Towards Reliable Medical Image Segmentation by utilizing Evidential Calibrated Uncertainty [52.03490691733464]
We introduce DEviS, an easily implementable foundational model that seamlessly integrates into various medical image segmentation networks.
By leveraging subjective logic theory, we explicitly model probability and uncertainty for the problem of medical image segmentation.
DEviS incorporates an uncertainty-aware filtering module, which utilizes the metric of uncertainty-calibrated error to filter reliable data.
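The exact formulation is not given in the summary, but subjective-logic segmentation models typically map per-class evidence to a Dirichlet distribution whose "vacuity" serves as the uncertainty mass; a minimal sketch under that assumption:

```python
import numpy as np

def evidential_uncertainty(evidence):
    """Dirichlet-based probability and uncertainty from per-class evidence.

    evidence: non-negative array of shape (..., K), e.g. a softplus of the
    network's logits (an illustrative choice, not necessarily DEviS's).
    Returns (expected_prob, vacuity): expected class probabilities and
    the subjective-logic uncertainty mass u = K / S.
    """
    K = evidence.shape[-1]
    alpha = evidence + 1.0                    # Dirichlet concentration
    S = alpha.sum(axis=-1, keepdims=True)     # Dirichlet strength
    expected_prob = alpha / S                 # E[p_k] under Dir(alpha)
    vacuity = K / S[..., 0]                   # in (0, 1]; 1 = no evidence
    return expected_prob, vacuity
```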
arXiv Detail & Related papers (2023-01-01T05:02:46Z)
- Disentangled Uncertainty and Out of Distribution Detection in Medical Generative Models [7.6146285961466]
We study disentangled uncertainties in image to image translation tasks in the medical domain.
We use CycleGAN to convert T1-weighted brain MRI scans to T2-weighted brain MRI scans.
arXiv Detail & Related papers (2022-11-11T14:45:16Z)
- BayesNetCNN: incorporating uncertainty in neural networks for image-based classification tasks [0.29005223064604074]
We propose a method to convert a standard neural network into a Bayesian neural network.
We estimate the variability of predictions by sampling different networks similar to the original one at each forward pass.
We test our model in a large cohort of brain images from Alzheimer's Disease patients.
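The conversion recipe itself is not in the summary; as a simplified stand-in, sampling networks "similar to the original one" can be imitated by perturbing the weights with Gaussian noise at each forward pass:

```python
import numpy as np

def perturbed_forward(x, W, b, sigma=0.05, T=20, rng=None):
    """Mean prediction and variance for a single linear layer, obtained by
    sampling T networks similar to the original one via Gaussian weight
    noise. A simplified stand-in for the paper's actual
    standard-to-Bayesian conversion.

    x: inputs, shape (n, d_in); W: weights (d_in, d_out); b: bias (d_out,).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    outs = np.stack([x @ (W + sigma * rng.standard_normal(W.shape)) + b
                     for _ in range(T)])
    return outs.mean(axis=0), outs.var(axis=0)
```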
arXiv Detail & Related papers (2022-09-27T01:07:19Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of predictions for a classifier, based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
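NUQ's full estimator involves more machinery (e.g., bandwidth selection), but the core Nadaraya-Watson estimate of the conditional label distribution is simple to sketch; the Gaussian kernel and bandwidth below are illustrative assumptions:

```python
import numpy as np

def nadaraya_watson_uncertainty(x, X_train, y_train, n_classes, bandwidth=1.0):
    """Nadaraya-Watson estimate of p(y | x) with a Gaussian kernel, plus its
    entropy as an uncertainty score. A simplified sketch of the estimate at
    the core of NUQ, not the method's full construction.

    x: query features, shape (d,); X_train: (n, d); y_train: (n,) int labels.
    """
    sq_dists = ((X_train - x) ** 2).sum(axis=1)
    w = np.exp(-sq_dists / (2.0 * bandwidth ** 2))      # kernel weights
    onehot = np.eye(n_classes)[y_train]                 # (n, K)
    p = (w[:, None] * onehot).sum(axis=0) / (w.sum() + 1e-12)
    entropy = -(p * np.log(p + 1e-12)).sum()
    return p, entropy
```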
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Confidence Aware Neural Networks for Skin Cancer Detection [12.300911283520719]
We present three different methods for quantifying uncertainties for skin cancer detection from images.
The obtained results reveal that the predictive uncertainty estimation methods are capable of flagging risky and erroneous predictions.
We also demonstrate that ensemble approaches are more reliable in capturing uncertainties through inference.
arXiv Detail & Related papers (2021-07-19T19:21:57Z)
- Joint Dermatological Lesion Classification and Confidence Modeling with Uncertainty Estimation [23.817227116949958]
We propose an overall framework that jointly considers dermatological classification and uncertainty estimation.
The estimated confidence of each feature is pooled from a confidence network to avoid uncertain features and undesirable shifts.
We demonstrate the potential of the proposed approach on two state-of-the-art dermoscopic datasets.
arXiv Detail & Related papers (2021-07-19T11:54:37Z)
- Deterministic Neural Networks with Appropriate Inductive Biases Capture Epistemic and Aleatoric Uncertainty [91.01037972035635]
We show that a single softmax neural net with minimal changes can beat the uncertainty predictions of Deep Ensembles.
We study why, and show that with the right inductive biases, softmax neural nets trained with maximum likelihood reliably capture uncertainty through the feature-space density.
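The summary does not spell out the density model; a common proxy (assumed here for illustration) fits a Gaussian to training features from the penultimate layer and treats low log-density at test time as a sign of epistemic uncertainty:

```python
import numpy as np

def fit_feature_density(feats):
    """Fit a Gaussian to penultimate-layer training features, shape (n, d)."""
    mu = feats.mean(axis=0)
    cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])
    return mu, np.linalg.inv(cov)

def log_density_score(x, mu, prec):
    """Unnormalized Gaussian log-density of a test feature vector x;
    low values indicate high epistemic uncertainty / a likely novel input."""
    diff = x - mu
    return -0.5 * diff @ prec @ diff
```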
arXiv Detail & Related papers (2021-02-23T09:44:09Z)
- Approaching Neural Network Uncertainty Realism [53.308409014122816]
Quantifying or at least upper-bounding uncertainties is vital for safety-critical systems such as autonomous vehicles.
We evaluate uncertainty realism -- a strict quality criterion -- with a Mahalanobis distance-based statistical test.
We adapt it to the automotive domain and show that it significantly improves uncertainty realism compared to a plain encoder-decoder model.
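The test's details are not in the summary; one standard construction (an assumption here) checks whether squared Mahalanobis distances of residuals under the predicted covariances follow the chi-squared law implied by a calibrated Gaussian model:

```python
import numpy as np
from scipy import stats

def uncertainty_realism_test(errors, covariances):
    """Check uncertainty realism via Mahalanobis distances.

    errors: (n, d) residuals; covariances: (n, d, d) predicted covariances.
    If the predicted uncertainties are realistic (Gaussian assumption), the
    squared Mahalanobis distances follow a chi-squared law with d degrees
    of freedom; a Kolmogorov-Smirnov test quantifies the deviation.
    """
    d = errors.shape[1]
    m2 = np.array([e @ np.linalg.solve(S, e)
                   for e, S in zip(errors, covariances)])
    return stats.kstest(m2, stats.chi2(df=d).cdf)
```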
arXiv Detail & Related papers (2021-01-08T11:56:12Z)
- The Hidden Uncertainty in a Neural Network's Activations [105.4223982696279]
The distribution of a neural network's latent representations has been successfully used to detect out-of-distribution (OOD) data.
This work investigates whether this distribution correlates with a model's epistemic uncertainty, thus indicating its ability to generalise to novel inputs.
arXiv Detail & Related papers (2020-12-05T17:30:35Z)