Trustworthy Medical Segmentation with Uncertainty Estimation
- URL: http://arxiv.org/abs/2111.05978v1
- Date: Wed, 10 Nov 2021 22:46:05 GMT
- Title: Trustworthy Medical Segmentation with Uncertainty Estimation
- Authors: Giuseppina Carannante, Dimah Dera, Nidhal C. Bouaynaya, Ghulam Rasool,
and Hassan M. Fathallah-Shaykh
- Abstract summary: This paper introduces a new Bayesian deep learning framework for uncertainty quantification in segmentation neural networks.
We evaluate the proposed framework on medical image segmentation data from Magnetic Resonance Imaging and Computed Tomography scans.
Our experiments on multiple benchmark datasets demonstrate that the proposed framework is more robust to noise and adversarial attacks as compared to state-of-the-art segmentation models.
- Score: 0.7829352305480285
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Learning (DL) holds great promise in reshaping healthcare systems
given its precision, efficiency, and objectivity. However, the brittleness of
DL models to noisy and out-of-distribution inputs hampers their deployment in
the clinic. Most systems produce point estimates without further information
about model uncertainty or confidence. This paper introduces a new Bayesian
deep learning framework for uncertainty quantification in segmentation neural
networks, specifically encoder-decoder architectures. The proposed framework
uses the first-order Taylor series approximation to propagate and learn the
first two moments (mean and covariance) of the distribution of the model
parameters given the training data by maximizing the evidence lower bound. The
output consists of two maps: the segmented image and the uncertainty map of the
segmentation. The uncertainty in the segmentation decisions is captured by the
covariance matrix of the predictive distribution. We evaluate the proposed
framework on medical image segmentation data from Magnetic Resonance Imaging
and Computed Tomography scans. Our experiments on multiple benchmark datasets
demonstrate that the proposed framework is more robust to noise and adversarial
attacks as compared to state-of-the-art segmentation models. Moreover, the
uncertainty map of the proposed framework associates low confidence (or
equivalently high uncertainty) to patches in the test input images that are
corrupted with noise, artifacts or adversarial attacks. Thus, the model can
self-assess its segmentation decisions when it makes an erroneous prediction or
misses part of the segmentation structures, e.g., tumor, by presenting higher
values in the uncertainty map.
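As a rough illustration of the moment-propagation idea described in the abstract (not the authors' implementation), the sketch below pushes a mean and covariance through one linear layer and a ReLU using a first-order Taylor approximation; the layer sizes, the mean-field weight variances, and all variable names are assumptions made for the example.

```python
import numpy as np

def propagate_linear(mu_x, Sigma_x, W_mean, W_var, b):
    """Propagate an input mean/covariance through a linear layer whose weights
    are Gaussian with elementwise mean W_mean and variance W_var (mean-field)."""
    mu_y = W_mean @ mu_x + b
    # Covariance has a term from the input covariance plus a diagonal term
    # contributed by the weight uncertainty (delta-method approximation).
    Sigma_y = W_mean @ Sigma_x @ W_mean.T
    Sigma_y += np.diag(W_var @ (mu_x ** 2 + np.diag(Sigma_x)))
    return mu_y, Sigma_y

def propagate_relu(mu, Sigma):
    """First-order Taylor approximation through ReLU: scale the covariance by
    the Jacobian J = diag(1[mu > 0])."""
    J = (mu > 0).astype(mu.dtype)
    return np.maximum(mu, 0.0), (J[:, None] * J[None, :]) * Sigma

# Toy usage with made-up sizes: the diagonal of the final covariance acts as a
# per-unit uncertainty estimate, analogous to the paper's uncertainty map.
rng = np.random.default_rng(0)
mu_x, Sigma_x = rng.normal(size=4), 0.01 * np.eye(4)
W_mean, W_var, b = rng.normal(size=(3, 4)), 0.05 * np.ones((3, 4)), np.zeros(3)
mu_h, Sigma_h = propagate_relu(*propagate_linear(mu_x, Sigma_x, W_mean, W_var, b))
print(mu_h, np.diag(Sigma_h))
```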
Related papers
- Anatomically-aware Uncertainty for Semi-supervised Image Segmentation [12.175556059523863]
Semi-supervised learning relaxes the need for large pixel-wise labeled datasets for image segmentation by leveraging unlabeled data.
Uncertainty estimation methods rely on multiple inferences from the model predictions that must be computed for each training step.
This work proposes a novel method to estimate segmentation uncertainty by leveraging global information from the segmentation masks.
arXiv Detail & Related papers (2023-10-24T18:03:07Z)
- Hierarchical Uncertainty Estimation for Medical Image Segmentation Networks [1.9564356751775307]
Uncertainty exists in both images (noise) and manual annotations (human errors and bias) used for model training.
We propose a simple yet effective method for estimating uncertainties at multiple levels.
We demonstrate that a deep learning segmentation network, such as the U-Net, can achieve high segmentation performance.
arXiv Detail & Related papers (2023-08-16T16:09:23Z)
- Towards Better Certified Segmentation via Diffusion Models [62.21617614504225]
Segmentation models can be vulnerable to adversarial perturbations, which hinders their use in critical decision systems like healthcare or autonomous driving.
Recently, randomized smoothing has been proposed to certify segmentation predictions by adding Gaussian noise to the input to obtain theoretical guarantees.
In this paper, we address the problem of certifying segmentation prediction using a combination of randomized smoothing and diffusion models.
arXiv Detail & Related papers (2023-06-16T16:30:39Z)
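A minimal sketch of the randomized-smoothing idea mentioned in the entry above, assuming a per-pixel majority vote over Gaussian-noised copies of the input; the model interface, noise level, and sample count are placeholders, and the diffusion-model denoising and certification-radius computation from that paper are omitted.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def smoothed_segmentation(model, image, num_classes, sigma=0.25, n_samples=32):
    """Per-pixel majority vote of a segmentation model over Gaussian-noised
    copies of `image` ([C, H, W]); `model` is assumed to return logits [N, K, H, W]."""
    batch = image.unsqueeze(0).repeat(n_samples, 1, 1, 1)
    noisy = batch + sigma * torch.randn_like(batch)           # Gaussian perturbations
    labels = model(noisy).argmax(dim=1)                       # [N, H, W] hard labels
    votes = F.one_hot(labels, num_classes).sum(dim=0)         # [H, W, K] vote counts
    majority = votes.argmax(dim=-1)                           # smoothed prediction
    agreement = votes.max(dim=-1).values.float() / n_samples  # rough per-pixel confidence
    return majority, agreement
```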
- Towards Reliable Medical Image Segmentation by utilizing Evidential Calibrated Uncertainty [52.03490691733464]
We introduce DEviS, an easily implementable foundational model that seamlessly integrates into various medical image segmentation networks.
By leveraging subjective logic theory, we explicitly model probability and uncertainty for the problem of medical image segmentation.
DEviS incorporates an uncertainty-aware filtering module, which utilizes the metric of uncertainty-calibrated error to filter reliable data.
arXiv Detail & Related papers (2023-01-01T05:02:46Z)
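Subjective-logic approaches of the kind referenced in the DEviS entry above typically map network outputs to Dirichlet evidence; the generic sketch below is not DEviS's actual code (its calibration and filtering modules are described in the paper) but illustrates that mapping.

```python
import torch
import torch.nn.functional as F

def evidential_uncertainty(logits):
    """Map per-pixel logits [B, K, H, W] to Dirichlet parameters and a
    subjective-logic uncertainty mass u = K / sum(alpha)."""
    evidence = F.softplus(logits)              # non-negative evidence per class
    alpha = evidence + 1.0                     # Dirichlet concentration parameters
    strength = alpha.sum(dim=1, keepdim=True)  # total Dirichlet strength per pixel
    prob = alpha / strength                    # expected class probabilities
    uncertainty = logits.shape[1] / strength   # high where total evidence is low
    return prob, uncertainty
```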
- Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z)
- PCA: Semi-supervised Segmentation with Patch Confidence Adversarial Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to get enough gradient feedback, which aids the discriminator in converging to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z)
- Inconsistency-aware Uncertainty Estimation for Semi-supervised Medical Image Segmentation [92.9634065964963]
We present a new semi-supervised segmentation model, the conservative-radical network (CoraNet), based on our uncertainty estimation and separate self-training strategy.
Compared with the current state of the art, our CoraNet has demonstrated superior performance.
arXiv Detail & Related papers (2021-10-17T08:49:33Z)
- Uncertainty Quantification in Medical Image Segmentation with Multi-decoder U-Net [3.961279440272763]
We exploit medical image segmentation uncertainty by measuring segmentation performance with multiple annotations in a supervised learning manner.
We propose a U-Net based architecture with multiple decoders, where the image representation is encoded with the same encoder, and segmentation referring to each annotation is estimated with multiple decoders.
The proposed architecture is trained in an end-to-end manner and is able to improve predictive uncertainty estimates.
arXiv Detail & Related papers (2021-09-15T01:46:29Z)
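A schematic of the shared-encoder, multiple-decoder idea summarized in the entry above, with made-up module names and a deliberately tiny encoder in place of a full U-Net; the actual architecture, skip connections, and training objectives are defined in that paper.

```python
import torch
import torch.nn as nn

class MultiDecoderSeg(nn.Module):
    """One shared encoder, one decoder head per annotation; disagreement across
    decoder outputs serves as an uncertainty signal (illustrative only)."""
    def __init__(self, in_ch=1, classes=2, n_decoders=3, width=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        )
        self.decoders = nn.ModuleList(
            nn.Conv2d(width, classes, 1) for _ in range(n_decoders)
        )

    def forward(self, x):
        feats = self.encoder(x)
        probs = torch.stack([d(feats).softmax(dim=1) for d in self.decoders])  # [D, B, K, H, W]
        mean_prob = probs.mean(dim=0)              # consensus segmentation map
        uncertainty = probs.var(dim=0).sum(dim=1)  # per-pixel disagreement across decoders
        return mean_prob, uncertainty
```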
- An Uncertainty-Driven GCN Refinement Strategy for Organ Segmentation [53.425900196763756]
We propose a segmentation refinement method based on uncertainty analysis and graph convolutional networks.
We employ the uncertainty levels of the convolutional network in a particular input volume to formulate a semi-supervised graph learning problem.
We show that our method outperforms the state-of-the-art CRF refinement method by improving the Dice score by 1% for the pancreas and 2% for the spleen.
arXiv Detail & Related papers (2020-12-06T18:55:07Z)
- Uncertainty Quantification using Variational Inference for Biomedical Image Segmentation [0.0]
We use an encoder-decoder architecture based on variational inference techniques for segmenting brain tumour images.
We evaluate our work on the publicly available BRATS dataset using the Dice Similarity Coefficient (DSC) and Intersection over Union (IoU) as the evaluation metrics.
arXiv Detail & Related papers (2020-08-12T20:08:04Z)
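For reference, the two evaluation metrics named in the last entry can be computed for binary masks as follows; this is a plain, illustrative implementation not taken from any of the papers above.

```python
import numpy as np

def dice_and_iou(pred, target, eps=1e-7):
    """Dice Similarity Coefficient and Intersection over Union for binary
    masks given as boolean or {0, 1} arrays of equal shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
    iou = (intersection + eps) / (union + eps)
    return dice, iou

# Example: Dice = 2|A and B| / (|A| + |B|), IoU = |A and B| / |A or B|
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_and_iou(a, b))  # -> (~0.667, 0.5)
```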
This list is automatically generated from the titles and abstracts of the papers on this site.