Statistical inference of the inter-sample Dice distribution for
discriminative CNN brain lesion segmentation models
- URL: http://arxiv.org/abs/2012.02755v2
- Date: Fri, 19 Feb 2021 14:45:24 GMT
- Title: Statistical inference of the inter-sample Dice distribution for
discriminative CNN brain lesion segmentation models
- Authors: Kevin Raina
- Abstract summary: Discriminative convolutional neural networks (CNNs) have performed well in many brain lesion segmentation tasks.
Segmentation sampling on discriminative CNNs is used to assess a trained model's robustness.
A rigorous confidence-based decision rule is proposed to decide whether to reject or accept a CNN model for a particular patient.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Discriminative convolutional neural networks (CNNs), for which a voxel-wise
conditional Multinoulli distribution is assumed, have performed well in many
brain lesion segmentation tasks. For a trained discriminative CNN to be used in
clinical practice, the patient's radiological features are input into the
model, which produces a conditional distribution of segmentations.
Capturing the uncertainty of the predictions can be useful in deciding whether
to abandon a model, or choose amongst competing models. In practice, however,
we never know the ground truth segmentation, and therefore can never know the
true model variance. In this work, segmentation sampling on discriminative CNNs
is used to assess a trained model's robustness by analyzing the inter-sample
Dice distribution on a new patient solely based on their magnetic resonance
(MR) images. Furthermore, by demonstrating the inter-sample Dice observations
are independent and identically distributed with a finite mean and variance
under certain conditions, a rigorous confidence-based decision rule is proposed
to decide whether to reject or accept a CNN model for a particular patient.
Applied to the ISLES 2015 (SISS) dataset, the model identified 7 predictions as
non-robust, and the average Dice coefficient calculated on the remaining brains
improved by 12 percent.
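The procedure described above can be sketched in a few lines of NumPy: sample binary segmentations from the model's voxel-wise probabilities, compute Dice over disjoint pairs of samples (so the observations are i.i.d., matching the paper's assumption), and accept the model only if a one-sided confidence bound on the mean inter-sample Dice clears a threshold. This is a minimal illustration, not the paper's exact rule; the threshold, significance level, and binary (two-class Multinoulli) setting are assumptions for the example.

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 1.0 if total == 0 else 2.0 * inter / total

def sample_segmentations(probs, n_samples, rng):
    """Draw binary segmentations from voxel-wise probabilities
    (two-class Multinoulli, i.e. independent Bernoulli per voxel)."""
    return [rng.random(probs.shape) < probs for _ in range(n_samples)]

def inter_sample_dice(samples):
    """Dice over disjoint pairs (s_0, s_1), (s_2, s_3), ... so that
    the resulting observations are i.i.d."""
    return np.array([dice(samples[2 * k], samples[2 * k + 1])
                     for k in range(len(samples) // 2)])

def accept_model(scores, threshold=0.7, alpha=0.05):
    """Accept the model for this patient if the lower bound of a
    one-sided normal-approximation confidence interval for the mean
    inter-sample Dice exceeds the threshold (values are illustrative)."""
    m, s = scores.mean(), scores.std(ddof=1)
    z = 1.6449  # one-sided 95% standard-normal quantile
    lower = m - z * s / np.sqrt(len(scores))
    return bool(lower >= threshold)
```

In practice `probs` would be the softmax output of the trained CNN on the new patient's MR images; a low inter-sample Dice distribution flags the prediction as non-robust without requiring the ground-truth segmentation.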
Related papers
- DiffSeg: A Segmentation Model for Skin Lesions Based on Diffusion Difference [2.9082809324784082]
We introduce DiffSeg, a segmentation model for skin lesions based on diffusion difference.
Its multi-output capability mimics doctors' annotation behavior, facilitating the visualization of segmentation result consistency and ambiguity.
We demonstrate the effectiveness of DiffSeg on the ISIC 2018 Challenge dataset, outperforming state-of-the-art U-Net-based methods.
arXiv Detail & Related papers (2024-04-25T09:57:52Z)
- Towards Better Certified Segmentation via Diffusion Models [62.21617614504225]
Segmentation models can be vulnerable to adversarial perturbations, which hinders their use in critical-decision systems like healthcare or autonomous driving.
Recently, randomized smoothing has been proposed to certify segmentation predictions by adding Gaussian noise to the input to obtain theoretical guarantees.
In this paper, we address the problem of certifying segmentation prediction using a combination of randomized smoothing and diffusion models.
arXiv Detail & Related papers (2023-06-16T16:30:39Z)
- Ambiguous Medical Image Segmentation using Diffusion Models [60.378180265885945]
We introduce a single diffusion model-based approach that produces multiple plausible outputs by learning a distribution over group insights.
Our proposed model generates a distribution of segmentation masks by leveraging the inherent sampling process of diffusion.
Comprehensive results show that our proposed approach outperforms existing state-of-the-art ambiguous segmentation networks.
arXiv Detail & Related papers (2023-04-10T17:58:22Z)
- How to Combine Variational Bayesian Networks in Federated Learning [0.0]
Federated learning enables multiple data centers to train a central model collaboratively without exposing any confidential data.
While deterministic models can achieve high prediction accuracy, their lack of calibration and inability to quantify uncertainty is problematic for safety-critical applications.
We study the effects of various aggregation schemes for variational Bayesian neural networks.
arXiv Detail & Related papers (2022-06-22T07:53:12Z)
- Predicting with Confidence on Unseen Distributions [90.68414180153897]
We connect domain adaptation and predictive uncertainty literature to predict model accuracy on challenging unseen distributions.
We find that the difference of confidences (DoC) of a classifier's predictions successfully estimates the classifier's performance change over a variety of shifts.
We specifically investigate the distinction between synthetic and natural distribution shifts and observe that despite its simplicity DoC consistently outperforms other quantifications of distributional difference.
arXiv Detail & Related papers (2021-07-07T15:50:18Z)
- Siamese Neural Network with Joint Bayesian Model Structure for Speaker Verification [54.96267179988487]
We propose a novel Siamese neural network (SiamNN) for speaker verification.
Joint distribution of samples is first formulated based on a joint Bayesian (JB) based generative model.
We further train the model parameters on paired samples as a binary discrimination task for speaker verification.
arXiv Detail & Related papers (2021-04-07T09:17:29Z)
- Classification of fNIRS Data Under Uncertainty: A Bayesian Neural Network Approach [0.15229257192293197]
We use a Bayesian Neural Network (BNN) to carry out a binary classification on an open-access dataset.
Our model produced an overall classification accuracy of 86.44% over 30 volunteers.
arXiv Detail & Related papers (2021-01-18T15:43:59Z)
- CQ-VAE: Coordinate Quantized VAE for Uncertainty Estimation with Application to Disk Shape Analysis from Lumbar Spine MRI Images [1.5841288368322592]
We propose a powerful generative model to learn a representation of ambiguity and to generate probabilistic outputs.
Our model, named Coordinate Quantization Variational Autoencoder (CQ-VAE), employs a discrete latent space with an internal discrete probability distribution.
A matching algorithm is used to establish the correspondence between model-generated samples and "ground-truth" samples.
arXiv Detail & Related papers (2020-10-17T04:25:32Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Decision-Making with Auto-Encoding Variational Bayes [71.44735417472043]
We show that a posterior approximation distinct from the variational distribution should be used for making decisions.
Motivated by these theoretical results, we propose learning several approximate proposals for the best model.
In addition to toy examples, we present a full-fledged case study of single-cell RNA sequencing.
arXiv Detail & Related papers (2020-02-17T19:23:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.