Probabilistic 3D segmentation for aleatoric uncertainty quantification
in full 3D medical data
- URL: http://arxiv.org/abs/2305.00950v1
- Date: Mon, 1 May 2023 17:19:20 GMT
- Title: Probabilistic 3D segmentation for aleatoric uncertainty quantification
in full 3D medical data
- Authors: Christiaan G. A. Viviers, Amaan M. M. Valiuddin, Peter H. N. de With,
Fons van der Sommen
- Abstract summary: We develop a 3D probabilistic segmentation framework augmented with Normalizing Flows.
We are the first to report a 3D Squared Generalized Energy Distance (GED) of 0.401 and a high Hungarian-matched 3D IoU of 0.468.
- Score: 7.615431940103322
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Uncertainty quantification in medical images has become an essential addition
to segmentation models for practical application in the real world. Although
there are valuable developments in accurate uncertainty quantification methods
using 2D images and slices of 3D volumes, in clinical practice, the complete 3D
volumes (such as CT and MRI scans) are used to evaluate and plan the medical
procedure. As a result, the existing 2D methods miss the rich 3D spatial
information when resolving the uncertainty. A popular approach for quantifying
the ambiguity in the data is to learn a distribution over the possible
hypotheses. In recent work, this ambiguity has been modeled to be strictly
Gaussian. Normalizing Flows (NFs) are capable of modelling more complex
distributions and thus better fit the embedding space of the data. To this end,
we have developed a 3D probabilistic segmentation framework augmented with NFs,
enabling it to capture distributions of varying complexity. To test the
proposed approach, we evaluate the model on the LIDC-IDRI dataset for lung
nodule segmentation and quantify the aleatoric uncertainty introduced by the
multi-annotator setting and the inherent ambiguity in the CT data. Following this
approach, we are the first to report a 3D Squared Generalized Energy Distance
(GED) of 0.401 and a high Hungarian-matched 3D IoU of 0.468. The obtained results
reveal the value of capturing the 3D uncertainty using a flexible posterior
distribution augmented with a Normalizing Flow. Finally, we present the
aleatoric uncertainty visually, with the aim of providing clinicians with
additional insight into data ambiguity and facilitating more informed
decision-making.
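For readers unfamiliar with the two evaluation metrics quoted above, the sketch below shows how a squared Generalized Energy Distance and a Hungarian-matched IoU are conventionally computed from a set of model samples and annotator masks, using d = 1 - IoU as the distance and the Hungarian algorithm for matching. This is an illustrative Python sketch under those standard definitions, not the authors' released code; the mask shapes, sample counts, and random volumes are placeholders.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def iou_distance(a: np.ndarray, b: np.ndarray) -> float:
    """1 - IoU between two binary 3D masks (distance 0 when both masks are empty)."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 0.0
    return 1.0 - np.logical_and(a, b).sum() / union


def squared_ged(samples, annotations):
    """Squared Generalized Energy Distance:
    GED^2 = 2*E[d(S,Y)] - E[d(S,S')] - E[d(Y,Y')], with d = 1 - IoU.
    Pairwise means are taken over all pairs, including identical ones."""
    d_sy = np.mean([iou_distance(s, y) for s in samples for y in annotations])
    d_ss = np.mean([iou_distance(s, s2) for s in samples for s2 in samples])
    d_yy = np.mean([iou_distance(y, y2) for y in annotations for y2 in annotations])
    return 2.0 * d_sy - d_ss - d_yy


def hungarian_matched_iou(samples, annotations):
    """Match model samples to annotator masks with the Hungarian algorithm
    (minimising 1 - IoU) and return the mean IoU over the matched pairs."""
    cost = np.array([[iou_distance(s, y) for y in annotations] for s in samples])
    rows, cols = linear_sum_assignment(cost)
    return float(np.mean(1.0 - cost[rows, cols]))


# Toy example: random binary volumes stand in for sampled and annotated 3D masks.
rng = np.random.default_rng(0)
samples = [rng.random((16, 64, 64)) > 0.5 for _ in range(4)]      # model samples
annotations = [rng.random((16, 64, 64)) > 0.5 for _ in range(4)]  # e.g. 4 annotators
print("GED^2:", squared_ged(samples, annotations))
print("Hungarian-matched IoU:", hungarian_matched_iou(samples, annotations))
```

Lower GED^2 indicates that the spread of model samples better matches the spread of annotator masks, while a higher Hungarian-matched IoU indicates that individual samples align well with individual annotations.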
Related papers
- Resolution-Robust 3D MRI Reconstruction with 2D Diffusion Priors: Diverse-Resolution Training Outperforms Interpolation [18.917672392645006]
2D diffusion models trained on 2D slices are starting to be leveraged for 3D MRI reconstruction.
Existing methods pertain to a fixed voxel size, and performance degrades when the voxel size is varied.
We propose and study several approaches for resolution-robust 3D MRI reconstruction with 2D diffusion priors.
arXiv Detail & Related papers (2024-12-24T18:25:50Z)
- DSplats: 3D Generation by Denoising Splats-Based Multiview Diffusion Models [67.50989119438508]
We introduce DSplats, a novel method that directly denoises multiview images using Gaussian-based Reconstructors to produce realistic 3D assets.
Our experiments demonstrate that DSplats not only produces high-quality, spatially consistent outputs, but also sets a new standard in single-image to 3D reconstruction.
arXiv Detail & Related papers (2024-12-11T07:32:17Z)
- Uncertainty-Aware Adaptation for Self-Supervised 3D Human Pose Estimation [70.32536356351706]
We introduce MRP-Net that constitutes a common deep network backbone with two output heads subscribing to two diverse configurations.
We derive suitable measures to quantify prediction uncertainty at both pose and joint level.
We present a comprehensive evaluation of the proposed approach and demonstrate state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2022-03-29T07:14:58Z)
- Automated Model Design and Benchmarking of 3D Deep Learning Models for COVID-19 Detection with Chest CT Scans [72.04652116817238]
We propose a differentiable neural architecture search (DNAS) framework to automatically search for the 3D DL models for 3D chest CT scans classification.
We also exploit the Class Activation Mapping (CAM) technique on our models to provide the interpretability of the results.
arXiv Detail & Related papers (2021-01-14T03:45:01Z)
- Revisiting 3D Context Modeling with Supervised Pre-training for Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z)
- Probabilistic 3D surface reconstruction from sparse MRI information [58.14653650521129]
We present a novel probabilistic deep learning approach for concurrent 3D surface reconstruction from sparse 2D MR image data and aleatoric uncertainty prediction.
Our method is capable of reconstructing large surface meshes from three quasi-orthogonal MR imaging slices from limited training sets.
arXiv Detail & Related papers (2020-10-05T14:18:52Z)
- Modelling the Distribution of 3D Brain MRI using a 2D Slice VAE [66.63629641650572]
We propose a method to model 3D MR brain volumes distribution by combining a 2D slice VAE with a Gaussian model that captures the relationships between slices.
We also introduce a novel evaluation method for generated volumes that quantifies how well their segmentations match those of true brain anatomy.
arXiv Detail & Related papers (2020-07-09T13:23:15Z)
- Stochastic Segmentation Networks: Modelling Spatially Correlated Aleatoric Uncertainty [32.33791302617957]
We introduce stochastic segmentation networks (SSNs), an efficient probabilistic method for modelling aleatoric uncertainty with any image segmentation network architecture.
SSNs can generate multiple spatially coherent hypotheses for a single image.
We tested our method on the segmentation of real-world medical data, including lung nodules in 2D CT and brain tumours in 3D multimodal MRI scans.
arXiv Detail & Related papers (2020-06-10T18:06:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.