Lymphoma segmentation from 3D PET-CT images using a deep evidential
network
- URL: http://arxiv.org/abs/2201.13078v1
- Date: Mon, 31 Jan 2022 09:34:38 GMT
- Title: Lymphoma segmentation from 3D PET-CT images using a deep evidential
network
- Authors: Ling Huang, Su Ruan, Pierre Decazes, Thierry Denoeux
- Abstract summary: An automatic evidential segmentation method is proposed to segment lymphomas from 3D Positron Emission Tomography (PET) and Computed Tomography (CT) images.
The architecture is composed of a deep feature-extraction module and an evidential layer.
The proposed combination of deep feature extraction and evidential segmentation is shown to outperform the baseline UNet model.
- Score: 20.65641432056608
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An automatic evidential segmentation method based on Dempster-Shafer theory
and deep learning is proposed to segment lymphomas from three-dimensional
Positron Emission Tomography (PET) and Computed Tomography (CT) images. The
architecture is composed of a deep feature-extraction module and an evidential
layer. The feature extraction module uses an encoder-decoder framework to
extract semantic feature vectors from 3D inputs. The evidential layer then uses
prototypes in the feature space to compute a belief function at each voxel
quantifying the uncertainty about the presence or absence of a lymphoma at this
location. Two evidential layers are compared, based on different ways of using
distances to prototypes for computing mass functions. The whole model is
trained end-to-end by minimizing the Dice loss function. The proposed
combination of deep feature extraction and evidential segmentation is shown to
outperform the baseline UNet model as well as three other state-of-the-art
models on a dataset of 173 patients.
Related papers
- Multi-objective point cloud autoencoders for explainable myocardial
infarction prediction [4.65840670565844]
Myocardial infarction is one of the most common causes of death in the world.
Image-based biomarkers fail to capture more complex patterns in the heart's 3D anatomy.
We present the multi-objective point cloud autoencoder as a novel geometric deep learning approach for explainable infarction prediction.
arXiv Detail & Related papers (2023-07-20T16:45:16Z)
- Unsupervised Discovery of 3D Hierarchical Structure with Generative
Diffusion Features [22.657405088126012]
We show that features of diffusion models capture different hierarchy levels in 3D biomedical images.
We train a predictive unsupervised segmentation network that encourages the decomposition of 3D volumes into meaningful nested subvolumes.
Our models achieve better performance than prior unsupervised structure discovery approaches on challenging synthetic datasets and on a real-world brain tumor MRI dataset.
arXiv Detail & Related papers (2023-04-28T19:37:17Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image
Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- PAENet: A Progressive Attention-Enhanced Network for 3D to 2D Retinal
Vessel Segmentation [0.0]
3D to 2D retinal vessel segmentation is a challenging problem in Optical Coherence Tomography Angiography (OCTA) images.
We propose a Progressive Attention-Enhanced Network (PAENet) based on attention mechanisms to extract rich feature representation.
Our proposed algorithm achieves state-of-the-art performance compared with previous methods.
arXiv Detail & Related papers (2021-08-26T10:27:25Z)
- Aug3D-RPN: Improving Monocular 3D Object Detection by Synthetic Images
with Virtual Depth [64.29043589521308]
We propose a rendering module to augment the training data by synthesizing images with virtual-depths.
The rendering module takes as input an RGB image and its corresponding sparse depth image, and outputs a variety of photo-realistic synthetic images.
Besides, we introduce an auxiliary module to improve the detection model by jointly optimizing it through a depth estimation task.
arXiv Detail & Related papers (2021-07-28T11:00:47Z)
- Automatic size and pose homogenization with spatial transformer network
to improve and accelerate pediatric segmentation [51.916106055115755]
We propose a new CNN architecture that is pose- and scale-invariant thanks to the use of a Spatial Transformer Network (STN).
Our architecture is composed of three sequential modules that are estimated together during training.
We test the proposed method on kidney and renal tumor segmentation in abdominal pediatric CT scans.
arXiv Detail & Related papers (2021-07-06T14:50:03Z)
- Evidential segmentation of 3D PET/CT images [20.65495780362289]
A segmentation method based on belief functions is proposed to segment lymphomas in 3D PET/CT images.
The architecture is composed of a feature extraction module and an evidential segmentation (ES) module.
The method was evaluated on a database of 173 patients with diffuse large B-cell lymphoma.
arXiv Detail & Related papers (2021-04-27T16:06:27Z)
- Revisiting 3D Context Modeling with Supervised Pre-training for
Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z)
- Hierarchical Amortized Training for Memory-efficient High Resolution 3D
GAN [52.851990439671475]
We propose a novel end-to-end GAN architecture that can generate high-resolution 3D images.
We achieve this goal by using different configurations between training and inference.
Experiments on 3D thorax CT and brain MRI demonstrate that our approach outperforms the state of the art in image generation.
arXiv Detail & Related papers (2020-08-05T02:33:04Z)
- Cylindrical Convolutional Networks for Joint Object Detection and
Viewpoint Estimation [76.21696417873311]
We introduce a learnable module, cylindrical convolutional networks (CCNs), that exploits a cylindrical representation of a convolutional kernel defined in 3D space.
CCNs extract a view-specific feature through a view-specific convolutional kernel to predict object category scores at each viewpoint.
Our experiments demonstrate the effectiveness of the cylindrical convolutional networks on joint object detection and viewpoint estimation.
arXiv Detail & Related papers (2020-03-25T10:24:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.