Evidential segmentation of 3D PET/CT images
- URL: http://arxiv.org/abs/2104.13293v1
- Date: Tue, 27 Apr 2021 16:06:27 GMT
- Title: Evidential segmentation of 3D PET/CT images
- Authors: Ling Huang, Su Ruan, Pierre Decazes, Thierry Denoeux
- Abstract summary: A segmentation method based on belief functions is proposed to segment lymphomas in 3D PET/CT images.
The architecture is composed of a feature extraction module and an evidential segmentation (ES) module.
The method was evaluated on a database of 173 patients with diffuse large B-cell lymphoma.
- Score: 20.65495780362289
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: PET and CT are two modalities widely used in medical image analysis.
Accurately detecting and segmenting lymphomas from these two imaging modalities
are critical tasks for cancer staging and radiotherapy planning. However, this
task is still challenging due to the complexity of PET/CT images and the
computational cost of processing 3D data. In this paper, a segmentation method based
on belief functions is proposed to segment lymphomas in 3D PET/CT images. The
architecture is composed of a feature extraction module and an evidential
segmentation (ES) module. The ES module outputs not only segmentation results
(binary maps indicating the presence or absence of lymphoma in each voxel) but
also uncertainty maps quantifying the classification uncertainty. The whole
model is optimized by minimizing Dice and uncertainty loss functions to
increase segmentation accuracy. The method was evaluated on a database of 173
patients with diffuse large B-cell lymphoma. Quantitative and qualitative
results show that our method outperforms the state-of-the-art methods.
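As a rough illustration of the evidential idea, a belief-function segmentation head assigns each voxel a Dempster-Shafer mass function over {lymphoma}, {background} and the whole frame, with the mass on the frame acting as the uncertainty map. The sketch below uses a hypothetical evidence parameterisation and a soft Dice loss; it is not the authors' ES module, only a minimal illustration of the principle:

```python
def evidential_masses(evidence_l, evidence_b):
    """Map non-negative evidence for 'lymphoma' (L) and 'background' (B)
    to a Dempster-Shafer mass function over {L}, {B} and the full frame
    Omega. Hypothetical parameterisation: one unit of mass is reserved
    for ignorance, so low total evidence yields high uncertainty."""
    total = evidence_l + evidence_b + 1.0
    m_l = evidence_l / total
    m_b = evidence_b / total
    m_omega = 1.0 / total          # residual mass = voxel uncertainty
    return m_l, m_b, m_omega

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over flattened voxel lists (illustrative)."""
    inter = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)
```

In this sketch, the per-voxel `m_omega` values form the uncertainty map, while thresholding `m_l` would give the binary segmentation map described in the abstract.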
Related papers
- Multi-modal Evidential Fusion Network for Trusted PET/CT Tumor Segmentation [5.839660501978193]
The quality of PET and CT images varies widely in clinical settings, which leads to uncertainty in the modality information extracted by networks.
This paper proposes a novel Multi-modal Evidential Fusion Network (MEFN) comprising a Cross-Modal Feature Learning (CFL) module and a Multi-modal Trusted Fusion (MTF) module.
Our model can provide radiologists with credible uncertainty estimates for the segmentation results, supporting their decision to accept or reject the automatic segmentations.
arXiv Detail & Related papers (2024-06-26T13:14:24Z)
- A Two-Stage Generative Model with CycleGAN and Joint Diffusion for MRI-based Brain Tumor Detection [41.454028276986946]
We propose a novel framework Two-Stage Generative Model (TSGM) to improve brain tumor detection and segmentation.
CycleGAN is trained on unpaired data to generate abnormal images from healthy images as a data prior.
VE-JP is implemented to reconstruct healthy images using synthetic paired abnormal images as a guide.
arXiv Detail & Related papers (2023-11-06T12:58:26Z)
- 3DSAM-adapter: Holistic adaptation of SAM from 2D to 3D for promptable tumor segmentation [52.699139151447945]
We propose a novel adaptation method for transferring the segment anything model (SAM) from 2D to 3D for promptable medical image segmentation.
Our model can outperform state-of-the-art medical image segmentation models on 3 out of 4 tasks, by 8.25%, 29.87%, and 10.11% for kidney tumor, pancreas tumor, and colon cancer segmentation respectively, and achieves similar performance for liver tumor segmentation.
arXiv Detail & Related papers (2023-06-23T12:09:52Z)
- Diff-UNet: A Diffusion Embedded Network for Volumetric Segmentation [41.608617301275935]
We propose a novel end-to-end framework, called Diff-UNet, for medical volumetric segmentation.
Our approach integrates the diffusion model into a standard U-shaped architecture to extract semantic information from the input volume effectively.
We evaluate our method on three datasets, including multimodal brain tumors in MRI, liver tumors, and multi-organ CT volumes.
arXiv Detail & Related papers (2023-03-18T04:06:18Z)
- Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z)
- Whole-Body Lesion Segmentation in 18F-FDG PET/CT [11.662584140924725]
The proposed model is designed on the basis of the joint 2D and 3D nnUNET architecture to predict lesions across the whole body.
We evaluate the proposed method in the context of the AutoPet Challenge, which measures lesion segmentation performance using the metrics of Dice score, false-positive volume, and false-negative volume.
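For reference, these three metrics can be computed from binary voxel maps as follows (a minimal sketch; the function and variable names are illustrative, not the challenge's official evaluation code):

```python
def segmentation_metrics(pred, target, voxel_volume=1.0):
    """Dice score plus false-positive and false-negative volumes for
    binary voxel lists. `voxel_volume` converts voxel counts to a
    physical volume (e.g. in ml), an assumption of this sketch."""
    tp = sum(1 for p, t in zip(pred, target) if p and t)
    fp = sum(1 for p, t in zip(pred, target) if p and not t)
    fn = sum(1 for p, t in zip(pred, target) if not p and t)
    dice = 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
    return dice, fp * voxel_volume, fn * voxel_volume
```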
arXiv Detail & Related papers (2022-09-16T10:49:53Z)
- Improving Classification Model Performance on Chest X-Rays through Lung Segmentation [63.45024974079371]
We propose a deep learning approach to enhance abnormal chest X-ray (CXR) identification performance through lung segmentation.
Our approach is designed in a cascaded manner and incorporates two modules: a deep neural network with criss-cross attention modules (XLSor) for localizing the lung region in CXR images, and a CXR classification model with a self-supervised momentum contrast (MoCo) backbone pre-trained on large-scale CXR data sets.
arXiv Detail & Related papers (2022-02-22T15:24:06Z)
- Lymphoma segmentation from 3D PET-CT images using a deep evidential network [20.65641432056608]
An automatic evidential segmentation method is proposed to segment lymphomas from 3D Positron Emission Tomography (PET) and Computed Tomography (CT) images.
The architecture is composed of a deep feature-extraction module and an evidential layer.
The proposed combination of deep feature extraction and evidential segmentation is shown to outperform the baseline UNet model.
arXiv Detail & Related papers (2022-01-31T09:34:38Z)
- Deep PET/CT fusion with Dempster-Shafer theory for lymphoma segmentation [17.623576885481747]
Lymphoma detection and segmentation from PET/CT volumes are crucial for surgical indication and radiotherapy.
We propose a lymphoma segmentation model using a UNet with an evidential PET/CT fusion layer.
Our method achieves accurate segmentation results with a Dice score of 0.726, without any user interaction.
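Evidential fusion of this kind rests on Dempster's rule of combination. The sketch below combines a PET-derived and a CT-derived mass function over a binary frame, with keys 'L' (lymphoma), 'B' (background) and 'LB' for the whole frame; the dictionary representation is illustrative, not the paper's implementation:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions over the
    frame {'L', 'B'}, with 'LB' denoting the whole frame (ignorance).
    Mass on conflicting focal sets ({L} vs {B}) is discarded and the
    remainder renormalised by k = 1 - conflict."""
    conflict = m1['L'] * m2['B'] + m1['B'] * m2['L']
    k = 1.0 - conflict
    return {
        'L':  (m1['L'] * m2['L'] + m1['L'] * m2['LB'] + m1['LB'] * m2['L']) / k,
        'B':  (m1['B'] * m2['B'] + m1['B'] * m2['LB'] + m1['LB'] * m2['B']) / k,
        'LB': (m1['LB'] * m2['LB']) / k,
    }
```

Note that combining with the vacuous mass function (all mass on 'LB') leaves the other source unchanged, so a totally uninformative modality cannot corrupt the fused result, which is the appeal of this fusion scheme.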
arXiv Detail & Related papers (2021-08-11T19:24:40Z)
- Revisiting 3D Context Modeling with Supervised Pre-training for Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z)
- Multimodal Spatial Attention Module for Targeting Multimodal PET-CT Lung Tumor Segmentation [11.622615048002567]
Multimodal spatial attention module (MSAM) learns to emphasize regions related to tumors.
MSAM can be applied to common backbone architectures and trained end-to-end.
arXiv Detail & Related papers (2020-07-29T10:27:22Z)
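A multimodal spatial attention of the kind described in the last entry can be sketched as a PET-derived per-voxel gate applied to CT features. The plain-Python sketch below uses nested `[C][H][W]` lists and a 1x1 "convolution" over PET channels; the real MSAM is a learned convolutional module, so all names and shapes here are illustrative assumptions:

```python
import math

def spatial_attention(pet_feat, ct_feat, w, b):
    """Minimal multimodal spatial-attention sketch: a per-channel
    weighted sum over PET features (a 1x1 convolution) produces a
    per-voxel logit, squashed by a sigmoid into a gate in (0, 1)
    that reweights the CT features at the same location."""
    C, H, W = len(pet_feat), len(pet_feat[0]), len(pet_feat[0][0])
    gated = [[[0.0] * W for _ in range(H)] for _ in range(C)]
    for i in range(H):
        for j in range(W):
            logit = b + sum(w[c] * pet_feat[c][i][j] for c in range(C))
            att = 1.0 / (1.0 + math.exp(-logit))   # sigmoid attention gate
            for c in range(C):
                gated[c][i][j] = ct_feat[c][i][j] * att
    return gated
```

Because the gate multiplies the CT features elementwise, the module can be dropped after any backbone feature map and trained end-to-end, which matches the plug-in usage described in the abstract.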
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.