Deep PET/CT fusion with Dempster-Shafer theory for lymphoma segmentation
- URL: http://arxiv.org/abs/2108.05422v1
- Date: Wed, 11 Aug 2021 19:24:40 GMT
- Title: Deep PET/CT fusion with Dempster-Shafer theory for lymphoma segmentation
- Authors: Ling Huang, Thierry Denoeux, David Tonnelet, Pierre Decazes, and Su Ruan
- Abstract summary: Lymphoma detection and segmentation from PET/CT volumes are crucial for surgical indication and radiotherapy.
We propose a lymphoma segmentation model using a UNet with an evidential PET/CT fusion layer.
Our method achieves accurate segmentation results with a Dice score of 0.726, without any user interaction.
- Score: 17.623576885481747
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lymphoma detection and segmentation from whole-body Positron Emission
Tomography/Computed Tomography (PET/CT) volumes are crucial for surgical
indication and radiotherapy. Designing automatic segmentation methods capable
of effectively exploiting the information from PET and CT as well as resolving
their uncertainty remains a challenge. In this paper, we propose a lymphoma
segmentation model using a UNet with an evidential PET/CT fusion layer.
Single-modality networks are trained separately to obtain initial segmentation
maps, and an evidential fusion layer is proposed to fuse the two pieces of
evidence using Dempster-Shafer theory (DST). Moreover, a multi-task loss
function is proposed: in addition to the Dice loss for PET and CT segmentation,
a loss function based on the concordance between the two segmentations is added
to constrain the final segmentation. We evaluate our proposal on a database of
polycentric PET/CT volumes of patients treated for lymphoma, delineated by
experts. Our method achieves accurate segmentation results with a Dice score of
0.726, without any user interaction. Quantitative results show that our method
is superior to state-of-the-art methods.
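The fusion step and the multi-task loss lend themselves to a compact sketch. The following PyTorch code is a minimal illustration, not the authors' implementation: the fixed ignorance mass, the pignistic decision rule, and the exact form of the concordance term are assumptions. It fuses per-voxel PET and CT foreground probabilities with Dempster's rule on the frame {tumor, background} and combines Dice losses with a concordance term.

```python
import torch

def to_masses(p, ignorance=0.1):
    # Map a per-voxel foreground probability to a mass function on
    # {tumor}, {background} and the full frame (ignorance). The fixed
    # ignorance mass is an assumption for illustration.
    m_ign = torch.full_like(p, ignorance)
    return (1 - ignorance) * p, (1 - ignorance) * (1 - p), m_ign

def dempster_fuse(p_pet, p_ct):
    # Dempster's rule on the frame {tumor, background}: multiply masses,
    # discard the conflicting combinations, renormalize.
    t1, b1, i1 = to_masses(p_pet)
    t2, b2, i2 = to_masses(p_ct)
    conflict = t1 * b2 + b1 * t2                  # mass of the empty set
    norm = (1.0 - conflict).clamp_min(1e-8)
    m_t = (t1 * t2 + t1 * i2 + i1 * t2) / norm    # mass on {tumor}
    m_i = (i1 * i2) / norm                        # mass on the full frame
    return m_t + 0.5 * m_i                        # pignistic tumor probability

def dice_loss(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def multi_task_loss(p_pet, p_ct, target, lam=1.0):
    # Dice terms for each modality and the fused map, plus a concordance
    # term encouraging the two single-modality segmentations to agree
    # (a soft Dice between them; the paper's exact form may differ).
    p_fused = dempster_fuse(p_pet, p_ct)
    seg = dice_loss(p_pet, target) + dice_loss(p_ct, target) \
        + dice_loss(p_fused, target)
    return seg + lam * dice_loss(p_pet, p_ct)
```

On a two-class frame, Dempster's rule reduces to these closed-form products; the pignistic transform then splits the residual ignorance mass evenly between tumor and background to obtain a decision probability.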
Related papers
- Multi-modal Evidential Fusion Network for Trusted PET/CT Tumor Segmentation [5.839660501978193]
The quality of PET and CT images varies widely in clinical settings, which leads to uncertainty in the modality information extracted by networks.
This paper proposes a novel Multi-modal Evidential Fusion Network (MEFN) comprising a Cross-Modal Feature Learning (CFL) module and a Multi-modal Trusted Fusion (MTF) module.
Our model provides radiologists with credible uncertainty estimates for the segmentation results, supporting their decision to accept or reject the automatic segmentation.
arXiv Detail & Related papers (2024-06-26T13:14:24Z) - Self-calibrated convolution towards glioma segmentation [45.74830585715129]
- Self-calibrated convolution towards glioma segmentation [45.74830585715129]
We evaluate self-calibrated convolutions in different parts of the nnU-Net network to demonstrate that self-calibrated modules in skip connections can significantly improve the enhanced-tumor and tumor-core segmentation accuracy.
arXiv Detail & Related papers (2024-02-07T19:51:13Z) - A Two-Stage Generative Model with CycleGAN and Joint Diffusion for
- A Two-Stage Generative Model with CycleGAN and Joint Diffusion for MRI-based Brain Tumor Detection [41.454028276986946]
We propose a novel framework, the Two-Stage Generative Model (TSGM), to improve brain tumor detection and segmentation.
CycleGAN is trained on unpaired data to generate abnormal images from healthy images as a data prior.
VE-JP is implemented to reconstruct healthy images using synthetic paired abnormal images as a guide.
arXiv Detail & Related papers (2023-11-06T12:58:26Z) - Contrastive Diffusion Model with Auxiliary Guidance for Coarse-to-Fine
PET Reconstruction [62.29541106695824]
This paper presents a coarse-to-fine PET reconstruction framework that consists of a coarse prediction module (CPM) and an iterative refinement module (IRM).
By delegating most of the computational overhead to the CPM, the overall sampling speed of our method can be significantly improved.
Two additional strategies, i.e., an auxiliary guidance strategy and a contrastive diffusion strategy, are proposed and integrated into the reconstruction process.
arXiv Detail & Related papers (2023-08-20T04:10:36Z) - Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z) - Exploring Vanilla U-Net for Lesion Segmentation from Whole-body
- Exploring Vanilla U-Net for Lesion Segmentation from Whole-body FDG-PET/CT Scans [16.93163630413171]
Since FDG-PET scans only provide metabolic information, healthy tissue or benign disease with irregular glucose consumption may be mistaken for cancer.
In this paper, we explore the potential of U-Net for lesion segmentation in whole-body FDG-PET/CT scans from three aspects, including network architecture, data preprocessing, and data augmentation.
Our method achieves first place in both preliminary and final leaderboards of the autoPET 2022 challenge.
arXiv Detail & Related papers (2022-10-14T03:37:18Z) - Automatic Tumor Segmentation via False Positive Reduction Network for
Whole-Body Multi-Modal PET/CT Images [12.885308856495353]
In PET/CT image assessment, automatic tumor segmentation is an important step.
Existing methods tend to over-segment the tumor regions and to include areas such as normal high-uptake organs, inflammation, and other infections.
We introduce a false positive reduction network to overcome this limitation.
arXiv Detail & Related papers (2022-09-16T04:01:14Z) - Improving Classification Model Performance on Chest X-Rays through Lung
- Improving Classification Model Performance on Chest X-Rays through Lung Segmentation [63.45024974079371]
We propose a deep learning approach to enhance abnormal chest x-ray (CXR) identification performance through segmentations.
Our approach is designed in a cascaded manner and incorporates two modules: a deep neural network with criss-cross attention modules (XLSor) for localizing the lung region in CXR images, and a CXR classification model with a backbone of a self-supervised momentum contrast (MoCo) model pre-trained on large-scale CXR data sets.
arXiv Detail & Related papers (2022-02-22T15:24:06Z) - CoRSAI: A System for Robust Interpretation of CT Scans of COVID-19
Patients Using Deep Learning [133.87426554801252]
We adopted an approach based on using an ensemble of deep convolutional neural networks for segmentation of lung CT scans.
Using our models, we are able to segment the lesions, evaluate patient dynamics, estimate the relative volume of lungs affected by lesions, and evaluate the lung damage stage.
arXiv Detail & Related papers (2021-05-25T12:06:55Z) - Evidential segmentation of 3D PET/CT images [20.65495780362289]
- Evidential segmentation of 3D PET/CT images [20.65495780362289]
A segmentation method based on belief functions is proposed to segment lymphomas in 3D PET/CT images.
The architecture is composed of a feature extraction module and an evidential segmentation (ES) module.
The method was evaluated on a database of 173 patients with diffuse large B-cell lymphoma.
arXiv Detail & Related papers (2021-04-27T16:06:27Z) - Multimodal Spatial Attention Module for Targeting Multimodal PET-CT Lung
Tumor Segmentation [11.622615048002567]
The multimodal spatial attention module (MSAM) learns to emphasize regions related to tumors.
MSAM can be applied to common backbone architectures and trained end-to-end; a minimal sketch follows this list.
arXiv Detail & Related papers (2020-07-29T10:27:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.