Vision Transformer-based Multimodal Feature Fusion Network for Lymphoma
Segmentation on PET/CT Images
- URL: http://arxiv.org/abs/2402.02349v1
- Date: Sun, 4 Feb 2024 05:25:12 GMT
- Authors: Huan Huang, Liheng Qiu, Shenmiao Yang, Longxi Li, Jiaofen Nan, Yanting
Li, Chuang Han, Fubao Zhu, Chen Zhao, Weihua Zhou
- Abstract summary: We aim to develop an accurate method for lymphoma segmentation with 18F-Fluorodeoxyglucose positron emission tomography (PET) and computed tomography (CT) images.
Our lymphoma segmentation approach combines a vision transformer with dual encoders, fusing PET and CT data via a multimodal cross-attention fusion (MMCAF) module.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Background: Diffuse large B-cell lymphoma (DLBCL) segmentation is a challenge
in medical image analysis. Traditional segmentation methods for lymphoma
struggle with the complex patterns and variable presentation of DLBCL lesions.
Objective: We aim to develop an accurate method for lymphoma segmentation with
18F-Fluorodeoxyglucose positron emission tomography (PET) and computed
tomography (CT) images. Methods: Our lymphoma segmentation approach combines a
vision transformer with dual encoders, fusing PET and CT data via a
multimodal cross-attention fusion (MMCAF) module. In this study, PET and CT
data from 165 DLBCL patients were analyzed. A 5-fold cross-validation was
employed to evaluate the performance and generalization ability of our method.
Ground truths were annotated by experienced nuclear medicine experts. We
calculated the total metabolic tumor volume (TMTV) and performed a statistical
analysis on our results. Results: The proposed method exhibited accurate
performance in DLBCL lesion segmentation, achieving a Dice similarity
coefficient of 0.9173$\pm$0.0071, a Hausdorff distance of 2.71$\pm$0.25mm, a
sensitivity of 0.9462$\pm$0.0223, and a specificity of 0.9986$\pm$0.0008.
Additionally, a Pearson correlation coefficient of 0.9030$\pm$0.0179 and an
R-squared of 0.8586$\pm$0.0173 were observed when comparing TMTV measured from
manual annotations with TMTV derived from our segmentation results. Conclusion: This study
highlights the advantages of MMCAF and vision transformer for lymphoma
segmentation using PET and CT, offering great promise for computer-aided
lymphoma diagnosis and treatment.
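The dual-encoder design described in the abstract fuses modalities by letting one branch's features attend to the other's, i.e. cross-attention in which queries come from the PET encoder and keys/values from the CT encoder. A minimal NumPy sketch of that idea follows; the projection weights, token counts, and dimensions are illustrative placeholders, not the paper's actual MMCAF module:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(pet_tokens, ct_tokens, d_k):
    # Hypothetical random projections; in a real model these are learned.
    rng = np.random.default_rng(0)
    Wq = rng.standard_normal((pet_tokens.shape[-1], d_k))
    Wk = rng.standard_normal((ct_tokens.shape[-1], d_k))
    Wv = rng.standard_normal((ct_tokens.shape[-1], d_k))
    Q = pet_tokens @ Wq              # queries from the PET branch
    K = ct_tokens @ Wk               # keys from the CT branch
    V = ct_tokens @ Wv               # values from the CT branch
    attn = softmax(Q @ K.T / np.sqrt(d_k))  # PET tokens attend over CT tokens
    return attn @ V                  # PET features enriched with CT context

pet = np.random.default_rng(1).standard_normal((16, 64))  # 16 tokens, 64-dim
ct = np.random.default_rng(2).standard_normal((16, 64))
fused = cross_attention(pet, ct, d_k=32)
print(fused.shape)  # (16, 32)
```

In a full dual-encoder transformer this operation would typically be applied symmetrically (CT attending to PET as well) and wrapped with residual connections and normalization.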
Related papers
- Analysis of the BraTS 2023 Intracranial Meningioma Segmentation Challenge [44.586530244472655]
We describe the design and results from the BraTS 2023 Intracranial Meningioma Challenge.
The BraTS Meningioma Challenge differed from prior BraTS Glioma challenges in that it focused on meningiomas.
The top-ranked team had a lesion-wise median Dice similarity coefficient (DSC) of 0.976, 0.976, and 0.964 for enhancing tumor, tumor core, and whole tumor, respectively.
arXiv Detail & Related papers (2024-05-16T03:23:57Z) - Automatic Quantification of Serial PET/CT Images for Pediatric Hodgkin Lymphoma Patients Using a Longitudinally-Aware Segmentation Network [7.225391135995692]
A longitudinally-aware segmentation network (LAS-Net) can quantify serial PET/CT images of pediatric Hodgkin lymphoma patients.
LAS-Net detected residual lymphoma in PET2 with an F1 score of 0.606.
LAS-Net's measurements of qPET, $\Delta$SUVmax, MTV and TLG were strongly correlated with physician measurements.
arXiv Detail & Related papers (2024-04-12T17:20:57Z) - A Two-Stage Generative Model with CycleGAN and Joint Diffusion for
MRI-based Brain Tumor Detection [41.454028276986946]
We propose a novel framework, the Two-Stage Generative Model (TSGM), to improve brain tumor detection and segmentation.
CycleGAN is trained on unpaired data to generate abnormal images from healthy images as data prior.
VE-JP is implemented to reconstruct healthy images using synthetic paired abnormal images as a guide.
arXiv Detail & Related papers (2023-11-06T12:58:26Z) - Whole-body tumor segmentation of 18F -FDG PET/CT using a cascaded and
ensembled convolutional neural networks [2.735686397209314]
The goal of this study was to report the performance of a deep neural network designed to automatically segment regions suspected of cancer in whole-body 18F-FDG PET/CT images.
A cascaded approach was developed in which a stacked ensemble of 3D U-Net CNNs processed the PET/CT images at a fixed 6 mm resolution.
arXiv Detail & Related papers (2022-10-14T19:25:56Z) - Corneal endothelium assessment in specular microscopy images with Fuchs'
dystrophy via deep regression of signed distance maps [48.498376125522114]
This paper proposes a UNet-based segmentation approach that requires minimal post-processing.
It achieves reliable CE morphometric assessment and guttae identification across all degrees of Fuchs' dystrophy.
arXiv Detail & Related papers (2022-10-13T15:34:20Z) - PriorNet: lesion segmentation in PET-CT including prior tumor appearance
information [0.0]
We propose a two-step approach to improve the segmentation performance for tumoral lesions in PET-CT images.
The first step generates a prior tumor appearance map from the PET-CT volumes, regarded as prior tumor information.
The second step, consisting of a standard U-Net, receives the prior tumor appearance map and PET-CT images to generate the lesion mask.
arXiv Detail & Related papers (2022-10-05T12:31:42Z) - Deep PET/CT fusion with Dempster-Shafer theory for lymphoma segmentation [17.623576885481747]
Lymphoma detection and segmentation from PET/CT volumes are crucial for surgical indication and radiotherapy.
We propose a lymphoma segmentation model using a U-Net with an evidential PET/CT fusion layer.
Our method achieves accurate segmentation results, with a Dice score of 0.726, without any user interaction.
arXiv Detail & Related papers (2021-08-11T19:24:40Z) - Evidential segmentation of 3D PET/CT images [20.65495780362289]
A segmentation method based on belief functions is proposed to segment lymphomas in 3D PET/CT images.
The architecture is composed of a feature extraction module and an evidential segmentation (ES) module.
The method was evaluated on a database of 173 patients with diffuse large B-cell lymphoma.
arXiv Detail & Related papers (2021-04-27T16:06:27Z) - M3Lung-Sys: A Deep Learning System for Multi-Class Lung Pneumonia
Screening from CT Imaging [85.00066186644466]
We propose a Multi-task Multi-slice Deep Learning System (M3Lung-Sys) for multi-class lung pneumonia screening from CT imaging.
In addition to distinguishing COVID-19 from Healthy, H1N1, and CAP cases, our M3Lung-Sys can also locate the areas of relevant lesions.
arXiv Detail & Related papers (2020-10-07T06:22:24Z) - Machine Learning Automatically Detects COVID-19 using Chest CTs in a
Large Multicenter Cohort [43.99203831722203]
Our retrospective study obtained 2096 chest CTs from 16 institutions.
A metric-based approach for classification of COVID-19 used interpretable features.
A deep learning-based classifier differentiated COVID-19 via 3D features extracted from CT attenuation and probability distribution of airspace opacities.
arXiv Detail & Related papers (2020-06-09T00:40:35Z) - Automated Quantification of CT Patterns Associated with COVID-19 from
Chest CT [48.785596536318884]
The proposed method takes as input a non-contrasted chest CT and segments the lesions, lungs, and lobes in three dimensions.
The method outputs two combined measures of the severity of lung and lobe involvement, quantifying both the extent of COVID-19 abnormalities and presence of high opacities.
Evaluation of the algorithm is reported on CTs of 200 participants (100 COVID-19 confirmed patients and 100 healthy controls) from institutions from Canada, Europe and the United States.
arXiv Detail & Related papers (2020-04-02T21:49:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.