A cascaded deep network for automated tumor detection and segmentation
in clinical PET imaging of diffuse large B-cell lymphoma
- URL: http://arxiv.org/abs/2403.07092v1
- Date: Mon, 11 Mar 2024 18:36:55 GMT
- Authors: Shadab Ahamed, Natalia Dubljevic, Ingrid Bloise, Claire Gowdy, Patrick
Martineau, Don Wilson, Carlos F. Uribe, Arman Rahmim, and Fereshteh
Yousefirizi
- Abstract summary: We develop and validate a fast and efficient three-step cascaded deep learning model for automated detection and segmentation of DLBCL tumors from PET images.
Our model is more effective than a single end-to-end network for segmentation of tumors in whole-body PET images.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate detection and segmentation of diffuse large B-cell lymphoma (DLBCL)
from PET images has important implications for estimation of total metabolic
tumor volume, radiomics analysis, surgical intervention and radiotherapy.
Manual segmentation of tumors in whole-body PET images is time-consuming,
labor-intensive and operator-dependent. In this work, we develop and validate a
fast and efficient three-step cascaded deep learning model for automated
detection and segmentation of DLBCL tumors from PET images. As compared to a
single end-to-end network for segmentation of tumors in whole-body PET images,
our three-step model is more effective (improves 3D Dice score from 58.9% to
78.1%) since each of its specialized modules, namely the slice classifier, the
tumor detector and the tumor segmentor, can be trained independently to a high
degree of skill to carry out a specific task, rather than a single network with
suboptimal performance on overall segmentation.
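The cascade described in the abstract can be sketched as follows. This is a minimal illustration of the three-step idea (slice-level triage, region detection, region-level segmentation) together with the 3D Dice score used to compare models; the function names, bounding-box format, and control flow are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """3D Dice coefficient between two binary volumes."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

def cascaded_segmentation(volume, slice_classifier, tumor_detector, tumor_segmentor):
    """Three-step cascade: keep only slices flagged as tumor-bearing,
    localize candidate regions within them, then segment inside each region."""
    mask = np.zeros(volume.shape, dtype=bool)
    for z, axial_slice in enumerate(volume):
        if not slice_classifier(axial_slice):                  # step 1: slice triage
            continue
        for (r0, r1, c0, c1) in tumor_detector(axial_slice):   # step 2: bounding boxes
            roi = axial_slice[r0:r1, c0:c1]
            mask[z, r0:r1, c0:c1] |= tumor_segmentor(roi)      # step 3: voxel mask
    return mask
```

Because each stage only sees the output of the previous one, each module can be trained and validated in isolation, which is the stated reason for the cascade's Dice improvement over a single end-to-end network.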
Related papers
- AutoPET III Challenge: Tumor Lesion Segmentation using ResEnc-Model Ensemble [1.3467243219009812]
We trained a 3D Residual encoder U-Net within the no new U-Net framework to generalize the performance of automatic lesion segmentation.
We leveraged test-time augmentations and other post-processing techniques to enhance tumor lesion segmentation.
Our team currently holds the top position in the autoPET III challenge and outperformed the challenge baseline model on the preliminary test set with a Dice score of 0.9627.
arXiv Detail & Related papers (2024-09-19T20:18:39Z) - Autopet III challenge: Incorporating anatomical knowledge into nnUNet for lesion segmentation in PET/CT [4.376648893167674]
The autoPET III Challenge focuses on advancing automated segmentation of tumor lesions in PET/CT images.
We developed a classifier that identifies the tracer of the given PET/CT based on the Maximum Intensity Projection of the PET scan.
Our final submission achieves cross-validation Dice scores of 76.90% and 61.33% for the publicly available FDG and PSMA datasets.
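The tracer-identification step above rests on the Maximum Intensity Projection (MIP), which collapses a 3D PET volume into a 2D image by taking the maximum voxel value along one axis. A minimal sketch (the axis choice and array layout are illustrative assumptions):

```python
import numpy as np

def maximum_intensity_projection(volume, axis=1):
    """Collapse a 3D volume (z, y, x) into a 2D image by taking the
    maximum voxel value along the chosen axis."""
    return np.asarray(volume).max(axis=axis)
```

A classifier can then operate on this 2D projection instead of the full volume, since tracer-specific uptake patterns remain visible in the MIP.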
arXiv Detail & Related papers (2024-09-18T17:16:57Z) - Towards Generalizable Tumor Synthesis [48.45704270448412]
Tumor synthesis enables the creation of artificial tumors in medical images, facilitating the training of AI models for tumor detection and segmentation.
This paper makes a stride toward generalizable tumor synthesis by leveraging a critical observation.
We find that generative AI models, e.g., diffusion models, can create realistic tumors that generalize to a range of organs even when trained on a limited number of tumor examples from only one organ.
arXiv Detail & Related papers (2024-02-29T18:57:39Z) - Contrastive Diffusion Model with Auxiliary Guidance for Coarse-to-Fine
PET Reconstruction [62.29541106695824]
This paper presents a coarse-to-fine PET reconstruction framework that consists of a coarse prediction module (CPM) and an iterative refinement module (IRM).
By delegating most of the computational overhead to the CPM, the overall sampling speed of our method can be significantly improved.
Two additional strategies, i.e., an auxiliary guidance strategy and a contrastive diffusion strategy, are proposed and integrated into the reconstruction process.
arXiv Detail & Related papers (2023-08-20T04:10:36Z) - Breast Ultrasound Tumor Classification Using a Hybrid Multitask
CNN-Transformer Network [63.845552349914186]
Capturing global contextual information plays a critical role in breast ultrasound (BUS) image classification.
Vision Transformers have an improved capability of capturing global contextual information but may distort the local image patterns due to the tokenization operations.
In this study, we proposed a hybrid multitask deep neural network called Hybrid-MT-ESTAN, designed to perform BUS tumor classification and segmentation.
arXiv Detail & Related papers (2023-08-04T01:19:32Z) - 3DSAM-adapter: Holistic adaptation of SAM from 2D to 3D for promptable tumor segmentation [52.699139151447945]
We propose a novel adaptation method for transferring the segment anything model (SAM) from 2D to 3D for promptable medical image segmentation.
Our model can outperform domain state-of-the-art medical image segmentation models on 3 out of 4 tasks, specifically by 8.25%, 29.87%, and 10.11% for kidney tumor, pancreas tumor, and colon cancer segmentation, respectively, and achieves similar performance for liver tumor segmentation.
arXiv Detail & Related papers (2023-06-23T12:09:52Z) - Whole-body tumor segmentation of 18F -FDG PET/CT using a cascaded and
ensembled convolutional neural networks [2.735686397209314]
The goal of this study was to report the performance of a deep neural network designed to automatically segment regions suspected of cancer in whole-body 18F-FDG PET/CT images.
A cascaded approach was developed in which a stacked ensemble of 3D U-Net CNNs processed the PET/CT images at a fixed 6 mm resolution.
arXiv Detail & Related papers (2022-10-14T19:25:56Z) - EMT-NET: Efficient multitask network for computer-aided diagnosis of
breast cancer [58.720142291102135]
We propose an efficient and light-weighted learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively.
arXiv Detail & Related papers (2022-01-13T05:24:40Z) - Evidential segmentation of 3D PET/CT images [20.65495780362289]
A segmentation method based on belief functions is proposed to segment lymphomas in 3D PET/CT images.
The architecture is composed of a feature extraction module and an evidential segmentation (ES) module.
The method was evaluated on a database of 173 patients with diffuse large B-cell lymphoma.
arXiv Detail & Related papers (2021-04-27T16:06:27Z) - Multimodal Spatial Attention Module for Targeting Multimodal PET-CT Lung
Tumor Segmentation [11.622615048002567]
Multimodal spatial attention module (MSAM) learns to emphasize regions related to tumors.
MSAM can be applied to common backbone architectures and trained end-to-end.
arXiv Detail & Related papers (2020-07-29T10:27:22Z) - Spectral-Spatial Recurrent-Convolutional Networks for In-Vivo
Hyperspectral Tumor Type Classification [49.32653090178743]
We demonstrate the feasibility of in-vivo tumor type classification using hyperspectral imaging and deep learning.
Our best model achieves an AUC of 76.3%, significantly outperforming previous conventional and deep learning methods.
arXiv Detail & Related papers (2020-07-02T12:00:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.