Multimodal Spatial Attention Module for Targeting Multimodal PET-CT Lung
Tumor Segmentation
- URL: http://arxiv.org/abs/2007.14728v2
- Date: Thu, 6 Aug 2020 04:50:27 GMT
- Title: Multimodal Spatial Attention Module for Targeting Multimodal PET-CT Lung
Tumor Segmentation
- Authors: Xiaohang Fu, Lei Bi, Ashnil Kumar, Michael Fulham and Jinman Kim
- Abstract summary: Multimodal spatial attention module (MSAM) learns to emphasize regions related to tumors.
MSAM can be applied to common backbone architectures and trained end-to-end.
- Score: 11.622615048002567
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multimodal positron emission tomography-computed tomography (PET-CT) is used
routinely in the assessment of cancer. PET-CT combines the high sensitivity of
PET for tumor detection with the anatomical information of CT. Tumor segmentation
is a critical element of PET-CT analysis, but there is currently no accurate
automated segmentation method. Segmentation is instead performed manually by
different imaging experts, a process that is labor-intensive and prone to error
and inconsistency. Previous automated segmentation methods largely focused on
fusing information that is extracted separately from the PET and CT modalities,
with the underlying assumption that each modality contains complementary
information. However, these methods do not fully exploit the high PET tumor
sensitivity that can guide the segmentation. We introduce a multimodal spatial
attention module (MSAM) that automatically learns to emphasize regions (spatial
areas) related to tumors and suppress normal regions with physiologically high
uptake. The resulting spatial attention maps are subsequently employed to
target a convolutional neural network (CNN) for segmentation of areas with
higher tumor likelihood. Our MSAM can be applied to common backbone
architectures and trained end-to-end. Our experimental results on two clinical
PET-CT datasets of non-small cell lung cancer (NSCLC) and soft tissue sarcoma
(STS) validate the effectiveness of the MSAM in these different cancer types.
We show that our MSAM, with a conventional U-Net backbone, surpasses the
state-of-the-art lung tumor segmentation approach by a margin of 7.6% in Dice
similarity coefficient (DSC).
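The attention mechanism described in the abstract can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification, not the authors' implementation: learned 1x1-convolution weights collapse the PET branch's feature channels into a single sigmoid-activated spatial map, which then reweights the feature maps fed to the segmentation CNN. The array shapes and the `spatial_attention_map` / `apply_attention` names are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention_map(pet_features, weights, bias=0.0):
    """Collapse PET feature channels into one spatial attention map.

    pet_features: (C, H, W) feature maps from the PET branch.
    weights: (C,) 1x1-convolution weights, one per channel.
    Returns an (H, W) map with values in (0, 1).
    """
    logits = np.tensordot(weights, pet_features, axes=([0], [0])) + bias
    return sigmoid(logits)

def apply_attention(features, attention):
    """Broadcast the (H, W) attention map over every channel of (C, H, W)."""
    return features * attention[np.newaxis, :, :]

# Toy example with random features (purely illustrative shapes).
rng = np.random.default_rng(0)
pet = rng.standard_normal((4, 8, 8))      # PET-branch features
fused = rng.standard_normal((16, 8, 8))   # features entering the segmenter
attn = spatial_attention_map(pet, rng.standard_normal(4))
out = apply_attention(fused, attn)
```

In the paper's end-to-end setting, the 1x1-convolution weights would be learned jointly with the backbone so that the map emphasizes tumor-like PET uptake and suppresses physiologic uptake; here they are random placeholders.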
Related papers
- A cascaded deep network for automated tumor detection and segmentation
in clinical PET imaging of diffuse large B-cell lymphoma [0.41579653852022364]
We develop and validate a fast and efficient three-step cascaded deep learning model for automated detection and segmentation of DLBCL tumors from PET images.
Our model is more effective than a single end-to-end network for segmentation of tumors in whole-body PET images.
arXiv Detail & Related papers (2024-03-11T18:36:55Z) - Mask-Enhanced Segment Anything Model for Tumor Lesion Semantic Segmentation [48.107348956719775]
We introduce Mask-Enhanced SAM (M-SAM), an innovative architecture tailored for 3D tumor lesion segmentation.
We propose a novel Mask-Enhanced Adapter (MEA) within M-SAM that enriches the semantic information of medical images with positional data from coarse segmentation masks.
Our M-SAM achieves high segmentation accuracy and also exhibits robust generalization.
arXiv Detail & Related papers (2024-03-09T13:37:02Z) - A Localization-to-Segmentation Framework for Automatic Tumor
Segmentation in Whole-Body PET/CT Images [8.0523823243864]
This paper proposes a localization-to-segmentation framework (L2SNet) for precise tumor segmentation.
L2SNet first localizes the possible lesions in the lesion localization phase and then uses the location cues to shape the segmentation results in the lesion segmentation phase.
Experiments with the MII Automated Lesion in Whole-Body FDG-PET/CT challenge dataset show that our method achieved a competitive result.
arXiv Detail & Related papers (2023-09-11T13:39:15Z) - Contrastive Diffusion Model with Auxiliary Guidance for Coarse-to-Fine
PET Reconstruction [62.29541106695824]
This paper presents a coarse-to-fine PET reconstruction framework that consists of a coarse prediction module (CPM) and an iterative refinement module (IRM)
By delegating most of the computational overhead to the CPM, the overall sampling speed of our method can be significantly improved.
Two additional strategies, i.e., an auxiliary guidance strategy and a contrastive diffusion strategy, are proposed and integrated into the reconstruction process.
arXiv Detail & Related papers (2023-08-20T04:10:36Z) - ISA-Net: Improved spatial attention network for PET-CT tumor
segmentation [22.48294544919023]
We propose a deep learning segmentation method based on multimodal positron emission tomography-computed tomography (PET-CT)
We design an improved spatial attention network (ISA-Net) to increase the accuracy of PET or CT in detecting tumors.
We validated the proposed ISA-Net method on two clinical datasets: a soft tissue sarcoma (STS) dataset and a head and neck tumor (HECKTOR) dataset.
arXiv Detail & Related papers (2022-11-04T04:15:13Z) - Automatic Tumor Segmentation via False Positive Reduction Network for
Whole-Body Multi-Modal PET/CT Images [12.885308856495353]
In PET/CT image assessment, automatic tumor segmentation is an important step.
Existing methods tend to over-segment the tumor regions and include regions such as normal organs with high uptake, inflammation, and other infections.
We introduce a false positive reduction network to overcome this limitation.
arXiv Detail & Related papers (2022-09-16T04:01:14Z) - EMT-NET: Efficient multitask network for computer-aided diagnosis of
breast cancer [58.720142291102135]
We propose an efficient and light-weighted learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively.
arXiv Detail & Related papers (2022-01-13T05:24:40Z) - Multi-Scale Input Strategies for Medulloblastoma Tumor Classification
using Deep Transfer Learning [59.30734371401316]
Medulloblastoma is the most common malignant brain cancer among children.
CNN has shown promising results for MB subtype classification.
We study the impact of tile size and input strategy.
arXiv Detail & Related papers (2021-09-14T09:42:37Z) - Deep PET/CT fusion with Dempster-Shafer theory for lymphoma segmentation [17.623576885481747]
Lymphoma detection and segmentation from PET/CT volumes are crucial for surgical indication and radiotherapy.
We propose a lymphoma segmentation model using a U-Net with an evidential PET/CT fusion layer.
Our method achieves accurate segmentation results, with a Dice score of 0.726, without any user interaction.
arXiv Detail & Related papers (2021-08-11T19:24:40Z) - A Global Benchmark of Algorithms for Segmenting Late Gadolinium-Enhanced
Cardiac Magnetic Resonance Imaging [90.29017019187282]
The "2018 Left Atrium Challenge" used 154 3D LGE-MRIs, currently the world's largest cardiac LGE-MRI dataset.
The submitted algorithms were analysed using technical and biological metrics.
Results show the top method achieved a Dice score of 93.2% and a mean surface-to-surface distance of 0.7 mm.
arXiv Detail & Related papers (2020-04-26T08:49:17Z) - Stan: Small tumor-aware network for breast ultrasound image segmentation [68.8204255655161]
We propose a novel deep learning architecture called Small Tumor-Aware Network (STAN) to improve the performance of segmenting tumors of different sizes.
The proposed approach outperformed the state-of-the-art approaches in segmenting small breast tumors.
arXiv Detail & Related papers (2020-02-03T22:25:01Z)
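Many of the results listed above are reported as a Dice similarity coefficient (DSC), defined as twice the overlap of prediction and ground truth divided by their total size. As a point of reference, a minimal NumPy computation on toy binary masks (not data from any of these papers):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy 2x3 masks: 2 overlapping pixels out of 3 predicted and 3 true.
pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(pred, target), 3))  # prints 0.667
```

A DSC of 1.0 indicates perfect overlap and 0.0 indicates none, so a margin such as the 7.6% DSC improvement reported in the abstract is measured on this 0-to-1 scale.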
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.