ISA-Net: Improved spatial attention network for PET-CT tumor
segmentation
- URL: http://arxiv.org/abs/2211.02256v1
- Date: Fri, 4 Nov 2022 04:15:13 GMT
- Title: ISA-Net: Improved spatial attention network for PET-CT tumor
segmentation
- Authors: Zhengyong Huang, Sijuan Zou, Guoshuai Wang, Zixiang Chen, Hao Shen,
Haiyan Wang, Na Zhang, Lu Zhang, Fan Yang, Haining Wang, Dong Liang, Tianye
Niu, Xiaohua Zhu, Zhanli Hu
- Abstract summary: We propose a deep learning segmentation method based on multimodal positron emission tomography-computed tomography (PET-CT).
We design an improved spatial attention network (ISA-Net) to increase the accuracy of PET or CT in detecting tumors.
We validated the proposed ISA-Net method on two clinical datasets, a soft tissue sarcoma (STS) dataset and a head and neck tumor (HECKTOR) dataset.
- Score: 22.48294544919023
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Achieving accurate and automated tumor segmentation plays an important role
in both clinical practice and radiomics research. Segmentation in medicine is
now often performed manually by experts, which is a laborious, expensive and
error-prone task. Manual annotation relies heavily on the experience and
knowledge of these experts. In addition, there is considerable intra- and
inter-observer variation. Therefore, it is of great significance to develop a
method that can
automatically segment tumor target regions. In this paper, we propose a deep
learning segmentation method based on multimodal positron emission
tomography-computed tomography (PET-CT), which combines the high sensitivity of
PET and the precise anatomical information of CT. We design an improved spatial
attention network (ISA-Net) to increase the accuracy of PET or CT in detecting
tumors. ISA-Net uses multi-scale convolution operations to extract feature
information, highlighting the location information of tumor regions and
suppressing that of non-tumor regions. In addition, our network uses
dual-channel inputs in the encoding stage and fuses them in the decoding stage,
which takes advantage of the differences and complementarities between PET and
CT. We validated the proposed ISA-Net method on two clinical datasets, a soft
tissue sarcoma (STS) dataset and a head and neck tumor (HECKTOR) dataset, and
compared it with other attention methods for tumor segmentation. DSC scores of
0.8378 on the STS dataset and 0.8076 on the HECKTOR dataset show that ISA-Net
achieves better segmentation performance and generalizes better. Conclusions:
The proposed method addresses multi-modal medical image tumor segmentation and
can effectively exploit the differences and complementarity between modalities.
With proper adjustment, it can also be applied to other multi-modal or even
single-modal data.
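The abstract describes the two key ingredients only at a high level: a spatial attention block built from multi-scale convolutions, and separate PET and CT encoding branches that are fused for decoding. The sketch below is a minimal PyTorch illustration of that idea, not the authors' implementation; the layer widths, kernel sizes, backbone depth and fusion rule are all assumptions.

```python
# Minimal PyTorch sketch of a multi-scale spatial attention block and a
# dual-branch PET/CT fusion, loosely following the abstract's description.
# Layer widths, kernel sizes and the fusion rule are illustrative guesses,
# not the published ISA-Net implementation.
import torch
import torch.nn as nn


class MultiScaleSpatialAttention(nn.Module):
    """Builds a per-pixel attention map from features at several scales."""

    def __init__(self, channels: int):
        super().__init__()
        # Parallel convolutions with different receptive fields (assumed sizes).
        self.branch3 = nn.Conv2d(channels, channels // 2, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels // 2, kernel_size=5, padding=2)
        self.branch7 = nn.Conv2d(channels, channels // 2, kernel_size=7, padding=3)
        # Squeeze the concatenated multi-scale features into a single-channel map.
        self.to_map = nn.Conv2d(3 * (channels // 2), 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.branch3(x), self.branch5(x), self.branch7(x)], dim=1)
        attn = torch.sigmoid(self.to_map(feats))   # attention weights in (0, 1)
        return x * attn                            # emphasize tumor regions, suppress the rest


class DualBranchSegmenter(nn.Module):
    """PET and CT are encoded separately, then fused before a shared decoder."""

    def __init__(self, width: int = 32):
        super().__init__()

        def encoder() -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(1, width, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            )

        self.pet_enc = encoder()
        self.ct_enc = encoder()
        self.attention = MultiScaleSpatialAttention(2 * width)
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 1, kernel_size=1),    # single-channel tumor logits
        )

    def forward(self, pet: torch.Tensor, ct: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.pet_enc(pet), self.ct_enc(ct)], dim=1)
        return self.decoder(self.attention(fused))


def dice_score(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """DSC = 2|A ∩ B| / (|A| + |B|), the metric reported in the abstract."""
    pred = (torch.sigmoid(logits) > 0.5).float()
    intersection = (pred * target).sum()
    return (2 * intersection + eps) / (pred.sum() + target.sum() + eps)


if __name__ == "__main__":
    pet = torch.rand(1, 1, 128, 128)               # toy single-slice inputs
    ct = torch.rand(1, 1, 128, 128)
    logits = DualBranchSegmenter()(pet, ct)
    print(logits.shape, dice_score(logits, (pet > 0.5).float()).item())
```

The published ISA-Net is of course a full segmentation backbone; the toy module above only shows where a multi-scale spatial attention map could enter a dual-branch PET-CT pipeline, together with the Dice similarity coefficient (DSC) used as the evaluation metric.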
Related papers
- Multi-modal Evidential Fusion Network for Trusted PET/CT Tumor Segmentation [5.839660501978193] (2024-06-26)
The quality of PET and CT images varies widely in clinical settings, which leads to uncertainty in the modality information extracted by networks.
This paper proposes a novel Multi-modal Evidential Fusion Network (MEFN) comprising a Cross-Modal Feature Learning (CFL) module and a Multi-modal Trusted Fusion (MTF) module.
Our model can provide radiologists with credible uncertainty estimates for the segmentation results, supporting their decision to accept or reject the automatic segmentation.
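The entry above promises per-case uncertainty but does not say how it is computed. A common recipe for this (standard evidential deep learning with a Dirichlet parameterization; not necessarily MEFN's exact formulation) looks roughly like this:

```python
# Hedged sketch: turn raw class logits into Dirichlet evidence and a per-voxel
# uncertainty score, following the usual evidential-deep-learning recipe.
# MEFN's actual formulation may differ in its details.
import torch
import torch.nn.functional as F


def evidential_uncertainty(logits: torch.Tensor):
    """logits: (batch, num_classes, H, W) raw segmentation outputs."""
    evidence = F.softplus(logits)               # non-negative evidence per class
    alpha = evidence + 1.0                      # Dirichlet concentration parameters
    strength = alpha.sum(dim=1, keepdim=True)   # total evidence S per voxel
    prob = alpha / strength                     # expected class probabilities
    uncertainty = logits.shape[1] / strength    # u = K / S, in (0, 1]
    return prob, uncertainty


probs, u = evidential_uncertainty(torch.randn(1, 2, 64, 64))
print(u.mean().item())  # high values flag voxels worth a manual check
```

Voxels or whole cases with high uncertainty would then be routed back to the radiologist rather than accepted automatically.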
arXiv Detail & Related papers (2024-06-26T13:14:24Z) - Cross-modality Guidance-aided Multi-modal Learning with Dual Attention
for MRI Brain Tumor Grading [47.50733518140625]
Brain tumors are among the most fatal cancers worldwide and are common in both children and the elderly.
We propose a novel cross-modality guidance-aided multi-modal learning with dual attention for addressing the task of MRI brain tumor grading.
arXiv Detail & Related papers (2024-01-17T07:54:49Z) - A Localization-to-Segmentation Framework for Automatic Tumor
Segmentation in Whole-Body PET/CT Images [8.0523823243864]
This paper proposes a localization-to-segmentation framework (L2SNet) for precise tumor segmentation.
L2SNet first localizes the possible lesions in the lesion localization phase and then uses the location cues to shape the segmentation results in the lesion segmentation phase.
Experiments with the MII Automated Lesion in Whole-Body FDG-PET/CT challenge dataset show that our method achieved a competitive result.
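The "location cues shape the segmentation" idea above can be illustrated generically: a coarse localization map proposes a region, and the fine segmentation is kept only inside it. The numpy sketch below is a toy version of that pattern with an assumed bounding-box cue; it is not the L2SNet implementation.

```python
# Generic localize-then-segment sketch (not the L2SNet code): a coarse
# localization map proposes a box, and the fine segmentation is only kept
# inside that box.
import numpy as np


def box_from_heatmap(heatmap: np.ndarray, thresh: float = 0.5, margin: int = 8):
    """Bounding box (with margin) around all pixels above `thresh`."""
    ys, xs = np.where(heatmap > thresh)
    if ys.size == 0:                      # no candidate lesion found
        return None
    y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin, heatmap.shape[0])
    x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin, heatmap.shape[1])
    return y0, y1, x0, x1


def gate_segmentation(seg_prob: np.ndarray, box) -> np.ndarray:
    """Zero out segmentation probabilities outside the localization box."""
    gated = np.zeros_like(seg_prob)
    if box is not None:
        y0, y1, x0, x1 = box
        gated[y0:y1, x0:x1] = seg_prob[y0:y1, x0:x1]
    return gated


loc = np.random.rand(128, 128)            # stand-in for a localization heatmap
seg = np.random.rand(128, 128)            # stand-in for a segmentation map
final_mask = gate_segmentation(seg, box_from_heatmap(loc)) > 0.5
```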
arXiv Detail & Related papers (2023-09-11T13:39:15Z) - A unified 3D framework for Organs at Risk Localization and Segmentation
for Radiation Therapy Planning [56.52933974838905]
The current medical workflow requires manual delineation of organs-at-risk (OAR).
In this work, we aim to introduce a unified 3D pipeline for OAR localization-segmentation.
Our proposed framework fully enables the exploitation of 3D context information inherent in medical imaging.
arXiv Detail & Related papers (2022-03-01T17:08:41Z) - EMT-NET: Efficient multitask network for computer-aided diagnosis of
breast cancer [58.720142291102135]
We propose an efficient and lightweight learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively.
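The entry above describes a standard multitask layout: one shared backbone with a classification head and a segmentation head trained together, so the shared features stay tumor-focused. A minimal sketch of that layout (layer sizes and names are placeholders, not the EMT-NET design):

```python
# Minimal shared-backbone, two-head multitask sketch (classification +
# segmentation); layer sizes are placeholders, not the EMT-NET design.
import torch
import torch.nn as nn


class TwoHeadNet(nn.Module):
    def __init__(self, width: int = 16, num_classes: int = 2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.cls_head = nn.Sequential(                    # benign / malignant logits
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(width, num_classes)
        )
        self.seg_head = nn.Conv2d(width, 1, 1)            # per-pixel tumor logit

    def forward(self, x):
        feats = self.backbone(x)
        return self.cls_head(feats), self.seg_head(feats)


cls_logits, seg_logits = TwoHeadNet()(torch.rand(2, 1, 96, 96))
# Training would combine the two losses, e.g.
# loss = ce(cls_logits, labels) + lambda_seg * bce(seg_logits, masks)
```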
arXiv Detail & Related papers (2022-01-13T05:24:40Z) - Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z) - Segmentation of Lung Tumor from CT Images using Deep Supervision [0.8733639720576208]
Lung cancer is a leading cause of death in most countries of the world.
This paper approaches lung tumor segmentation by applying two-dimensional discrete wavelet transform (DWT) on the LOTUS dataset.
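For readers unfamiliar with the preprocessing step mentioned above: a single-level 2D discrete wavelet transform splits a slice into a low-frequency approximation and three directional detail sub-bands. A minimal PyWavelets example (the paper's wavelet family and decomposition level are not given in the summary, so 'haar' and one level are assumptions):

```python
# Single-level 2D DWT of a stand-in CT slice with PyWavelets; the wavelet
# family used by the paper is not stated in the summary, 'haar' is assumed.
import numpy as np
import pywt

slice_ct = np.random.rand(256, 256).astype(np.float32)   # stand-in for a CT slice
cA, (cH, cV, cD) = pywt.dwt2(slice_ct, "haar")            # approximation + H/V/D details
print(cA.shape)                                           # (128, 128): half resolution
# The four sub-bands could then be stacked as extra input channels for a segmenter.
```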
arXiv Detail & Related papers (2021-11-17T17:50:18Z) - Cross-Modality Brain Tumor Segmentation via Bidirectional
Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z) - Multimodal Spatial Attention Module for Targeting Multimodal PET-CT Lung
Tumor Segmentation [11.622615048002567]
The multimodal spatial attention module (MSAM) learns to emphasize regions related to tumors.
MSAM can be applied to common backbone architectures and trained end-to-end.
arXiv Detail & Related papers (2020-07-29T10:27:22Z) - A Novel and Efficient Tumor Detection Framework for Pancreatic Cancer
via CT Images [21.627818410241552]
A novel and efficient pancreatic tumor detection framework is proposed in this paper.
The contribution of the proposed method mainly consists of three components: Augmented Feature Pyramid networks, Self-adaptive Feature Fusion and a Dependencies Computation Module.
Experimental results show competitive detection performance with an AUC of 0.9455, which, to the best of our knowledge, outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2020-02-11T15:48:22Z) - Stan: Small tumor-aware network for breast ultrasound image segmentation [68.8204255655161]
We propose a novel deep learning architecture called Small Tumor-Aware Network (STAN) to improve the performance of segmenting tumors of different sizes.
The proposed approach outperformed the state-of-the-art approaches in segmenting small breast tumors.
arXiv Detail & Related papers (2020-02-03T22:25:01Z)