A Localization-to-Segmentation Framework for Automatic Tumor
Segmentation in Whole-Body PET/CT Images
- URL: http://arxiv.org/abs/2309.05446v2
- Date: Thu, 14 Sep 2023 14:30:04 GMT
- Title: A Localization-to-Segmentation Framework for Automatic Tumor
Segmentation in Whole-Body PET/CT Images
- Authors: Linghan Cai, Jianhao Huang, Zihang Zhu, Jinpeng Lu, and Yongbing Zhang
- Abstract summary: This paper proposes a localization-to-segmentation framework (L2SNet) for precise tumor segmentation.
L2SNet first localizes the possible lesions in the lesion localization phase and then uses the location cues to shape the segmentation results in the lesion segmentation phase.
Experiments with the MICCAI 2023 Automated Lesion Segmentation in Whole-Body FDG-PET/CT challenge dataset show that our method achieved a competitive result.
- Score: 8.0523823243864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fluorodeoxyglucose (FDG) positron emission tomography (PET) combined with
computed tomography (CT) is considered the primary solution for detecting some
cancers, such as lung cancer and melanoma. Automatic segmentation of tumors in
PET/CT images can help reduce doctors' workload, thereby improving diagnostic
quality. However, precise tumor segmentation is challenging due to the small
size of many tumors and the similarity of high-uptake normal areas to the tumor
regions. To address these issues, this paper proposes a
localization-to-segmentation framework (L2SNet) for precise tumor segmentation.
L2SNet first localizes the possible lesions in the lesion localization phase
and then uses the location cues to shape the segmentation results in the lesion
segmentation phase. To further improve the segmentation performance of L2SNet,
we design an adaptive threshold scheme that takes the segmentation results of
the two phases into consideration. The experiments with the MICCAI 2023
Automated Lesion Segmentation in Whole-Body FDG-PET/CT challenge dataset show
that our method achieved a competitive result and was ranked in the top 7
methods on the preliminary test set. Our work is available at:
https://github.com/MedCAI/L2SNet.
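The two-phase design with an adaptive threshold can be sketched roughly as follows. The function name, threshold values, and fusion rule here are illustrative assumptions rather than the paper's exact scheme (the linked repository holds the actual implementation): where the localization phase is confident, a relaxed threshold is applied to the segmentation probabilities so small tumors are kept; elsewhere a stricter threshold suppresses high-uptake normal regions.

```python
import numpy as np

def adaptive_threshold_fusion(loc_prob, seg_prob, t_strict=0.5, t_relaxed=0.3):
    """Combine lesion-localization and lesion-segmentation outputs.

    loc_prob, seg_prob: per-voxel probability maps from the two phases.
    Where the localization phase flags a likely lesion (>= t_strict),
    the relaxed threshold is used on the segmentation probabilities;
    elsewhere the strict threshold applies. (Illustrative rule only.)
    """
    lesion_cue = loc_prob >= t_strict
    threshold = np.where(lesion_cue, t_relaxed, t_strict)
    return (seg_prob >= threshold).astype(np.uint8)
```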
Related papers
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework (DEC-Seg) for semi-supervised medical image segmentation.
arXiv Detail & Related papers (2023-12-26T12:56:31Z)
- Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z)
- Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z)
- Generative Adversarial Networks for Weakly Supervised Generation and Evaluation of Brain Tumor Segmentations on MR Images [0.0]
This work presents a weakly supervised approach to segment anomalies in 2D magnetic resonance images.
We train a generative adversarial network (GAN) that converts cancerous images to healthy variants.
Non-cancerous variants can also be used to evaluate the segmentations in a weakly supervised fashion.
arXiv Detail & Related papers (2022-11-10T00:04:46Z)
- ISA-Net: Improved spatial attention network for PET-CT tumor segmentation [22.48294544919023]
We propose a deep learning segmentation method based on multimodal positron emission tomography-computed tomography (PET-CT) images.
We design an improved spatial attention network (ISA-Net) to increase the accuracy of PET or CT in detecting tumors.
We validated the proposed ISA-Net method on two clinical datasets: a soft tissue sarcoma (STS) dataset and a head and neck tumor (HECKTOR) dataset.
arXiv Detail & Related papers (2022-11-04T04:15:13Z)
- Whole-Body Lesion Segmentation in 18F-FDG PET/CT [11.662584140924725]
The proposed model is designed on the basis of a joint 2D and 3D nnU-Net architecture to predict lesions across the whole body.
We evaluate the proposed method in the context of the AutoPET Challenge, which measures lesion segmentation performance using the metrics of Dice score, false-positive volume, and false-negative volume.
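The three challenge metrics mentioned here can be sketched in simplified, voxel-wise form as below. Note that the challenge's official false-positive and false-negative volumes are computed per connected component, which this sketch deliberately does not attempt; the voxel spacing parameter is a hypothetical placeholder.

```python
import numpy as np

def dice_score(pred, gt):
    """Dice coefficient between two binary masks (1.0 if both empty)."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

def false_positive_volume(pred, gt, voxel_volume_ml=1.0):
    """Volume of predicted lesion voxels with no ground-truth lesion."""
    return np.logical_and(pred, ~gt).sum() * voxel_volume_ml

def false_negative_volume(pred, gt, voxel_volume_ml=1.0):
    """Volume of ground-truth lesion voxels the prediction missed."""
    return np.logical_and(~pred, gt).sum() * voxel_volume_ml
```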
arXiv Detail & Related papers (2022-09-16T10:49:53Z)
- Automatic Tumor Segmentation via False Positive Reduction Network for Whole-Body Multi-Modal PET/CT Images [12.885308856495353]
In PET/CT image assessment, automatic tumor segmentation is an important step.
Existing methods tend to over-segment the tumor regions and include regions such as normal high-uptake organs, inflammation, and other infections.
We introduce a false positive reduction network to overcome this limitation.
arXiv Detail & Related papers (2022-09-16T04:01:14Z)
- Improving Classification Model Performance on Chest X-Rays through Lung Segmentation [63.45024974079371]
We propose a deep learning approach to enhance abnormal chest X-ray (CXR) identification performance through lung segmentation.
Our approach is designed in a cascaded manner and incorporates two modules: a deep neural network with criss-cross attention modules (XLSor) for localizing the lung region in CXR images, and a CXR classification model with a backbone of a self-supervised momentum contrast (MoCo) model pre-trained on large-scale CXR datasets.
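A cascaded segment-then-classify pipeline of this kind can be sketched with a cropping step between the two modules. The helper below is a hypothetical illustration of how a lung mask from the first module might be used to focus the downstream classifier, not the paper's actual code:

```python
import numpy as np

def crop_to_mask_bbox(image, lung_mask, margin=8):
    """Crop an image to the bounding box of a binary lung mask,
    padded by `margin` pixels, so the classifier sees mostly lung."""
    ys, xs = np.nonzero(lung_mask)
    y0 = max(0, ys.min() - margin)
    x0 = max(0, xs.min() - margin)
    y1 = min(image.shape[0], ys.max() + 1 + margin)
    x1 = min(image.shape[1], xs.max() + 1 + margin)
    return image[y0:y1, x0:x1]
```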
arXiv Detail & Related papers (2022-02-22T15:24:06Z)
- Multimodal Spatial Attention Module for Targeting Multimodal PET-CT Lung Tumor Segmentation [11.622615048002567]
The multimodal spatial attention module (MSAM) learns to emphasize regions related to tumors.
MSAM can be applied to common backbone architectures and trained end-to-end.
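A toy version of such a spatial attention module, with a PET-derived per-pixel gate applied to the backbone's feature maps, might look like the following. This is a schematic numpy sketch under assumed shapes, not the paper's trainable end-to-end module:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def multimodal_spatial_attention(pet_feat, ct_feat):
    """Gate CT feature maps with a PET-derived spatial attention map.

    pet_feat: (H, W) activation map from the PET branch.
    ct_feat:  (C, H, W) feature maps from the backbone.
    Returns ct_feat reweighted so tumor-like PET regions are emphasized.
    """
    attn = sigmoid(pet_feat)           # attention weights in (0, 1)
    return ct_feat * attn[None, :, :]  # broadcast across channels
```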
arXiv Detail & Related papers (2020-07-29T10:27:22Z)
- A Global Benchmark of Algorithms for Segmenting Late Gadolinium-Enhanced Cardiac Magnetic Resonance Imaging [90.29017019187282]
" 2018 Left Atrium Challenge" using 154 3D LGE-MRIs, currently the world's largest cardiac LGE-MRI dataset.
Analyse of the submitted algorithms using technical and biological metrics was performed.
Results show the top method achieved a dice score of 93.2% and a mean surface to a surface distance of 0.7 mm.
arXiv Detail & Related papers (2020-04-26T08:49:17Z)
- Stan: Small tumor-aware network for breast ultrasound image segmentation [68.8204255655161]
We propose a novel deep learning architecture called Small Tumor-Aware Network (STAN) to improve the performance of segmenting tumors of different sizes.
The proposed approach outperformed the state-of-the-art approaches in segmenting small breast tumors.
arXiv Detail & Related papers (2020-02-03T22:25:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.