Whole-Body Lesion Segmentation in 18F-FDG PET/CT
- URL: http://arxiv.org/abs/2209.07851v1
- Date: Fri, 16 Sep 2022 10:49:53 GMT
- Title: Whole-Body Lesion Segmentation in 18F-FDG PET/CT
- Authors: Jia Zhang, Yukun Huang, Zheng Zhang and Yuhang Shi
- Abstract summary: The proposed model is designed on the basis of a joint 2D and 3D nnUNet architecture to predict lesions across the whole body.
We evaluate the proposed method in the context of the AutoPET Challenge, which measures lesion segmentation performance with the metrics of Dice score, false-positive volume, and false-negative volume.
- Score: 11.662584140924725
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There has been growing research interest in using deep learning-based methods to achieve fully automated segmentation of lesions in positron emission tomography/computed tomography (PET/CT) scans for the prognosis of various cancers. Recent advances in medical image segmentation show that nnUNet is feasible for diverse tasks. However, lesion segmentation in PET images is not straightforward, because lesions and physiological uptake have similar distribution patterns; distinguishing them requires additional structural information from the CT images. This paper introduces an nnUNet-based method for the lesion segmentation task. The proposed model is designed on the basis of a joint 2D and 3D nnUNet architecture to predict lesions across the whole body, allowing automated segmentation of potential lesions. We evaluate the proposed method in the context of the AutoPET Challenge, which measures lesion segmentation performance with the metrics of Dice score, false-positive volume, and false-negative volume.
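To make the evaluation concrete, the sketch below shows a simplified, voxel-wise version of the three metrics named above (Dice score, false-positive volume, false-negative volume), together with a naive probability-averaging fusion of a 2D and a 3D model output. This is not the authors' implementation: the AutoPET Challenge's official metric code and the paper's actual 2D/3D fusion rule may differ, and the function names, threshold, and voxel size used here are illustrative assumptions.

```python
import numpy as np

def fuse_predictions(prob_2d, prob_3d, threshold=0.5):
    """Naive fusion of 2D and 3D model outputs: average the per-voxel
    lesion probabilities and threshold into a binary mask.
    (Illustrative only; the paper does not specify its fusion rule.)"""
    return ((prob_2d + prob_3d) / 2.0 >= threshold).astype(np.uint8)

def dice_score(pred, gt, eps=1e-8):
    """Dice coefficient between binary prediction and ground-truth masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

def false_positive_volume_ml(pred, gt, voxel_volume_ml):
    """Volume (ml) of voxels predicted as lesion that are not lesion in the
    ground truth (voxel-wise simplification of the challenge metric)."""
    return np.logical_and(pred.astype(bool), ~gt.astype(bool)).sum() * voxel_volume_ml

def false_negative_volume_ml(pred, gt, voxel_volume_ml):
    """Volume (ml) of ground-truth lesion voxels missed by the prediction
    (voxel-wise simplification of the challenge metric)."""
    return np.logical_and(~pred.astype(bool), gt.astype(bool)).sum() * voxel_volume_ml

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    shape = (64, 128, 128)                      # toy whole-body volume
    prob_2d = rng.random(shape)                 # stand-in for a 2D model's output
    prob_3d = rng.random(shape)                 # stand-in for a 3D model's output
    gt = (rng.random(shape) > 0.99).astype(np.uint8)
    pred = fuse_predictions(prob_2d, prob_3d)
    vox_ml = (2.0 * 2.0 * 3.0) / 1000.0         # assumed 2 x 2 x 3 mm voxels -> ml
    print("Dice:", dice_score(pred, gt))
    print("FP volume (ml):", false_positive_volume_ml(pred, gt, vox_ml))
    print("FN volume (ml):", false_negative_volume_ml(pred, gt, vox_ml))
```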
Related papers
- AutoPET III Challenge: Tumor Lesion Segmentation using ResEnc-Model Ensemble [1.3467243219009812]
We trained a 3D Residual Encoder U-Net within the nnU-Net ("no new U-Net") framework to improve the generalization of automatic lesion segmentation.
We leveraged test-time augmentations and other post-processing techniques to enhance tumor lesion segmentation; a generic flip-averaging sketch of test-time augmentation is given after this list.
Our team currently holds the top position in the AutoPET III challenge and outperformed the challenge baseline model on the preliminary test set with a Dice score of 0.9627.
arXiv Detail & Related papers (2024-09-19T20:18:39Z)
- Weakly-Supervised Detection of Bone Lesions in CT [48.34559062736031]
The skeletal region is one of the common sites of metastatic spread of breast and prostate cancer.
We developed a pipeline to detect bone lesions in CT volumes via a proxy segmentation task.
Our method detected bone lesions in CT with a precision of 96.7% and recall of 47.3% despite the use of incomplete and partial training data.
arXiv Detail & Related papers (2024-01-31T21:05:34Z)
- A Localization-to-Segmentation Framework for Automatic Tumor Segmentation in Whole-Body PET/CT Images [8.0523823243864]
This paper proposes a localization-to-segmentation framework (L2SNet) for precise tumor segmentation.
L2SNet first localizes the possible lesions in the lesion localization phase and then uses the location cues to shape the segmentation results in the lesion segmentation phase.
Experiments with the MII Automated Lesion in Whole-Body FDG-PET/CT challenge dataset show that our method achieved a competitive result.
arXiv Detail & Related papers (2023-09-11T13:39:15Z)
- Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z)
- A unified 3D framework for Organs at Risk Localization and Segmentation for Radiation Therapy Planning [56.52933974838905]
Current medical workflows require manual delineation of organs-at-risk (OARs).
In this work, we aim to introduce a unified 3D pipeline for OAR localization-segmentation.
Our proposed framework fully enables the exploitation of 3D context information inherent in medical imaging.
arXiv Detail & Related papers (2022-03-01T17:08:41Z)
- Evidential segmentation of 3D PET/CT images [20.65495780362289]
A segmentation method based on belief functions is proposed to segment lymphomas in 3D PET/CT images.
The architecture is composed of a feature extraction module and an evidential segmentation (ES) module.
The method was evaluated on a database of 173 patients with diffuse large B-cell lymphoma.
arXiv Detail & Related papers (2021-04-27T16:06:27Z)
- Learning Fuzzy Clustering for SPECT/CT Segmentation via Convolutional Neural Networks [5.3123694982708365]
Quantitative bone single-photon emission computed tomography (QBSPECT) has the potential to provide a better quantitative assessment of bone metastasis than planar bone scintigraphy.
The segmentation of anatomical regions-of-interest (ROIs) still relies heavily on manual delineation by experts.
This work proposes a fast and robust automated segmentation method for partitioning a QBSPECT image into lesion, bone, and background.
arXiv Detail & Related papers (2021-04-17T19:03:52Z)
- Implanting Synthetic Lesions for Improving Liver Lesion Segmentation in CT Exams [0.0]
We present a method for implanting realistic lesions in CT slices to provide a rich and controllable set of training samples.
We conclude that synthetically increasing the variability of lesions in terms of size, density, shape, and position seems to improve the performance of models for liver lesion segmentation in CT slices.
arXiv Detail & Related papers (2020-08-11T13:23:04Z)
- Synergistic Learning of Lung Lobe Segmentation and Hierarchical Multi-Instance Classification for Automated Severity Assessment of COVID-19 in CT Images [61.862364277007934]
We propose a synergistic learning framework for automated severity assessment of COVID-19 in 3D CT images.
A multi-task deep network (called M$^2$UNet) is then developed to assess the severity of COVID-19 patients.
Our M$^2$UNet consists of a patch-level encoder, a segmentation sub-network for lung lobe segmentation, and a classification sub-network for severity assessment.
arXiv Detail & Related papers (2020-05-08T03:16:15Z)
- Inf-Net: Automatic COVID-19 Lung Infection Segmentation from CT Images [152.34988415258988]
Automated detection of lung infections from computed tomography (CT) images offers a great potential to augment the traditional healthcare strategy for tackling COVID-19.
However, segmenting infected regions from CT slices faces several challenges, including high variation in infection characteristics and low intensity contrast between infections and normal tissues.
To address these challenges, a novel COVID-19 Deep Lung Infection Network (Inf-Net) is proposed to automatically identify infected regions from chest CT slices.
arXiv Detail & Related papers (2020-04-22T07:30:56Z)
- Residual Attention U-Net for Automated Multi-Class Segmentation of COVID-19 Chest CT Images [46.844349956057776]
Coronavirus disease 2019 (COVID-19) has been spreading rapidly around the world and has had a significant impact on public health and the economy.
There is still a lack of studies on effectively quantifying the lung infection caused by COVID-19.
We propose a novel deep learning algorithm for automated segmentation of multiple COVID-19 infection regions.
arXiv Detail & Related papers (2020-04-12T16:24:59Z)
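Several entries above lean on test-time augmentation, most directly the ResEnc-model ensemble at the top of the list. The sketch below shows one generic flip-based variant, assuming a model that maps a 3D volume to a same-shaped map of per-voxel lesion probabilities; it is an illustrative pattern rather than the implementation used in any of the cited papers, and `tta_flip_predict` and `dummy_model` are hypothetical names.

```python
import itertools
import numpy as np

def tta_flip_predict(model, volume):
    """Test-time augmentation by axis flips: run the model on every
    combination of flips along the three spatial axes, undo each flip on
    the output, and average the probability maps.

    `model` is any callable mapping a (D, H, W) array to per-voxel lesion
    probabilities of the same shape (hypothetical interface)."""
    probs = []
    for axes in itertools.chain.from_iterable(
            itertools.combinations((0, 1, 2), r) for r in range(4)):
        flipped = np.flip(volume, axis=axes) if axes else volume
        pred = model(flipped)
        probs.append(np.flip(pred, axis=axes) if axes else pred)
    return np.mean(probs, axis=0)

if __name__ == "__main__":
    # Toy "model": a normalizing stand-in, just to show the call pattern.
    dummy_model = lambda v: v / (v.max() + 1e-8)
    volume = np.random.default_rng(0).random((32, 64, 64))
    averaged = tta_flip_predict(dummy_model, volume)
    print(averaged.shape)
```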
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.