Improving Lesion Segmentation in FDG-18 Whole-Body PET/CT scans using
Multilabel approach: AutoPET II challenge
- URL: http://arxiv.org/abs/2311.01574v1
- Date: Thu, 2 Nov 2023 19:51:54 GMT
- Title: Improving Lesion Segmentation in FDG-18 Whole-Body PET/CT scans using
Multilabel approach: AutoPET II challenge
- Authors: Gowtham Krishnan Murugesan, Diana McCrumb, Eric Brunner, Jithendra
Kumar, Rahul Soni, Vasily Grigorash, Stephen Moore, and Jeff Van Oss
- Abstract summary: The presence of organs with elevated radiotracer uptake, such as the liver, spleen, brain, and bladder, often leads to challenges, as these regions are frequently misidentified as lesions.
We propose a novel approach of segmenting both organs and lesions, aiming to enhance the performance of automatic lesion segmentation methods.
Our results demonstrate that our method achieved the top ranking in the held-out test dataset.
- Score: 0.039550447522409014
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic segmentation of lesions in FDG-18 Whole Body (WB) PET/CT scans
using deep learning models is instrumental for determining treatment response,
optimizing dosimetry, and advancing theranostic applications in oncology.
However, the presence of organs with elevated radiotracer uptake, such as the
liver, spleen, brain, and bladder, often leads to challenges, as these regions
are often misidentified as lesions by deep learning models. To address this
issue, we propose a novel approach of segmenting both organs and lesions,
aiming to enhance the performance of automatic lesion segmentation methods. In
this study, we assessed the effectiveness of our proposed method using the
AutoPET II challenge dataset, which comprises 1014 subjects. We evaluated the
impact of including additional labels and data on the segmentation performance
of the model. In addition to the expert-annotated lesion labels, we
introduced eight additional labels for organs, including the liver, kidneys,
urinary bladder, spleen, lung, brain, heart, and stomach. These labels were
integrated into the dataset, and a 3D UNET model was trained within the nnUNet
framework. Our results demonstrate that our method achieved the top ranking in
the held-out test dataset, underscoring the potential of this approach to
significantly improve lesion segmentation accuracy in FDG-18 Whole-Body PET/CT
scans, ultimately benefiting cancer patients and advancing clinical practice.
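As a rough illustration of the multilabel idea (a sketch only, not the authors' released code), the snippet below merges an expert lesion annotation with per-organ masks into a single multilabel training target for a segmentation framework such as nnU-Net. The file layout, the label indices, and the source of the organ masks (e.g. an automated organ-segmentation tool) are assumptions; the abstract does not specify them.

```python
# Sketch: build a multilabel target by combining lesion and organ masks.
# Assumptions: a binary lesion mask and per-organ binary masks exist as
# NIfTI files with identical shape; label indices are illustrative only
# (label 1 = lesion, labels 2-9 = organs).
import numpy as np
import nibabel as nib

ORGAN_LABELS = {
    "liver": 2, "kidneys": 3, "urinary_bladder": 4, "spleen": 5,
    "lung": 6, "brain": 7, "heart": 8, "stomach": 9,
}

def build_multilabel_mask(lesion_path: str, organ_paths: dict) -> nib.Nifti1Image:
    lesion_img = nib.load(lesion_path)
    combined = np.zeros(lesion_img.shape, dtype=np.uint8)

    # Write organs first and lesions last, so lesion voxels win any overlap.
    for organ, label in ORGAN_LABELS.items():
        organ_mask = nib.load(organ_paths[organ]).get_fdata() > 0.5
        combined[organ_mask] = label
    combined[lesion_img.get_fdata() > 0.5] = 1

    return nib.Nifti1Image(combined, lesion_img.affine, lesion_img.header)
```

At evaluation time only the lesion label would typically be scored; the organ labels are there to absorb the physiological uptake that a lesion-only model might otherwise flag as disease.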
Related papers
- AutoPET III Challenge: Tumor Lesion Segmentation using ResEnc-Model Ensemble [1.3467243219009812]
We trained a 3D Residual encoder U-Net within the no new U-Net framework to generalize the performance of automatic lesion segmentation.
We leveraged test-time augmentations and other post-processing techniques to enhance tumor lesion segmentation.
Our team currently holds the top position in the Auto-PET III challenge and outperformed the challenge baseline model on the preliminary test set with a Dice score of 0.9627.
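Flip-based test-time augmentation of the kind mentioned above can be sketched as follows; the model interface and the use of softmax averaging are assumptions for illustration, not the team's actual post-processing pipeline.

```python
# Sketch: mirror-flip test-time augmentation for a 3D segmentation network.
# `model` is assumed to map a (B, C, D, H, W) tensor to per-voxel class logits.
import itertools
import torch

@torch.no_grad()
def tta_predict(model: torch.nn.Module, volume: torch.Tensor) -> torch.Tensor:
    spatial_axes = (2, 3, 4)  # D, H, W
    flip_sets = [axes for k in range(4)
                 for axes in itertools.combinations(spatial_axes, k)]
    probs = None
    for axes in flip_sets:  # 8 combinations, including the identity (no flip)
        x = torch.flip(volume, dims=axes) if axes else volume
        pred = model(x).softmax(dim=1)
        if axes:
            pred = torch.flip(pred, dims=axes)  # undo the flip before averaging
        probs = pred if probs is None else probs + pred
    return probs / len(flip_sets)
```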
arXiv Detail & Related papers (2024-09-19T20:18:39Z)
- Autopet III challenge: Incorporating anatomical knowledge into nnUNet for lesion segmentation in PET/CT [4.376648893167674]
The autoPET III Challenge focuses on advancing automated segmentation of tumor lesions in PET/CT images.
We developed a classifier that identifies the tracer of the given PET/CT based on the Maximum Intensity Projection of the PET scan.
Our final submission achieves cross-validation Dice scores of 76.90% and 61.33% for the publicly available FDG and PSMA datasets, respectively.
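A tracer classifier driven by a Maximum Intensity Projection (MIP) of the PET volume, as described above, might look roughly like the sketch below; the projection axis, the SUV normalisation, and the small 2D CNN are illustrative assumptions rather than the authors' implementation.

```python
# Sketch: classify the tracer (e.g. FDG vs. PSMA) from a 2D MIP of the PET scan.
import numpy as np
import torch
import torch.nn as nn

def pet_mip(pet_volume: np.ndarray) -> torch.Tensor:
    """Collapse a (D, H, W) SUV volume to a 2D maximum-intensity projection."""
    mip = pet_volume.max(axis=1)                      # project along one axis (assumption)
    mip = np.clip(mip, 0, 20) / 20.0                  # crude SUV normalisation (assumption)
    return torch.from_numpy(mip).float()[None, None]  # shape (1, 1, D, W)

classifier = nn.Sequential(                           # tiny 2D CNN, illustration only
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),                                 # logits for [FDG, PSMA]
)

# Usage: logits = classifier(pet_mip(pet_suv_array))
```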
arXiv Detail & Related papers (2024-09-18T17:16:57Z)
- From FDG to PSMA: A Hitchhiker's Guide to Multitracer, Multicenter Lesion Segmentation in PET/CT Imaging [0.9384264274298444]
We present our solution for the autoPET III challenge, targeting multitracer, multicenter generalization using the nnU-Net framework with the ResEncL architecture.
Key techniques include misalignment data augmentation and multi-modal pretraining across CT, MR, and PET datasets.
Compared to the default nnU-Net, which achieved a Dice score of 57.61, our model significantly improved performance with a Dice score of 68.40, alongside a reduction in false positive (FPvol: 7.82) and false negative (FNvol: 10.35) volumes.
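The misalignment data augmentation mentioned above can be imitated by randomly translating one modality relative to the other; the voxel shift range and interpolation settings below are assumptions, not the published configuration.

```python
# Sketch: simulate PET/CT misregistration by shifting the CT volume a few
# voxels while the PET volume and the labels stay fixed.
import numpy as np
from scipy.ndimage import shift as nd_shift

def random_misalignment(ct: np.ndarray, max_shift_vox: float = 3.0, rng=None) -> np.ndarray:
    """Return a copy of the CT translated by a small random offset (in voxels)."""
    rng = rng or np.random.default_rng()
    offset = rng.uniform(-max_shift_vox, max_shift_vox, size=ct.ndim)
    # Moving only the CT forces the network to tolerate small registration
    # errors between the modalities.
    return nd_shift(ct, offset, order=1, mode="nearest")
```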
arXiv Detail & Related papers (2024-09-14T16:39:17Z)
- AutoPET Challenge: Tumour Synthesis for Data Augmentation [26.236831356731017]
We adapt the DiffTumor method, originally designed for CT images, to generate synthetic PET-CT images with lesions.
Our approach trains the generative model on the AutoPET dataset and uses it to expand the training data.
Our findings show that the model trained on the augmented dataset achieves a higher Dice score, demonstrating the potential of our data augmentation approach.
arXiv Detail & Related papers (2024-09-12T14:23:19Z)
- Shape Matters: Detecting Vertebral Fractures Using Differentiable Point-Based Shape Decoding [51.38395069380457]
Degenerative spinal pathologies are highly prevalent among the elderly population.
Timely diagnosis of osteoporotic fractures and other degenerative deformities facilitates proactive measures to mitigate the risk of severe back pain and disability.
In this study, we specifically explore the use of shape auto-encoders for vertebrae.
arXiv Detail & Related papers (2023-12-08T18:11:22Z)
- Adaptive Semi-Supervised Segmentation of Brain Vessels with Ambiguous Labels [63.415444378608214]
Our approach incorporates innovative techniques including progressive semi-supervised learning, an adaptive training strategy, and boundary enhancement.
Experimental results on 3DRA datasets demonstrate the superiority of our method in terms of mesh-based segmentation metrics.
arXiv Detail & Related papers (2023-08-07T14:16:52Z)
- Whole-Body Lesion Segmentation in 18F-FDG PET/CT [11.662584140924725]
The proposed model is designed on the basis of the joint 2D and 3D nnUNET architecture to predict lesions across the whole body.
We evaluate the proposed method in the context of the AutoPET Challenge, which measures lesion segmentation performance using the metrics of Dice score, false-positive volume, and false-negative volume.
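These three metrics can be approximated at the voxel level as in the sketch below; the official AutoPET evaluation additionally reasons over connected components, so treat this as a simplification.

```python
# Sketch: simplified voxel-level versions of the AutoPET evaluation metrics.
import numpy as np

def dice_fpvol_fnvol(pred: np.ndarray, gt: np.ndarray, voxel_vol_ml: float):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2.0 * inter / max(pred.sum() + gt.sum(), 1)
    fp_vol = np.logical_and(pred, ~gt).sum() * voxel_vol_ml  # false-positive volume (ml)
    fn_vol = np.logical_and(~pred, gt).sum() * voxel_vol_ml  # false-negative volume (ml)
    return dice, fp_vol, fn_vol
```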
arXiv Detail & Related papers (2022-09-16T10:49:53Z)
- An Uncertainty-Driven GCN Refinement Strategy for Organ Segmentation [53.425900196763756]
We propose a segmentation refinement method based on uncertainty analysis and graph convolutional networks.
We employ the uncertainty levels of the convolutional network in a particular input volume to formulate a semi-supervised graph learning problem.
We show that our method outperforms the state-of-the-art CRF refinement method by improving the Dice score by 1% for the pancreas and 2% for the spleen.
arXiv Detail & Related papers (2020-12-06T18:55:07Z)
- A Global Benchmark of Algorithms for Segmenting Late Gadolinium-Enhanced Cardiac Magnetic Resonance Imaging [90.29017019187282]
" 2018 Left Atrium Challenge" using 154 3D LGE-MRIs, currently the world's largest cardiac LGE-MRI dataset.
Analyse of the submitted algorithms using technical and biological metrics was performed.
Results show the top method achieved a dice score of 93.2% and a mean surface to a surface distance of 0.7 mm.
arXiv Detail & Related papers (2020-04-26T08:49:17Z)
- Inf-Net: Automatic COVID-19 Lung Infection Segmentation from CT Images [152.34988415258988]
Automated detection of lung infections from computed tomography (CT) images offers a great potential to augment the traditional healthcare strategy for tackling COVID-19.
However, segmenting infected regions from CT slices faces several challenges, including high variation in infection characteristics and low intensity contrast between infections and normal tissues.
To address these challenges, a novel COVID-19 Deep Lung Infection Network (Inf-Net) is proposed to automatically identify infected regions from chest CT slices.
arXiv Detail & Related papers (2020-04-22T07:30:56Z)
- Automatic Data Augmentation via Deep Reinforcement Learning for Effective Kidney Tumor Segmentation [57.78765460295249]
We develop a novel automatic learning-based data augmentation method for medical image segmentation.
In our method, we innovatively combine the data augmentation module and the subsequent segmentation module in an end-to-end training manner with a consistent loss.
We extensively evaluated our method on CT kidney tumor segmentation which validated the promising results of our method.
arXiv Detail & Related papers (2020-02-22T14:10:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.