AutoPET III Challenge: Tumor Lesion Segmentation using ResEnc-Model Ensemble
- URL: http://arxiv.org/abs/2409.13779v1
- Date: Thu, 19 Sep 2024 20:18:39 GMT
- Title: AutoPET III Challenge: Tumor Lesion Segmentation using ResEnc-Model Ensemble
- Authors: Tanya Chutani, Saikiran Bonthu, Pranab Samanta, Nitin Singhal
- Abstract summary: We trained a 3D Residual Encoder U-Net within the no-new-U-Net (nnU-Net) framework to generalize the performance of automatic lesion segmentation.
We leveraged test-time augmentations and other post-processing techniques to enhance tumor lesion segmentation.
Our team currently holds the top position in the autoPET III challenge and outperformed the challenge baseline model on the preliminary test set with a Dice score of 0.9627.
- Score: 1.3467243219009812
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Positron Emission Tomography (PET)/Computed Tomography (CT) is crucial for diagnosing, managing, and planning treatment for various cancers. Developing reliable deep learning models for the segmentation of tumor lesions in PET/CT scans in a multi-tracer, multicenter environment is a critical area of research. Different tracers, such as Fluorodeoxyglucose (FDG) and Prostate-Specific Membrane Antigen (PSMA), have distinct physiological uptake patterns, and data from different centers often vary in acquisition protocols, scanner types, and patient populations. This variability makes it difficult to design segmentation algorithms that generalize across differences in image quality and lesion detectability. To address this challenge, we trained a 3D Residual Encoder U-Net within the no-new-U-Net (nnU-Net) framework, aiming to generalize automatic lesion segmentation of whole-body PET/CT scans across different tracers and clinical sites. Further, we explored several preprocessing techniques and ultimately settled on using TotalSegmentator to crop our training data, applying resampling during this process. During inference, we leveraged test-time augmentations and other post-processing techniques to enhance tumor lesion segmentation. Our team currently holds the top position in the autoPET III challenge and outperformed the challenge baseline model on the preliminary test set with a Dice score of 0.9627.
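As a concrete illustration of the test-time augmentation step mentioned in the abstract, the sketch below averages predictions over mirror flips of a 3D volume, which is the common default in the nnU-Net framework; the `model` callable and the eight-way mirroring are assumptions for illustration, since the abstract does not specify the exact TTA configuration.

```python
import itertools
import torch

@torch.no_grad()
def tta_predict(model: torch.nn.Module, volume: torch.Tensor) -> torch.Tensor:
    """Average softmax predictions over all eight flip combinations
    along the spatial axes of a (B, C, D, H, W) volume."""
    model.eval()
    spatial_axes = (2, 3, 4)  # depth, height, width
    preds = []
    for r in range(len(spatial_axes) + 1):
        for axes in itertools.combinations(spatial_axes, r):
            flipped = torch.flip(volume, dims=axes) if axes else volume
            logits = model(flipped)
            if axes:  # undo the flip so predictions align with the input
                logits = torch.flip(logits, dims=axes)
            preds.append(torch.softmax(logits, dim=1))
    return torch.stack(preds).mean(dim=0)
```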
Related papers
- Sine Wave Normalization for Deep Learning-Based Tumor Segmentation in CT/PET Imaging [2.482413309706322]
This report presents a normalization block for automated tumor segmentation in CT/PET scans, developed for the autoPET III Challenge.
The key innovation is the SineNormal block, which applies periodic sine transformations to PET data to enhance lesion detection (a hedged sketch follows this entry).
arXiv Detail & Related papers (2024-09-20T11:20:11Z)
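The summary above names the SineNormal block but not its exact formulation. Below is a minimal sketch of one plausible reading, a fixed multi-frequency sine expansion of the PET channel; the frequency set, the non-learnable form, and the channel-concatenation design are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class SineNormal(nn.Module):
    """Hypothetical multi-frequency sine expansion of the PET channel."""

    def __init__(self, frequencies=(1.0, 2.0, 4.0)):
        super().__init__()
        # Fixed frequencies here; the original block may instead learn them.
        self.register_buffer("freqs", torch.tensor(frequencies))

    def forward(self, pet: torch.Tensor) -> torch.Tensor:
        # pet: (B, 1, D, H, W) normalized tracer-uptake volume; one extra
        # channel per frequency is concatenated onto the raw input.
        waves = [torch.sin(f * torch.pi * pet) for f in self.freqs]
        return torch.cat([pet] + waves, dim=1)
```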
- Autopet III challenge: Incorporating anatomical knowledge into nnUNet for lesion segmentation in PET/CT [4.376648893167674]
The autoPET III Challenge focuses on advancing automated segmentation of tumor lesions in PET/CT images.
We developed a classifier that identifies the tracer of a given PET/CT study from the Maximum Intensity Projection (MIP) of the PET scan (sketched after this entry).
Our final submission achieves cross-validation Dice scores of 76.90% and 61.33% for the publicly available FDG and PSMA datasets, respectively.
arXiv Detail & Related papers (2024-09-18T17:16:57Z)
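The tracer classifier described above takes a Maximum Intensity Projection (MIP) of the PET volume as input. A minimal sketch of the projection step follows; the projection axis and the downstream 2D classifier are assumptions, as the summary does not specify them.

```python
import numpy as np

def maximum_intensity_projection(pet: np.ndarray, axis: int = 1) -> np.ndarray:
    """Collapse a (D, H, W) PET volume into a 2D MIP along one axis."""
    return pet.max(axis=axis)

# Usage: a 2D network (not specified in the summary) would then classify
# the projection, e.g. tracer_logits = classifier(mip[None, None, ...]).
```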
- From FDG to PSMA: A Hitchhiker's Guide to Multitracer, Multicenter Lesion Segmentation in PET/CT Imaging [0.9384264274298444]
We present our solution for the autoPET III challenge, targeting multitracer, multicenter generalization using the nnU-Net framework with the ResEncL architecture.
Key techniques include misalignment data augmentation (sketched after this entry) and multi-modal pretraining across CT, MR, and PET datasets.
Compared to the default nnU-Net, which achieved a Dice score of 57.61, our model significantly improved performance with a Dice score of 68.40, alongside a reduction in false positive (FPvol: 7.82) and false negative (FNvol: 10.35) volumes.
arXiv Detail & Related papers (2024-09-14T16:39:17Z)
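The misalignment data augmentation named above can be sketched as a random translation of the PET channel relative to a fixed CT, simulating inter-modality registration error; the shift range, interpolation order, and padding mode below are assumptions, since the summary gives no parameters.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def misalign_pet(ct: np.ndarray, pet: np.ndarray,
                 max_shift_vox: float = 3.0,
                 rng: np.random.Generator = None):
    """Randomly translate the PET volume relative to a fixed CT."""
    rng = rng or np.random.default_rng()
    offset = rng.uniform(-max_shift_vox, max_shift_vox, size=3)
    # Linear interpolation; edge voxels are padded with their neighbours.
    pet_shifted = nd_shift(pet, shift=offset, order=1, mode="nearest")
    return ct, pet_shifted
```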
- AutoPET Challenge: Tumour Synthesis for Data Augmentation [26.236831356731017]
We adapt the DiffTumor method, originally designed for CT images, to generate synthetic PET-CT images with lesions.
Our approach trains the generative model on the AutoPET dataset and uses it to expand the training data.
Our findings show that the model trained on the augmented dataset achieves a higher Dice score, demonstrating the potential of our data augmentation approach.
arXiv Detail & Related papers (2024-09-12T14:23:19Z)
- QUBIQ: Uncertainty Quantification for Biomedical Image Segmentation Challenge [93.61262892578067]
Uncertainty in medical image segmentation tasks, especially inter-rater variability, presents a significant challenge.
This variability directly impacts the development and evaluation of automated segmentation algorithms.
We report the set-up and summarize the benchmark results of the Quantification of Uncertainties in Biomedical Image Quantification Challenge (QUBIQ).
arXiv Detail & Related papers (2024-03-19T17:57:24Z)
- Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z)
- Whole-Body Lesion Segmentation in 18F-FDG PET/CT [11.662584140924725]
The proposed model is designed on the basis of the joint 2D and 3D nnUNET architecture to predict lesions across the whole body.
We evaluate the proposed method in the context of the AutoPET Challenge, which measures lesion segmentation performance in terms of Dice score, false-positive volume, and false-negative volume (a sketch of these metrics follows this entry).
arXiv Detail & Related papers (2022-09-16T10:49:53Z)
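The Dice score and false-positive / false-negative volumes used by the entry above (and by the autoPET challenges generally) can be sketched as below. Note this is a simplified, voxel-wise reading: the official challenge metrics compute FP and FN volumes per connected component, which is omitted here for brevity.

```python
import numpy as np

def dice_fpvol_fnvol(pred: np.ndarray, gt: np.ndarray, voxel_volume_ml: float):
    """Dice plus false-positive / false-negative volumes in millilitres."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    dice = 2.0 * intersection / denom if denom else 1.0
    fp_vol = np.logical_and(pred, ~gt).sum() * voxel_volume_ml
    fn_vol = np.logical_and(~pred, gt).sum() * voxel_volume_ml
    return dice, fp_vol, fn_vol
```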
- A unified 3D framework for Organs at Risk Localization and Segmentation for Radiation Therapy Planning [56.52933974838905]
Current medical workflows require manual delineation of organs-at-risk (OARs).
In this work, we aim to introduce a unified 3D pipeline for OAR localization-segmentation.
Our proposed framework fully enables the exploitation of 3D context information inherent in medical imaging.
arXiv Detail & Related papers (2022-03-01T17:08:41Z)
- Inf-Net: Automatic COVID-19 Lung Infection Segmentation from CT Images [152.34988415258988]
Automated detection of lung infections from computed tomography (CT) images offers a great potential to augment the traditional healthcare strategy for tackling COVID-19.
However, segmenting infected regions from CT slices faces several challenges, including high variation in infection characteristics and low intensity contrast between infections and normal tissues.
To address these challenges, a novel COVID-19 Deep Lung Infection Network (Inf-Net) is proposed to automatically identify infected regions from chest CT slices.
arXiv Detail & Related papers (2020-04-22T07:30:56Z)
- Spatio-spectral deep learning methods for in-vivo hyperspectral laryngeal cancer detection [49.32653090178743]
Early detection of head and neck tumors is crucial for patient survival.
Hyperspectral imaging (HSI) can be used for non-invasive detection of head and neck tumors.
We present multiple deep learning techniques for in-vivo laryngeal cancer detection based on HSI.
arXiv Detail & Related papers (2020-04-21T17:07:18Z)
- VerSe: A Vertebrae Labelling and Segmentation Benchmark for Multi-detector CT Images [121.31355003451152]
The Large Scale Vertebrae Segmentation Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020.
We present the results of this evaluation and further investigate the performance variation at the vertebra level, the scan level, and across different fields of view.
arXiv Detail & Related papers (2020-01-24T21:09:18Z)