AutoPET Challenge: Tumour Synthesis for Data Augmentation
- URL: http://arxiv.org/abs/2409.08068v1
- Date: Thu, 12 Sep 2024 14:23:19 GMT
- Title: AutoPET Challenge: Tumour Synthesis for Data Augmentation
- Authors: Lap Yan Lennon Chan, Chenxin Li, Yixuan Yuan
- Abstract summary: We adapt the DiffTumor method, originally designed for CT images, to generate synthetic PET-CT images with lesions.
Our approach trains the generative model on the AutoPET dataset and uses it to expand the training data.
Our findings show that the model trained on the augmented dataset achieves a higher Dice score, demonstrating the potential of our data augmentation approach.
- Score: 26.236831356731017
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate lesion segmentation in whole-body PET/CT scans is crucial for cancer diagnosis and treatment planning, but limited datasets often hinder the performance of automated segmentation models. In this paper, we explore the potential of leveraging the deep prior from a generative model to serve as a data augmenter for automated lesion segmentation in PET/CT scans. We adapt the DiffTumor method, originally designed for CT images, to generate synthetic PET-CT images with lesions. Our approach trains the generative model on the AutoPET dataset and uses it to expand the training data. We then compare the performance of segmentation models trained on the original and augmented datasets. Our findings show that the model trained on the augmented dataset achieves a higher Dice score, demonstrating the potential of our data augmentation approach. In a nutshell, this work presents a promising direction for improving lesion segmentation in whole-body PET/CT scans with limited datasets, potentially enhancing the accuracy and reliability of cancer diagnostics.
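As a rough illustration of the augmentation pipeline described in the abstract, the sketch below pools real annotated cases with synthetically generated lesion cases before segmentation training. The `generate_synthetic_case` function is a hypothetical stand-in for the adapted DiffTumor generator (not reproduced here); the volume shapes and the crude spherical "lesion" are purely illustrative assumptions.

```python
import numpy as np

def generate_synthetic_case(healthy_petct, rng):
    """Hypothetical stand-in for the adapted DiffTumor generator: return a
    PET/CT pair with an inserted lesion plus the matching lesion mask.
    Here the 'lesion' is just a brightened spherical region in the PET channel."""
    pet, ct = healthy_petct
    pet = pet.copy()
    mask = np.zeros(pet.shape, dtype=np.uint8)
    center = rng.integers(8, np.array(pet.shape) - 8)   # keep the sphere inside the volume
    zz, yy, xx = np.ogrid[:pet.shape[0], :pet.shape[1], :pet.shape[2]]
    sphere = ((zz - center[0]) ** 2 + (yy - center[1]) ** 2 + (xx - center[2]) ** 2) <= 5 ** 2
    pet[sphere] += 4.0                                   # crude extra tracer uptake
    mask[sphere] = 1
    return (pet, ct), mask

def build_augmented_dataset(real_cases, healthy_cases, n_synth, seed=0):
    """Pool real annotated cases with n_synth synthetic lesion cases."""
    rng = np.random.default_rng(seed)
    synth = [generate_synthetic_case(healthy_cases[rng.integers(len(healthy_cases))], rng)
             for _ in range(n_synth)]
    return list(real_cases) + synth

# Toy usage: 64^3 volumes standing in for whole-body PET/CT cases.
rng = np.random.default_rng(0)
healthy = [(rng.random((64, 64, 64)), rng.random((64, 64, 64))) for _ in range(3)]
real = [((rng.random((64, 64, 64)), rng.random((64, 64, 64))),
         (rng.random((64, 64, 64)) > 0.99).astype(np.uint8)) for _ in range(2)]
augmented = build_augmented_dataset(real, healthy, n_synth=4)
print(len(augmented), "training cases after augmentation")
```
In practice the synthetic cases would come from the trained generative model and be fed to the same segmentation training loop as the original data, so that the Dice comparison in the abstract is between models that differ only in their training set.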
Related papers
- Data Diet: Can Trimming PET/CT Datasets Enhance Lesion Segmentation? [68.51625420337575]
We describe our approach to compete in the autoPET3 datacentric track.
We find that a model trained on the entire autoPET III dataset exhibits undesirable characteristics.
We counteract this by removing the easiest samples from the training dataset, as measured by the model loss, before retraining from scratch (a toy sketch of this trimming step follows this entry).
arXiv Detail & Related papers (2024-09-20T14:47:58Z)
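A minimal sketch of the loss-based trimming described in the Data Diet entry above, assuming a per-case loss from a model trained on the full dataset is available; the `per_sample_loss` callable and the drop fraction are illustrative assumptions, not values from the paper.

```python
import numpy as np

def trim_easiest(cases, per_sample_loss, drop_fraction=0.1):
    """Keep the hardest (1 - drop_fraction) of the training cases,
    ranked by the loss a previously trained model assigns to each case."""
    losses = np.asarray([per_sample_loss(c) for c in cases])
    n_drop = int(len(cases) * drop_fraction)
    keep_idx = np.argsort(losses)[n_drop:]       # drop the lowest-loss (easiest) samples
    return [cases[i] for i in keep_idx]

# Toy usage with a fake loss function; in practice this would be the
# segmentation loss of the model trained on the entire dataset.
rng = np.random.default_rng(0)
cases = list(range(100))
fake_loss = lambda case: rng.random()
trimmed = trim_easiest(cases, fake_loss, drop_fraction=0.2)
print(len(trimmed), "cases kept for retraining from scratch")
```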
- AutoPET III Challenge: Tumor Lesion Segmentation using ResEnc-Model Ensemble [1.3467243219009812]
We trained a 3D residual-encoder U-Net within the nnU-Net ("no new U-Net") framework to generalize automatic lesion segmentation performance.
We leveraged test-time augmentations and other post-processing techniques to enhance tumor lesion segmentation.
Our team currently holds the top position in the autoPET III challenge and outperformed the challenge baseline model on the preliminary test set with a Dice score of 0.9627.
arXiv Detail & Related papers (2024-09-19T20:18:39Z)
- Autopet III challenge: Incorporating anatomical knowledge into nnUNet for lesion segmentation in PET/CT [4.376648893167674]
The autoPET III Challenge focuses on advancing automated segmentation of tumor lesions in PET/CT images.
We developed a classifier that identifies the tracer of a given PET/CT scan from the Maximum Intensity Projection of the PET scan (a toy version is sketched after this entry).
Our final submission achieves cross-validation Dice scores of 76.90% and 61.33% for the publicly available FDG and PSMA datasets.
arXiv Detail & Related papers (2024-09-18T17:16:57Z)
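A toy sketch of the tracer-identification step described in the entry above: compute a Maximum Intensity Projection of the PET volume and feed it to a classifier. The paper's actual classifier is not specified here, so a logistic regression on the flattened MIP stands in for it; the shapes and labels are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pet_mip(volume, axis=1):
    """Maximum Intensity Projection of a PET volume along one axis
    (a coronal-style projection for axis=1 in a (z, y, x) volume)."""
    return volume.max(axis=axis)

# Toy data: 3D PET volumes labelled 0 = FDG, 1 = PSMA (labels are illustrative).
rng = np.random.default_rng(0)
volumes = rng.random((20, 32, 32, 32))
labels = rng.integers(0, 2, size=20)

# Features: the flattened MIP of each volume.
features = np.stack([pet_mip(v).ravel() for v in volumes])
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```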
- Enhancing Lesion Segmentation in PET/CT Imaging with Deep Learning and Advanced Data Preprocessing Techniques [2.4549652987344546]
This research employs deep learning to enhance lesion segmentation in PET/CT imaging.
Our methodical approach includes robust preprocessing and data augmentation techniques to ensure model robustness and generalizability.
This study aims to contribute to the standardization of preprocessing and augmentation strategies in PET/CT imaging.
arXiv Detail & Related papers (2024-09-15T16:27:34Z)
- Deep Learning-Based Segmentation of Tumors in PET/CT Volumes: Benchmark of Different Architectures and Training Strategies [0.12301374769426145]
This study examines various neural network architectures and training strategies for automatic segmentation of cancer lesions.
V-Net and nnU-Net models were the most effective for their respective datasets.
Eliminating cancer-free cases from the AutoPET dataset was found to improve the performance of most models.
arXiv Detail & Related papers (2024-04-15T13:03:42Z)
- Contrastive Diffusion Model with Auxiliary Guidance for Coarse-to-Fine PET Reconstruction [62.29541106695824]
This paper presents a coarse-to-fine PET reconstruction framework that consists of a coarse prediction module (CPM) and an iterative refinement module (IRM).
By delegating most of the computational overhead to the CPM, the overall sampling speed of our method can be significantly improved.
Two additional strategies, i.e., an auxiliary guidance strategy and a contrastive diffusion strategy, are proposed and integrated into the reconstruction process.
arXiv Detail & Related papers (2023-08-20T04:10:36Z)
- Improved automated lesion segmentation in whole-body FDG/PET-CT via Test-Time Augmentation [5.206955554317389]
Metabolically active tumors are extensively quantified in oncology using positron emission tomography (PET) and computed tomography (CT).
In this study, we investigate the potential benefits of test-time augmentation for segmenting tumors from PET-CT pairings.
We train U-Net and Swin UNETR on the training database to determine how different test-time augmentations improve segmentation performance (a minimal TTA example follows this entry).
arXiv Detail & Related papers (2022-10-14T12:50:59Z)
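A minimal sketch of the test-time augmentation idea from the entry above: run the segmentation model on flipped copies of the input and average the un-flipped predictions. The `fake_model` below is a stand-in for a trained U-Net or Swin UNETR; the flip set and threshold are illustrative choices, not those of the paper.

```python
import numpy as np

def predict_with_tta(model, volume, flip_axes=((0,), (1,), (2,))):
    """Average a segmentation model's probability map over flipped copies
    of the input, un-flipping each prediction before averaging."""
    preds = [model(volume)]
    for axes in flip_axes:
        flipped = np.flip(volume, axis=axes)
        pred = model(flipped)
        preds.append(np.flip(pred, axis=axes))   # map back to the original orientation
    return np.mean(preds, axis=0)

# Toy usage: a fake "model" that returns a probability per voxel.
rng = np.random.default_rng(0)
fake_model = lambda x: 1.0 / (1.0 + np.exp(-(x - 0.5)))   # voxel-wise sigmoid
volume = rng.random((16, 16, 16))
prob = predict_with_tta(fake_model, volume)
mask = (prob > 0.5).astype(np.uint8)
print("predicted foreground fraction:", mask.mean())
```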
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- Self-Training with Improved Regularization for Sample-Efficient Chest X-Ray Classification [80.00316465793702]
We present a deep learning framework that enables robust modeling in challenging scenarios.
Our results show that, using 85% less labeled data, we can build predictive models that match the performance of classifiers trained in a large-scale data setting (a generic self-training loop is sketched after this entry).
arXiv Detail & Related papers (2020-05-03T02:36:00Z)
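A generic self-training loop, sketched to illustrate the pseudo-labeling idea behind the entry above; the paper's improved regularization is not reproduced, and the logistic-regression model, confidence threshold, and toy features are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(x_labeled, y_labeled, x_unlabeled, confidence=0.9, rounds=3):
    """Minimal self-training loop: repeatedly pseudo-label confident unlabeled
    samples and retrain on the enlarged labeled set."""
    x, y = x_labeled.copy(), y_labeled.copy()
    clf = LogisticRegression(max_iter=1000).fit(x, y)
    for _ in range(rounds):
        if len(x_unlabeled) == 0:
            break
        proba = clf.predict_proba(x_unlabeled)
        confident = proba.max(axis=1) >= confidence
        if not confident.any():
            break
        x = np.vstack([x, x_unlabeled[confident]])
        y = np.concatenate([y, clf.classes_[proba[confident].argmax(axis=1)]])
        x_unlabeled = x_unlabeled[~confident]
        clf = LogisticRegression(max_iter=1000).fit(x, y)
    return clf

# Toy usage on random features standing in for image representations.
rng = np.random.default_rng(0)
x_l, y_l = rng.random((30, 8)), rng.integers(0, 2, 30)
x_u = rng.random((200, 8))
model = self_train(x_l, y_l, x_u)
print("classes learned:", model.classes_)
```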
- Inf-Net: Automatic COVID-19 Lung Infection Segmentation from CT Images [152.34988415258988]
Automated detection of lung infections from computed tomography (CT) images offers great potential to augment the traditional healthcare strategy for tackling COVID-19.
Segmenting infected regions from CT slices faces several challenges, including high variation in infection characteristics and low intensity contrast between infections and normal tissues.
To address these challenges, a novel COVID-19 Deep Lung Infection Network (Inf-Net) is proposed to automatically identify infected regions from chest CT slices.
arXiv Detail & Related papers (2020-04-22T07:30:56Z)
- Automatic Data Augmentation via Deep Reinforcement Learning for Effective Kidney Tumor Segmentation [57.78765460295249]
We develop a novel automatic learning-based data augmentation method for medical image segmentation.
In our method, we innovatively combine the data augmentation module and the subsequent segmentation module in an end-to-end training manner with a consistent loss.
We extensively evaluated our method on CT kidney tumor segmentation, which validated its promising results.
arXiv Detail & Related papers (2020-02-22T14:10:13Z)
This list is automatically generated from the titles and abstracts of the papers indexed on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.