Improved automated lesion segmentation in whole-body FDG/PET-CT via
Test-Time Augmentation
- URL: http://arxiv.org/abs/2210.07761v1
- Date: Fri, 14 Oct 2022 12:50:59 GMT
- Title: Improved automated lesion segmentation in whole-body FDG/PET-CT via
Test-Time Augmentation
- Authors: Sepideh Amiri, Bulat Ibragimov
- Abstract summary: Oncology indications have extensively quantified metabolically active tumors using positron emission tomography (PET) and computed tomography (CT).
In this study, we investigate the potential benefits of test-time augmentation for segmenting tumors from PET-CT pairs.
We train U-Net and Swin UNETR on the training database to determine how different test-time augmentations improve segmentation performance.
- Score: 5.206955554317389
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Numerous oncology indications have extensively quantified metabolically
active tumors using positron emission tomography (PET) and computed tomography
(CT). 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) is frequently
utilized in clinical practice and clinical drug research to detect and measure
metabolically active malignancies. The assessment of tumor burden using manual
or computer-assisted tumor segmentation in FDG-PET images is widespread. Deep
learning algorithms have also produced effective solutions in this area.
However, there may be a need to improve the performance of a pre-trained deep
learning network without the opportunity to modify the network itself. We
investigate the potential benefits of test-time augmentation for segmenting
tumors from PET-CT pairs. We apply a new framework of multilevel and multimodal
tumor segmentation techniques that can simultaneously consider PET and CT data.
In this study, we improve the network using a learnable composition of
test-time augmentations. We trained U-Net and Swin UNETR on the training
database to determine how different test-time augmentations improve
segmentation performance. We also developed an algorithm that finds an optimal
set of test-time augmentation contribution coefficients. Using the newly
trained U-Net and Swin UNETR results, we defined an optimal set of coefficients
for test-time augmentation and used them in combination with a pre-trained,
fixed nnU-Net. The ultimate idea is to improve performance at test time, when
the model is fixed. Averaging the predictions over the augmented data with
varying contribution ratios can improve prediction accuracy. Our code will be
available at https://github.com/sepidehamiri/pet_seg_unet.
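The core idea, averaging the fixed model's predictions over augmented test inputs with per-augmentation contribution coefficients, can be illustrated with a short sketch. This is a minimal illustration assuming a PyTorch segmentation model and self-inverse axis flips as the augmentations; the function name and coefficient values are hypothetical and not taken from the released code.

```python
import torch

def tta_predict(model, image, augmentations, inverses, weights):
    """Weighted test-time augmentation: run the fixed model on several augmented
    copies of the input, undo each augmentation on the prediction, and combine
    the results with per-augmentation contribution coefficients."""
    model.eval()
    preds = []
    with torch.no_grad():
        for aug, inv, w in zip(augmentations, inverses, weights):
            preds.append(w * inv(model(aug(image))))   # predict on augmented input, map back
    return torch.stack(preds).sum(dim=0)               # weighted combination of predictions

# Example augmentations for a 3D PET-CT volume of shape (B, C, D, H, W):
# the identity plus flips along each spatial axis. Flips are self-inverse,
# so the same callables can be used to undo them.
flips = [lambda x: x,
         lambda x: torch.flip(x, dims=[2]),
         lambda x: torch.flip(x, dims=[3]),
         lambda x: torch.flip(x, dims=[4])]
coefficients = [0.4, 0.2, 0.2, 0.2]   # hypothetical values; the paper learns an optimal set
# segmentation = tta_predict(net, pet_ct_volume, flips, flips, coefficients)
```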
Related papers
- From FDG to PSMA: A Hitchhiker's Guide to Multitracer, Multicenter Lesion Segmentation in PET/CT Imaging [0.9384264274298444]
We present our solution for the autoPET III challenge, targeting multitracer, multicenter generalization using the nnU-Net framework with the ResEncL architecture.
Key techniques include misalignment data augmentation and multi-modal pretraining across CT, MR, and PET datasets.
Compared to the default nnU-Net, which achieved a Dice score of 57.61, our model significantly improved performance with a Dice score of 68.40, alongside a reduction in false positive (FPvol: 7.82) and false negative (FNvol: 10.35) volumes.
arXiv Detail & Related papers (2024-09-14T16:39:17Z)
- AutoPET Challenge: Tumour Synthesis for Data Augmentation [26.236831356731017]
We adapt the DiffTumor method, originally designed for CT images, to generate synthetic PET-CT images with lesions.
Our approach trains the generative model on the AutoPET dataset and uses it to expand the training data.
Our findings show that the model trained on the augmented dataset achieves a higher Dice score, demonstrating the potential of our data augmentation approach.
arXiv Detail & Related papers (2024-09-12T14:23:19Z)
- Segmentation of Prostate Tumour Volumes from PET Images is a Different Ball Game [6.038532253968018]
Existing methods fail to account for the intensity-based scaling that physicians apply during manual annotation of tumour contours.
We implement a new custom feature-clipping normalisation technique.
Our results show that the U-Net models achieve much better performance when the PET scans are preprocessed with our novel clipping technique.
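As a rough illustration of intensity clipping before segmentation: the cited paper's exact clipping features and thresholds are not reproduced here, and `clip_max` below is a hypothetical parameter. A generic clip-and-rescale step might look like this:

```python
import numpy as np

def clip_normalise_pet(volume_suv, clip_max=20.0):
    """Clip PET SUV intensities at a fixed ceiling and rescale to [0, 1].
    Generic illustration only: the paper's custom feature-clipping
    normalisation is more involved, and clip_max is illustrative."""
    clipped = np.clip(volume_suv, 0.0, clip_max)
    return clipped / clip_max
```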
arXiv Detail & Related papers (2024-07-15T08:48:17Z)
- Contrastive Diffusion Model with Auxiliary Guidance for Coarse-to-Fine PET Reconstruction [62.29541106695824]
This paper presents a coarse-to-fine PET reconstruction framework that consists of a coarse prediction module (CPM) and an iterative refinement module (IRM).
By delegating most of the computational overhead to the CPM, the overall sampling speed of our method can be significantly improved.
Two additional strategies, i.e., an auxiliary guidance strategy and a contrastive diffusion strategy, are proposed and integrated into the reconstruction process.
arXiv Detail & Related papers (2023-08-20T04:10:36Z)
- ISA-Net: Improved spatial attention network for PET-CT tumor segmentation [22.48294544919023]
We propose a deep learning segmentation method based on multimodal positron emission tomography-computed tomography (PET-CT).
We design an improved spatial attention network (ISA-Net) to increase the accuracy of PET or CT in detecting tumors.
We validated the proposed ISA-Net method on two clinical datasets: a soft tissue sarcoma (STS) dataset and a head and neck tumor (HECKTOR) dataset.
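A spatial attention gate of this general kind re-weights feature maps with a learned spatial mask. The block below is a generic CBAM-style 3D sketch, not ISA-Net's actual module, which presumably differs in detail:

```python
import torch
import torch.nn as nn

class SpatialAttention3D(nn.Module):
    """Generic spatial attention gate for 3D feature maps (illustrative only)."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv3d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                        # x: (B, C, D, H, W)
        avg_map = x.mean(dim=1, keepdim=True)    # channel-wise average
        max_map = x.amax(dim=1, keepdim=True)    # channel-wise maximum
        mask = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * mask                          # re-weight features spatially
```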
arXiv Detail & Related papers (2022-11-04T04:15:13Z)
- Exploring Vanilla U-Net for Lesion Segmentation from Whole-body FDG-PET/CT Scans [16.93163630413171]
Since FDG-PET scans only provide metabolic information, healthy tissue or benign disease with irregular glucose consumption may be mistaken for cancer.
In this paper, we explore the potential of U-Net for lesion segmentation in whole-body FDG-PET/CT scans from three aspects, including network architecture, data preprocessing, and data augmentation.
Our method achieves first place in both the preliminary and final leaderboards of the autoPET 2022 challenge.
arXiv Detail & Related papers (2022-10-14T03:37:18Z)
- DLTTA: Dynamic Learning Rate for Test-time Adaptation on Cross-domain Medical Images [56.72015587067494]
We propose a novel dynamic learning rate adjustment method for test-time adaptation, called DLTTA.
Our method achieves effective and fast test-time adaptation with consistent performance improvement over current state-of-the-art test-time adaptation methods.
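To make the idea concrete, here is a simplified sketch of one test-time adaptation step in which the learning rate is scaled per sample by prediction uncertainty; this is an assumption-laden illustration of dynamic learning-rate adjustment, not the DLTTA update rule itself:

```python
import torch

def adapt_on_test_sample(model, optimiser, image, base_lr=1e-3):
    """One entropy-minimisation adaptation step with a per-sample learning rate.
    The uncertainty-based scaling below is a hypothetical heuristic used only
    to illustrate dynamic learning-rate adjustment at test time."""
    logits = model(image)
    probs = torch.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()

    for group in optimiser.param_groups:          # scale the step size for this sample
        group["lr"] = base_lr * float(entropy.detach())

    optimiser.zero_grad()
    entropy.backward()
    optimiser.step()
    return entropy.item()
```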
arXiv Detail & Related papers (2022-05-27T02:34:32Z)
- EMT-NET: Efficient multitask network for computer-aided diagnosis of breast cancer [58.720142291102135]
We propose an efficient and lightweight learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively.
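The shared-backbone multitask idea can be summarised as a weighted sum of a segmentation loss and a classification loss; the sketch below is a generic formulation with assumed binary targets, not EMT-NET's exact objective:

```python
import torch.nn as nn

class MultiTaskLoss(nn.Module):
    """Joint tumor segmentation + classification objective (illustrative weighting)."""
    def __init__(self, seg_weight=1.0, cls_weight=1.0):
        super().__init__()
        self.seg_loss = nn.BCEWithLogitsLoss()   # pixel-wise tumor mask loss
        self.cls_loss = nn.BCEWithLogitsLoss()   # benign/malignant label loss
        self.seg_weight = seg_weight
        self.cls_weight = cls_weight

    def forward(self, seg_logits, seg_target, cls_logits, cls_target):
        return (self.seg_weight * self.seg_loss(seg_logits, seg_target)
                + self.cls_weight * self.cls_loss(cls_logits, cls_target))
```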
arXiv Detail & Related papers (2022-01-13T05:24:40Z)
- Medulloblastoma Tumor Classification using Deep Transfer Learning with Multi-Scale EfficientNets [63.62764375279861]
We propose an end-to-end MB tumor classification approach and explore transfer learning with various input sizes and matching network dimensions.
Using a data set with 161 cases, we demonstrate that pre-trained EfficientNets with larger input resolutions lead to significant performance improvements.
arXiv Detail & Related papers (2021-09-10T13:07:11Z)
- CovidDeep: SARS-CoV-2/COVID-19 Test Based on Wearable Medical Sensors and Efficient Neural Networks [51.589769497681175]
The novel coronavirus (SARS-CoV-2) has led to a pandemic.
The current testing regime based on Reverse Transcription-Polymerase Chain Reaction for SARS-CoV-2 has been unable to keep up with testing demands.
We propose a framework called CovidDeep that combines efficient DNNs with commercially available WMSs for pervasive testing of the virus.
arXiv Detail & Related papers (2020-07-20T21:47:28Z)
- Automatic Data Augmentation via Deep Reinforcement Learning for Effective Kidney Tumor Segmentation [57.78765460295249]
We develop a novel automatic learning-based data augmentation method for medical image segmentation.
In our method, we innovatively combine the data augmentation module and the subsequent segmentation module in an end-to-end training manner with a consistent loss.
We extensively evaluated our method on CT kidney tumor segmentation, and the results validate its promise.
arXiv Detail & Related papers (2020-02-22T14:10:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.