A slice classification neural network for automated classification of
axial PET/CT slices from a multi-centric lymphoma dataset
- URL: http://arxiv.org/abs/2403.07105v1
- Date: Mon, 11 Mar 2024 18:57:45 GMT
- Title: A slice classification neural network for automated classification of
axial PET/CT slices from a multi-centric lymphoma dataset
- Authors: Shadab Ahamed, Yixi Xu, Ingrid Bloise, Joo H. O, Carlos F. Uribe,
Rahul Dodhia, Juan L. Ferres, and Arman Rahmim
- Abstract summary: We train a ResNet-18 network to classify axial slices of lymphoma PET/CT images.
Model performances were compared using the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC).
We observe and describe a performance overestimation under slice-level split training as compared to patient-level split training.
- Score: 1.0318017891096118
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automated slice classification is clinically relevant since it can be
incorporated into medical image segmentation workflows as a preprocessing step
that would flag slices with a higher probability of containing tumors, thereby
directing physicians' attention to the important slices. In this work, we train
a ResNet-18 network to classify axial slices of lymphoma PET/CT images
(collected from two institutions) depending on whether the slice intercepted a
tumor (positive slice) in the 3D image or if the slice did not (negative
slice). Various instances of the network were trained on 2D axial datasets
created in different ways: (i) slice-level split and (ii) patient-level split;
inputs of different types were used: (i) only PET slices and (ii) concatenated
PET and CT slices; and different training strategies were employed: (i)
center-aware (CAW) and (ii) center-agnostic (CAG). Model performances were
compared using the area under the receiver operating characteristic curve
(AUROC) and the area under the precision-recall curve (AUPRC), and various
binary classification metrics. We observe and describe a performance
overestimation under slice-level split training as compared to
patient-level split training. The model trained on patient-level split data
with a PET-only network input in the CAG training regime was the
best-performing and best-generalizing model on a majority of metrics. The
models were additionally compared more closely using the sensitivity metric
on the positive slices from their respective test sets.
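The gap between the two split strategies comes from data leakage: under a slice-level split, adjacent (highly correlated) slices from the same patient can land in both the training and test sets, inflating test metrics. A minimal sketch of a leakage-free patient-level split; the function name and toy data are illustrative, not the authors' pipeline:

```python
import random

def patient_level_split(slices, test_frac=0.2, seed=0):
    """Split (patient_id, slice) pairs so that all slices from any one
    patient land on the same side of the split -- unlike a naive
    slice-level split, which shuffles individual slices."""
    patients = sorted({pid for pid, _ in slices})
    rng = random.Random(seed)
    rng.shuffle(patients)
    n_test = max(1, int(len(patients) * test_frac))
    test_ids = set(patients[:n_test])
    train = [s for s in slices if s[0] not in test_ids]
    test = [s for s in slices if s[0] in test_ids]
    return train, test

# Toy dataset: 3 patients, 4 axial slices each (slice payloads elided).
data = [(pid, f"slice{k}") for pid in ("p1", "p2", "p3") for k in range(4)]
train, test = patient_level_split(data, test_frac=0.34)
# No patient contributes slices to both sides of the split.
assert {p for p, _ in train}.isdisjoint({p for p, _ in test})
```

Group-aware splitters (e.g. scikit-learn's `GroupShuffleSplit` with patient IDs as groups) implement the same idea for real pipelines.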
Related papers
- Autopet III challenge: Incorporating anatomical knowledge into nnUNet for lesion segmentation in PET/CT [4.376648893167674]
The autoPET III Challenge focuses on advancing automated segmentation of tumor lesions in PET/CT images.
We developed a classifier that identifies the tracer of the given PET/CT based on the Maximum Intensity Projection of the PET scan.
Our final submission achieves cross-validation Dice scores of 76.90% and 61.33% for the publicly available FDG and PSMA datasets.
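The MIP-based tracer identification described above reduces each 3D PET volume to a single 2D image before classification. A minimal sketch of the projection step, assuming a `(z, y, x)` volume layout; the challenge entry's actual projection axis and preprocessing are not specified here:

```python
import numpy as np

def axial_mip(volume):
    """Maximum Intensity Projection along the first (z) axis of a
    (z, y, x) PET volume: each output pixel keeps the brightest voxel
    seen along that projection ray."""
    return volume.max(axis=0)

# Toy volume: a single bright voxel should survive the projection.
vol = np.zeros((8, 4, 4))
vol[3, 1, 2] = 5.0
mip = axial_mip(vol)
```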
arXiv Detail & Related papers (2024-09-18T17:16:57Z)
- From FDG to PSMA: A Hitchhiker's Guide to Multitracer, Multicenter Lesion Segmentation in PET/CT Imaging [0.9384264274298444]
We present our solution for the autoPET III challenge, targeting multitracer, multicenter generalization using the nnU-Net framework with the ResEncL architecture.
Key techniques include misalignment data augmentation and multi-modal pretraining across CT, MR, and PET datasets.
Compared to the default nnU-Net, which achieved a Dice score of 57.61, our model significantly improved performance with a Dice score of 68.40, alongside a reduction in false positive (FPvol: 7.82) and false negative (FNvol: 10.35) volumes.
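Dice scores and false-positive/false-negative volumes like those reported above can be computed directly from binary masks. A minimal sketch; the challenge's official evaluation code may differ, e.g. in per-voxel volume handling:

```python
import numpy as np

def dice_fp_fn(pred, gt, voxel_volume_ml=1.0):
    """Dice coefficient plus false-positive / false-negative volumes
    for a pair of binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())  # assumes non-empty masks
    fp_vol = np.logical_and(pred, ~gt).sum() * voxel_volume_ml
    fn_vol = np.logical_and(~pred, gt).sum() * voxel_volume_ml
    return dice, fp_vol, fn_vol

pred = np.array([[1, 1, 0, 0]])
gt   = np.array([[1, 0, 1, 0]])
dice, fp, fn = dice_fp_fn(pred, gt)
# One overlapping voxel: Dice = 2*1 / (2 + 2) = 0.5; one FP, one FN voxel.
```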
arXiv Detail & Related papers (2024-09-14T16:39:17Z)
- Attention-based CT Scan Interpolation for Lesion Segmentation of Colorectal Liver Metastases [2.680862925538592]
Small liver lesions common to colorectal liver metastases (CRLMs) are challenging for convolutional neural network (CNN) segmentation models.
We propose an unsupervised attention-based model to generate intermediate slices from consecutive triplet slices in CT scans.
Our model's outputs are consistent with the original input slices while increasing the segmentation performance in two cutting-edge 3D segmentation pipelines.
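Generating an intermediate slice from its neighbours can be illustrated with the trivial linear-interpolation baseline that an attention-based interpolator must outperform; this is only a reference point, not the paper's model:

```python
import numpy as np

def midslice_baseline(prev_slice, next_slice):
    """Predict the slice between two consecutive axial CT slices as
    their voxel-wise average -- the naive interpolation baseline."""
    return 0.5 * (prev_slice + next_slice)

prev_s = np.zeros((2, 2))
next_s = np.full((2, 2), 2.0)
pred = midslice_baseline(prev_s, next_s)  # every voxel averages to 1.0
```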
arXiv Detail & Related papers (2023-08-30T10:21:57Z)
- Dual Multi-scale Mean Teacher Network for Semi-supervised Infection Segmentation in Chest CT Volume for COVID-19 [76.51091445670596]
Automatically detecting lung infections from computed tomography (CT) data plays an important role in combating COVID-19.
Most current COVID-19 infection segmentation methods mainly rely on 2D CT images, which lack 3D sequential constraints.
Existing 3D CT segmentation methods focus on single-scale representations, which do not capture multiple receptive-field sizes over the 3D volume.
arXiv Detail & Related papers (2022-11-10T13:11:21Z)
- Slice-by-slice deep learning aided oropharyngeal cancer segmentation with adaptive thresholding for spatial uncertainty on FDG PET and CT images [0.0]
Tumor segmentation is a fundamental step for radiotherapy treatment planning.
This study proposes a novel automatic deep learning (DL) model to assist radiation oncologists in a slice-by-slice GTVp segmentation.
arXiv Detail & Related papers (2022-07-04T15:17:44Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Multi-Scale Input Strategies for Medulloblastoma Tumor Classification using Deep Transfer Learning [59.30734371401316]
Medulloblastoma is the most common malignant brain cancer among children.
CNN has shown promising results for MB subtype classification.
We study the impact of tile size and input strategy.
arXiv Detail & Related papers (2021-09-14T09:42:37Z)
- Rethinking the Extraction and Interaction of Multi-Scale Features for Vessel Segmentation [53.187152856583396]
We propose a novel deep learning model called PC-Net to segment retinal vessels and major arteries in 2D fundus image and 3D computed tomography angiography (CTA) scans.
In PC-Net, the pyramid squeeze-and-excitation (PSE) module introduces spatial information to each convolutional block, boosting its ability to extract more effective multi-scale features.
arXiv Detail & Related papers (2020-10-09T08:22:54Z)
- Deep Q-Network-Driven Catheter Segmentation in 3D US by Hybrid Constrained Semi-Supervised Learning and Dual-UNet [74.22397862400177]
We propose a novel catheter segmentation approach, which requests fewer annotations than the supervised learning method.
Our scheme considers a deep Q learning as the pre-localization step, which avoids voxel-level annotation.
With the detected catheter, patch-based Dual-UNet is applied to segment the catheter in 3D volumetric data.
arXiv Detail & Related papers (2020-06-25T21:10:04Z)
- Fully-automated deep learning slice-based muscle estimation from CT images for sarcopenia assessment [0.10499611180329801]
This retrospective study was conducted using a collection of public and privately available CT images.
The method consisted of two stages: slice detection from a CT volume and single-slice CT segmentation.
The output consisted of a segmented muscle mass on a CT slice at the level of L3 vertebra.
arXiv Detail & Related papers (2020-06-10T12:05:55Z) - 3D medical image segmentation with labeled and unlabeled data using
autoencoders at the example of liver segmentation in CT images [58.720142291102135]
This work investigates the potential of autoencoder-extracted features to improve segmentation with a convolutional neural network.
A convolutional autoencoder was used to extract features from unlabeled data and a multi-scale, fully convolutional CNN was used to perform the target task of 3D liver segmentation in CT images.
arXiv Detail & Related papers (2020-03-17T20:20:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.