2D and 3D CT Radiomic Features Performance Comparison in
Characterization of Gastric Cancer: A Multi-center Study
- URL: http://arxiv.org/abs/2210.16640v1
- Date: Sat, 29 Oct 2022 16:09:07 GMT
- Title: 2D and 3D CT Radiomic Features Performance Comparison in
Characterization of Gastric Cancer: A Multi-center Study
- Authors: Lingwei Meng, Di Dong, Xin Chen, Mengjie Fang, Rongpin Wang, Jing Li,
Zaiyi Liu, Jie Tian
- Abstract summary: We compared 2D and 3D radiomic features' representation and discrimination capacity regarding gastric cancer (GC).
Models constructed with 2D radiomic features showed performance comparable to models constructed with 3D features in characterizing GC.
- Score: 11.015650919856117
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Objective: Radiomics, an emerging tool for medical image analysis,
shows potential for precisely characterizing gastric cancer (GC). Whether to use
one-slice 2D annotation or whole-volume 3D annotation remains a long-standing
debate, especially for heterogeneous GC. We comprehensively compared 2D and 3D
radiomic features' representation and discrimination capacity regarding GC via
three tasks.
Methods: A total of 539 GC patients from four centers were retrospectively
enrolled and divided into training and validation cohorts. Radiomic features
were extracted from 2D and 3D regions of interest (ROIs) annotated by
radiologists. Feature selection and model construction procedures were
customized for each combination of the two modalities (2D or 3D) and three tasks.
Subsequently, six machine learning models (Model_2D^LNM, Model_3D^LNM;
Model_2D^LVI, Model_3D^LVI; Model_2D^pT, Model_3D^pT) were derived and
evaluated to reflect modalities' performances in characterizing GC.
Furthermore, we performed an auxiliary experiment to assess modalities'
performances when resampling spacing is different.
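The abstract does not describe the extraction pipeline itself; as a minimal sketch of the 2D-versus-3D distinction, the same first-order radiomic features can be computed from either a single-slice or a whole-volume annotation (hypothetical data and a hypothetical three-feature set, assuming NumPy):

```python
import numpy as np

def first_order_features(roi, bins=32):
    """Simple first-order radiomic features (mean, std, intensity entropy).

    The definitions are dimension-agnostic, so the same function serves a
    2D slice annotation (H, W) and a 3D whole-volume annotation (D, H, W).
    """
    voxels = roi[roi > 0].astype(float)       # keep only annotated, nonzero voxels
    hist, _ = np.histogram(voxels, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                              # drop empty bins before taking the log
    return {
        "mean": float(voxels.mean()),
        "std": float(voxels.std()),
        "entropy": float(-np.sum(p * np.log2(p))),
    }

# Hypothetical masked CT patch of a tumor, shape (slices, height, width)
rng = np.random.default_rng(0)
volume = rng.integers(1, 200, size=(20, 64, 64))

feats_3d = first_order_features(volume)       # whole-volume (3D) annotation
feats_2d = first_order_features(volume[10])   # single-slice (2D) annotation
```

In practice, dedicated packages such as pyradiomics compute far richer feature classes (shape, texture), but the 2D/3D split shown here is the distinction the study evaluates.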
Results: Regarding three tasks, the yielded areas under the curve (AUCs)
were: Model_2D^LNM's 0.712 (95% confidence interval, 0.613-0.811),
Model_3D^LNM's 0.680 (0.584-0.775); Model_2D^LVI's 0.677 (0.595-0.761),
Model_3D^LVI's 0.615 (0.528-0.703); Model_2D^pT's 0.840 (0.779-0.901),
Model_3D^pT's 0.813 (0.747-0.879). Moreover, the auxiliary experiment indicated
that Models_2D are statistically more advantageous than Models_3D across
different resampling spacings.
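The abstract does not state how the 95% confidence intervals around the AUCs were obtained; a common choice is percentile bootstrap resampling of the validation cohort. A sketch with synthetic labels and scores (rank-based AUC without tie handling), assuming NumPy:

```python
import numpy as np

def auc(y, s):
    """Rank-based AUC (Mann-Whitney U statistic); assumes no tied scores."""
    order = np.argsort(s)
    ranks = np.empty(len(s))
    ranks[order] = np.arange(1, len(s) + 1)
    n_pos = int(y.sum())
    n_neg = len(y) - n_pos
    return (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def bootstrap_auc_ci(y, s, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the AUC."""
    rng = np.random.default_rng(seed)
    aucs = []
    while len(aucs) < n_boot:
        idx = rng.integers(0, len(y), len(y))
        if y[idx].min() == y[idx].max():   # resample must contain both classes
            continue
        aucs.append(auc(y[idx], s[idx]))
    lo, hi = np.quantile(aucs, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)

# Synthetic validation cohort: scores of positives shifted upward
rng = np.random.default_rng(1)
y = np.array([0] * 60 + [1] * 40)
s = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(0.8, 1.0, 40)])

point = auc(y, s)
lo, hi = bootstrap_auc_ci(y, s)
```

This is only one way to build such intervals; parametric (DeLong-style) methods are equally common in radiomics studies.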
Conclusion: Models constructed with 2D radiomic features showed performance
comparable to those constructed with 3D features in characterizing GC.
Significance: Our work indicates that time-saving 2D annotation is the better
choice in GC, and provides a reference for further radiomics-based research.
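The auxiliary experiment varies the resampling spacing, though the interpolation scheme is not stated in the abstract. As a simplified stand-in (real pipelines typically resample with linear or B-spline interpolation, e.g. via SimpleITK), nearest-neighbor resampling of a CT volume to a target voxel spacing can be sketched in NumPy:

```python
import numpy as np

def resample_nn(vol, spacing, target_spacing):
    """Resample a volume to a new voxel spacing with nearest-neighbor lookup.

    spacing / target_spacing gives the per-axis zoom factor; each output
    index is mapped back to the nearest source index.
    """
    zoom = np.asarray(spacing, float) / np.asarray(target_spacing, float)
    new_shape = np.round(np.array(vol.shape) * zoom).astype(int)
    idx = [np.minimum((np.arange(n) / z).astype(int), s - 1)
           for n, z, s in zip(new_shape, zoom, vol.shape)]
    return vol[np.ix_(*idx)]

# Hypothetical CT patch: 5 mm slice thickness, 1 mm in-plane spacing,
# resampled to 1 mm isotropic spacing
vol = np.arange(20 * 64 * 64).reshape(20, 64, 64)
iso = resample_nn(vol, spacing=(5.0, 1.0, 1.0), target_spacing=(1.0, 1.0, 1.0))
```

Because 3D texture features depend on inter-slice spacing, this resampling step is exactly where the 2D and 3D modalities can diverge, which motivates the auxiliary experiment.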
Related papers
- DSplats: 3D Generation by Denoising Splats-Based Multiview Diffusion Models [67.50989119438508]
We introduce DSplats, a novel method that directly denoises multiview images using Gaussian-based Reconstructors to produce realistic 3D assets.
Our experiments demonstrate that DSplats not only produces high-quality, spatially consistent outputs, but also sets a new standard in single-image to 3D reconstruction.
arXiv Detail & Related papers (2024-12-11T07:32:17Z)
- Medical Slice Transformer: Improved Diagnosis and Explainability on 3D Medical Images with DINOv2 [1.6275928583134276]
We introduce the Medical Slice Transformer (MST) framework to adapt 2D self-supervised models for 3D medical image analysis.
MST offers enhanced diagnostic accuracy and explainability compared to convolutional neural networks.
arXiv Detail & Related papers (2024-11-24T12:11:11Z)
- 3D-CT-GPT: Generating 3D Radiology Reports through Integration of Large Vision-Language Models [51.855377054763345]
This paper introduces 3D-CT-GPT, a Visual Question Answering (VQA)-based medical visual language model for generating radiology reports from 3D CT scans.
Experiments on both public and private datasets demonstrate that 3D-CT-GPT significantly outperforms existing methods in terms of report accuracy and quality.
arXiv Detail & Related papers (2024-09-28T12:31:07Z)
- 2D and 3D Deep Learning Models for MRI-based Parkinson's Disease Classification: A Comparative Analysis of Convolutional Kolmogorov-Arnold Networks, Convolutional Neural Networks, and Graph Convolutional Networks [0.0]
This study applies Convolutional Kolmogorov-Arnold Networks (ConvKANs) to Parkinson's Disease diagnosis.
ConvKANs integrate learnable activation functions into convolutional layers, for PD classification using structural MRI.
The first 3D implementation of ConvKANs for medical imaging is presented, comparing their performance to Convolutional Neural Networks (CNNs) and Graph Convolutional Networks (GCNs)
These findings highlight ConvKANs' potential for PD detection, emphasize the importance of 3D analysis in capturing subtle brain changes, and underscore cross-dataset generalization challenges.
arXiv Detail & Related papers (2024-07-24T16:04:18Z)
- propnet: Propagating 2D Annotation to 3D Segmentation for Gastric Tumors on CT Scans [16.135854257728337]
This study introduces a model, utilizing human-guided knowledge and unique modules, to address the challenges of 3D tumor segmentation.
With 98 patient scans for training and 30 for validation, our method achieves a significant agreement with manual annotation (Dice of 0.803) and improves efficiency.
arXiv Detail & Related papers (2023-05-29T03:24:02Z)
- Evaluating the Effectiveness of 2D and 3D Features for Predicting Tumor Response to Chemotherapy [0.9709939410473847]
2D and 3D tumor features are widely used in a variety of medical image analysis tasks.
For chemotherapy response prediction, the effectiveness between different kinds of 2D and 3D features are not comprehensively assessed.
arXiv Detail & Related papers (2023-03-28T16:44:43Z)
- Homography Loss for Monocular 3D Object Detection [54.04870007473932]
A differentiable loss function, termed as Homography Loss, is proposed to achieve the goal, which exploits both 2D and 3D information.
Our method yields the best performance compared with the other state-of-the-arts by a large margin on KITTI 3D datasets.
arXiv Detail & Related papers (2022-04-02T03:48:03Z)
- Classification of Brain Tumours in MR Images using Deep Spatiospatial Models [0.0]
This paper uses two spatiotemporal models, ResNet (2+1)D and ResNet Mixed Convolution, to classify different types of brain tumours.
Both models were observed to outperform the pure 3D convolutional model, ResNet18.
arXiv Detail & Related papers (2021-05-28T19:27:51Z)
- Automated Model Design and Benchmarking of 3D Deep Learning Models for COVID-19 Detection with Chest CT Scans [72.04652116817238]
We propose a differentiable neural architecture search (DNAS) framework to automatically search for the 3D DL models for 3D chest CT scans classification.
We also exploit the Class Activation Mapping (CAM) technique on our models to provide the interpretability of the results.
arXiv Detail & Related papers (2021-01-14T03:45:01Z)
- TSGCNet: Discriminative Geometric Feature Learning with Two-Stream Graph Convolutional Network for 3D Dental Model Segmentation [141.2690520327948]
We propose a two-stream graph convolutional network (TSGCNet) to learn multi-view information from different geometric attributes.
We evaluate our proposed TSGCNet on a real-patient dataset of dental models acquired by 3D intraoral scanners.
arXiv Detail & Related papers (2020-12-26T08:02:56Z)
- Revisiting 3D Context Modeling with Supervised Pre-training for Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z)
- Modelling the Distribution of 3D Brain MRI using a 2D Slice VAE [66.63629641650572]
We propose a method to model 3D MR brain volumes distribution by combining a 2D slice VAE with a Gaussian model that captures the relationships between slices.
We also introduce a novel evaluation method for generated volumes that quantifies how well their segmentations match those of true brain anatomy.
arXiv Detail & Related papers (2020-07-09T13:23:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.