Evaluating the Effectiveness of 2D and 3D Features for Predicting Tumor
Response to Chemotherapy
- URL: http://arxiv.org/abs/2303.16123v2
- Date: Fri, 14 Apr 2023 20:39:49 GMT
- Title: Evaluating the Effectiveness of 2D and 3D Features for Predicting Tumor
Response to Chemotherapy
- Authors: Neman Abdoli, Ke Zhang, Patrik Gilley, Xuxin Chen, Youkabed Sadri,
Theresa C. Thai, Lauren E. Dockery, Kathleen Moore, Robert S. Mannel, Yuchen
Qiu
- Abstract summary: 2D and 3D tumor features are widely used in a variety of medical image analysis tasks.
For chemotherapy response prediction, however, the effectiveness of different kinds of 2D and 3D features has not been comprehensively assessed.
- Score: 0.9709939410473847
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 2D and 3D tumor features are widely used in a variety of medical image
analysis tasks. However, for chemotherapy response prediction, the
effectiveness of different kinds of 2D and 3D features has not been
comprehensively assessed, especially in ovarian cancer-related applications.
This investigation aims to accomplish such a comprehensive evaluation. For this
purpose, CT images were collected retrospectively from 188 advanced-stage
ovarian cancer patients. All the metastatic tumors that occurred in each
patient were segmented and then processed by a set of six filters. Next, three
categories of features, namely geometric, density, and texture features, were
calculated from both the filtered results and the original segmented tumors,
generating a total of 1595 and 1403 features for the 3D and 2D tumors,
respectively. In addition to the conventional single-slice 2D and full-volume
3D tumor features, we also computed the incomplete-3D tumor features, which
were achieved by sequentially adding one individual CT slice and calculating
the corresponding features. Support vector machine (SVM) based prediction
models were developed and optimized for each feature set. 5-fold
cross-validation was used to assess the performance of each individual model.
The results show that the 2D feature-based model achieved an AUC (area under
the receiver operating characteristic curve) of 0.84±0.02. As more slices were
added, the AUC first increased to a maximum and then gradually decreased to
0.86±0.02; the maximum AUC of 0.91±0.01 was reached when two adjacent slices
were added. This initial result provides meaningful
information for optimizing machine learning-based decision-making support tools
in the future.
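As a rough illustration of the pipeline summarized above (per-tumor feature extraction, an "incomplete-3D" stack built by adding adjacent slices, and an SVM scored with 5-fold cross-validated AUC), the Python sketch below uses scikit-learn. The function names, the symmetric slice-adding rule, and the toy density statistics are illustrative assumptions, not the authors' code; the paper's actual extractor produces 1595 3D and 1403 2D geometric, density, and texture features.

```python
# Minimal sketch, assuming a CT volume and tumor mask as NumPy arrays
# (slices along axis 0). Names and the simplified feature set are placeholders.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def extract_features(volume: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Stand-in for the geometric/density/texture feature extractor.

    A real pipeline (e.g. radiomics over the original and filtered images)
    would return far more features; a few density statistics keep the sketch
    runnable.
    """
    voxels = volume[mask > 0]
    return np.array([voxels.mean(), voxels.std(), voxels.min(), voxels.max()])


def incomplete_3d_features(volume, mask, center_idx, n_adjacent):
    """Keep the central tumor slice plus `n_adjacent` neighboring slices on
    each side, then extract features from that partial stack."""
    lo = max(0, center_idx - n_adjacent)
    hi = min(volume.shape[0], center_idx + n_adjacent + 1)
    return extract_features(volume[lo:hi], mask[lo:hi])


def cross_validated_auc(features: np.ndarray, labels: np.ndarray) -> float:
    """5-fold cross-validated AUC of an SVM response-prediction model."""
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    folds = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    scores = cross_val_score(model, features, labels, cv=folds,
                             scoring="roc_auc")
    return scores.mean()
```

Sweeping `n_adjacent` from 0 upward and recording the returned AUCs would mimic, in spirit, the slice-by-slice analysis whose AUC peaks after adding two adjacent slices.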
Related papers
- Medical Slice Transformer: Improved Diagnosis and Explainability on 3D Medical Images with DINOv2 [1.6275928583134276]
We introduce the Medical Slice Transformer (MST) framework to adapt 2D self-supervised models for 3D medical image analysis.
MST offers enhanced diagnostic accuracy and explainability compared to convolutional neural networks.
arXiv Detail & Related papers (2024-11-24T12:11:11Z)
- Lumbar Spine Tumor Segmentation and Localization in T2 MRI Images Using AI [2.9746083684997418]
This study introduces a novel data augmentation technique, aimed at automating spine tumor segmentation and localization through AI approaches.
A Convolutional Neural Network (CNN) architecture is employed for tumor classification. 3D vertebral segmentation and labeling techniques are used to help pinpoint the exact location of the tumors in the lumbar spine.
Results indicate a remarkable performance, with 99% accuracy for tumor segmentation, 98% accuracy for tumor classification, and 99% accuracy for tumor localization achieved with the proposed approach.
arXiv Detail & Related papers (2024-05-07T05:55:50Z)
- 3DSAM-adapter: Holistic adaptation of SAM from 2D to 3D for promptable tumor segmentation [52.699139151447945]
We propose a novel adaptation method for transferring the segment anything model (SAM) from 2D to 3D for promptable medical image segmentation.
Our model outperforms domain state-of-the-art medical image segmentation models on 3 out of 4 tasks (by 8.25%, 29.87%, and 10.11% for kidney tumor, pancreas tumor, and colon cancer segmentation, respectively) and achieves similar performance for liver tumor segmentation.
arXiv Detail & Related papers (2023-06-23T12:09:52Z)
- propnet: Propagating 2D Annotation to 3D Segmentation for Gastric Tumors on CT Scans [16.135854257728337]
This study introduces a model, utilizing human-guided knowledge and unique modules, to address the challenges of 3D tumor segmentation.
With 98 patient scans for training and 30 for validation, our method achieves a significant agreement with manual annotation (Dice of 0.803) and improves efficiency.
arXiv Detail & Related papers (2023-05-29T03:24:02Z)
- Validated respiratory drug deposition predictions from 2D and 3D medical images with statistical shape models and convolutional neural networks [47.187609203210705]
We aim to develop and validate an automated computational framework for patient-specific deposition modelling.
An image processing approach is proposed that could produce 3D patient respiratory geometries from 2D chest X-rays and 3D CT images.
arXiv Detail & Related papers (2023-03-02T07:47:07Z)
- 2D and 3D CT Radiomic Features Performance Comparison in Characterization of Gastric Cancer: A Multi-center Study [11.015650919856117]
We compared the representation and discrimination capacity of 2D and 3D radiomic features for gastric cancer (GC).
Models constructed with 2D radiomic features revealed comparable performances with those constructed with 3D features in characterizing GC.
arXiv Detail & Related papers (2022-10-29T16:09:07Z)
- Moving from 2D to 3D: volumetric medical image classification for rectal cancer staging [62.346649719614]
Preoperative discrimination between T2 and T3 stages is arguably both the most challenging and most clinically significant task for rectal cancer treatment.
We present a volumetric convolutional neural network to accurately discriminate T2 from T3 stage rectal cancer with rectal MR volumes.
arXiv Detail & Related papers (2022-09-13T07:10:14Z)
- EMT-NET: Efficient multitask network for computer-aided diagnosis of breast cancer [58.720142291102135]
We propose an efficient and light-weighted learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively.
arXiv Detail & Related papers (2022-01-13T05:24:40Z)
- Revisiting 3D Context Modeling with Supervised Pre-training for Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z)
- Harvesting, Detecting, and Characterizing Liver Lesions from Large-scale Multi-phase CT Data via Deep Dynamic Texture Learning [24.633802585888812]
We propose a fully-automated and multi-stage liver tumor characterization framework for dynamic contrast computed tomography (CT).
Our system comprises four sequential processes of tumor proposal detection, tumor harvesting, primary tumor site selection, and deep texture-based tumor characterization.
arXiv Detail & Related papers (2020-06-28T19:55:34Z)
- A multicenter study on radiomic features from T$_2$-weighted images of a customized MR pelvic phantom setting the basis for robust radiomic models in clinics [47.187609203210705]
2D and 3D T$_2$-weighted images of a pelvic phantom were acquired on three scanners.
Repeatability and repositioning of radiomic features were assessed.
arXiv Detail & Related papers (2020-05-14T09:24:48Z)