Lung Nodule Classification Using Biomarkers, Volumetric Radiomics and 3D
CNNs
- URL: http://arxiv.org/abs/2010.11682v1
- Date: Mon, 19 Oct 2020 18:57:26 GMT
- Title: Lung Nodule Classification Using Biomarkers, Volumetric Radiomics and 3D
CNNs
- Authors: Kushal Mehta, Arshita Jain, Jayalakshmi Mangalagiri, Sumeet Menon,
Phuong Nguyen, David R. Chapman
- Abstract summary: We present a hybrid algorithm to estimate lung nodule malignancy that combines imaging biomarkers from radiologists' annotations with image classification of CT scans.
Our algorithm employs a 3D Convolutional Neural Network (CNN) as well as a Random Forest in order to combine CT imagery with biomarker annotation and radiomic features.
We show that a model using image biomarkers alone is more accurate than one that combines biomarkers with volumetric radiomics, 3D CNNs, and semi-supervised learning.
- Score: 0.0699049312989311
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a hybrid algorithm to estimate lung nodule malignancy that
combines imaging biomarkers from radiologists' annotations with image
classification of CT scans. Our algorithm employs a 3D Convolutional Neural
Network (CNN) as well as a Random Forest in order to combine CT imagery with
biomarker annotation and volumetric radiomic features. We analyze and compare
the performance of the algorithm using only imagery, only biomarkers, combined
imagery + biomarkers, combined imagery + volumetric radiomic features and
finally the combination of imagery + biomarkers + volumetric features in order
to classify the suspicion level of nodule malignancy. The National Cancer
Institute (NCI) Lung Image Database Consortium (LIDC) IDRI dataset is used to
train and evaluate the classification task. We show that incorporating
semi-supervised learning by means of K-Nearest Neighbors (KNN) can increase the
available training sample size of LIDC-IDRI, thereby further improving the
malignancy-estimation accuracy of most of the models tested. However, KNN
semi-supervised learning yields no significant improvement when image
classification with CNNs and volumetric features is combined with
descriptive biomarkers. Unexpectedly, we also show that a model using image
biomarkers alone is more accurate than one that combines biomarkers with
volumetric radiomics, 3D CNNs, and semi-supervised learning. We discuss the
possibility that this result may be influenced by cognitive bias in LIDC-IDRI
because malignancy estimates were recorded by the same radiologist panel as
biomarkers, as well as future work to incorporate pathology information over a
subset of study participants.
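As a rough illustration (not the authors' implementation), the KNN-based semi-supervised step described in the abstract, propagating labels from the annotated samples to unlabeled ones to enlarge the training set, can be sketched in plain Python. The feature vectors, labels, and choice of k below are toy assumptions:

```python
# Hypothetical sketch of KNN pseudo-labeling to expand a small labeled
# training set; in the paper the labels come from LIDC-IDRI annotations.
from collections import Counter
from math import dist

def knn_pseudo_label(labeled, unlabeled, k=3):
    """Assign each unlabeled feature vector the majority label of its
    k nearest labeled neighbors (Euclidean distance)."""
    pseudo = []
    for x in unlabeled:
        neighbors = sorted(labeled, key=lambda fv: dist(fv[0], x))[:k]
        majority = Counter(y for _, y in neighbors).most_common(1)[0][0]
        pseudo.append((x, majority))
    return pseudo

# Toy example: two clusters standing in for benign (0) and malignant (1)
# nodule feature vectors (e.g. concatenated CNN features + biomarkers).
labeled = [((0.0, 0.1), 0), ((0.2, 0.0), 0), ((1.0, 1.1), 1), ((0.9, 1.0), 1)]
unlabeled = [(0.1, 0.05), (1.05, 0.95)]
expanded = labeled + knn_pseudo_label(labeled, unlabeled)
# The expanded set can then be used to train the downstream classifier,
# e.g. the Random Forest that fuses imagery with biomarker annotations.
```

In the paper's pipeline the pseudo-labeled samples would augment the LIDC-IDRI training set before fitting the Random Forest; the exact feature representation and neighbor count are not specified here and are assumptions of this sketch.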
Related papers
- Style transfer between Microscopy and Magnetic Resonance Imaging via
Generative Adversarial Network in small sample size settings [49.84018914962972]
Cross-modal augmentation of Magnetic Resonance Imaging (MRI) and microscopic imaging based on the same tissue samples is promising.
We tested a method for generating microscopic histological images from MRI scans of the corpus callosum using conditional generative adversarial network (cGAN) architecture.
arXiv Detail & Related papers (2023-10-16T13:58:53Z)
- Classification of lung cancer subtypes on CT images with synthetic pathological priors [41.75054301525535]
Cross-scale associations exist in the image patterns between the same case's CT images and its pathological images.
We propose self-generating hybrid feature network (SGHF-Net) for accurately classifying lung cancer subtypes on CT images.
arXiv Detail & Related papers (2023-08-09T02:04:05Z)
- Breast Ultrasound Tumor Classification Using a Hybrid Multitask CNN-Transformer Network [63.845552349914186]
Capturing global contextual information plays a critical role in breast ultrasound (BUS) image classification.
Vision Transformers have an improved capability of capturing global contextual information but may distort the local image patterns due to the tokenization operations.
In this study, we proposed a hybrid multitask deep neural network called Hybrid-MT-ESTAN, designed to perform BUS tumor classification and segmentation.
arXiv Detail & Related papers (2023-08-04T01:19:32Z)
- Integration of Radiomics and Tumor Biomarkers in Interpretable Machine Learning Models [0.0]
We propose the integration of expert-derived radiomics and DNN-predicted biomarkers in interpretable classifiers.
In our evaluation and practical application, the only input to ConRad is a segmented CT scan.
Overall, the proposed ConRad model combines CBM-derived biomarkers and radiomics features in an interpretable ML model that performs excellently on lung malignancy classification.
arXiv Detail & Related papers (2023-03-20T15:00:52Z)
- Noise-reducing attention cross fusion learning transformer for histological image classification of osteosarcoma [2.8265965924600276]
This study aims to use artificial intelligence to classify osteosarcoma histological images and to assess tumor survival and necrosis.
We propose a typical transformer image classification framework by integrating noise reduction convolutional autoencoder and feature cross fusion learning.
Our method outperforms the traditional and deep learning approaches on various evaluation metrics, with an accuracy of 99.17% to support osteosarcoma diagnosis.
arXiv Detail & Related papers (2022-04-29T00:57:39Z)
- Malignancy Prediction and Lesion Identification from Clinical Dermatological Images [65.1629311281062]
We consider machine-learning-based malignancy prediction and lesion identification from clinical dermatological images.
Our method first identifies all lesions present in the image regardless of sub-type or likelihood of malignancy, then estimates each lesion's likelihood of malignancy, and through aggregation also generates an image-level likelihood of malignancy.
arXiv Detail & Related papers (2021-04-02T20:52:05Z)
- IAIA-BL: A Case-based Interpretable Deep Learning Model for Classification of Mass Lesions in Digital Mammography [20.665935997959025]
Interpretability in machine learning models is important in high-stakes decisions.
We present a framework for interpretable machine learning-based mammography.
arXiv Detail & Related papers (2021-03-23T05:00:21Z)
- G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for Biomarker Identification and Disease Classification [49.53651166356737]
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
arXiv Detail & Related papers (2021-01-27T19:28:04Z)
- Robust Pancreatic Ductal Adenocarcinoma Segmentation with Multi-Institutional Multi-Phase Partially-Annotated CT Scans [25.889684822655255]
Pancreatic ductal adenocarcinoma (PDAC) segmentation is one of the most challenging tumor segmentation tasks.
Based on a new self-learning framework, we propose training the PDAC segmentation model on data from a much larger number of patients.
Experiment results show that our proposed method provides an absolute improvement of 6.3% Dice score over the strong baseline of nnUNet trained on annotated images.
arXiv Detail & Related papers (2020-08-24T18:50:30Z)
- Improved Slice-wise Tumour Detection in Brain MRIs by Computing Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We have proposed a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z)
- Spectral-Spatial Recurrent-Convolutional Networks for In-Vivo Hyperspectral Tumor Type Classification [49.32653090178743]
We demonstrate the feasibility of in-vivo tumor type classification using hyperspectral imaging and deep learning.
Our best model achieves an AUC of 76.3%, significantly outperforming previous conventional and deep learning methods.
arXiv Detail & Related papers (2020-07-02T12:00:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.