Comparisons of Graph Neural Networks on Cancer Classification Leveraging
a Joint of Phenotypic and Genetic Features
- URL: http://arxiv.org/abs/2101.05866v1
- Date: Thu, 14 Jan 2021 20:53:49 GMT
- Title: Comparisons of Graph Neural Networks on Cancer Classification Leveraging
a Joint of Phenotypic and Genetic Features
- Authors: David Oniani, Chen Wang, Yiqing Zhao, Andrew Wen, Hongfang Liu,
Feichen Shen
- Abstract summary: We evaluated various graph neural networks (GNNs) leveraging a joint of phenotypic and genetic features for cancer type classification.
Among GNNs, ChebNet, GraphSAGE, and TAGCN showed the best performance, while GAT showed the worst.
- Score: 7.381190270069632
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cancer is responsible for millions of deaths worldwide every year.
Although significant progress has been achieved in cancer medicine, many issues
remain to be addressed for improving cancer therapy. Appropriate cancer patient
stratification is the prerequisite for selecting an appropriate treatment plan,
as cancer patients are of known heterogeneous genetic make-ups and phenotypic
differences. In this study, built upon deep phenotypic characterizations
extractable from Mayo Clinic electronic health records (EHRs) and genetic test
reports for a collection of cancer patients, we evaluated various graph neural
networks (GNNs) leveraging a joint of phenotypic and genetic features for
cancer type classification. Models were applied and fine-tuned on the Mayo
Clinic cancer disease dataset. The assessment was done through the reported
accuracy, precision, recall, and F1 values as well as through F1 scores based
on the disease class. Per our evaluation results, GNNs on average outperformed
the baseline models, with higher mean statistics on most metrics (0.849 vs
0.772 for accuracy, 0.858 vs 0.794 for precision, 0.843 vs 0.759 for recall,
and 0.843 vs 0.855 for F1 score). Among GNNs, ChebNet, GraphSAGE, and TAGCN
showed the best performance, while GAT showed the worst. We applied and
compared eight GNN models, including AGNN, ChebNet, GAT, GCN, GIN, GraphSAGE,
SGC, and TAGCN, on the Mayo Clinic cancer disease dataset, assessed their
performance, compared them with each other, and compared them with more
conventional machine learning models such as decision tree, gradient boosting,
multi-layer perceptron, naive bayes, and random forest, which we used as the
baselines.
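The neighbor-aggregation idea behind GraphSAGE, one of the best-performing GNNs above, can be sketched in a few lines. This is a minimal illustration only, not the paper's implementation: the toy graph, feature values, and identity weight matrices are placeholders, not the Mayo Clinic data or any trained parameters.

```python
import numpy as np

def sage_mean_layer(X, neighbors, W_self, W_neigh):
    """One GraphSAGE layer with mean aggregation.
    X: (n, f) node features (e.g., joint phenotypic + genetic vectors);
    neighbors: list of neighbor-index lists, one per node;
    W_self, W_neigh: (f, h) weight matrices for self and neighborhood terms."""
    out = np.zeros((X.shape[0], W_self.shape[1]))
    for i, nbrs in enumerate(neighbors):
        # Mean of neighbor features; fall back to the node itself if isolated.
        agg = X[nbrs].mean(axis=0) if nbrs else X[i]
        h = X[i] @ W_self + agg @ W_neigh  # combine self and neighborhood info
        out[i] = np.maximum(h, 0.0)        # ReLU nonlinearity
    return out

# Toy 3-node graph with 2-dimensional features (illustrative values only).
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
neighbors = [[1, 2], [0], [0, 1]]
H = sage_mean_layer(X, neighbors, np.eye(2), np.eye(2))
```

Stacking such layers lets each node's representation mix in features from progressively larger neighborhoods, which is what allows jointly encoded phenotypic and genetic signals to propagate between related patients or concepts in the graph.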
Related papers
- Advanced Hybrid Deep Learning Model for Enhanced Classification of Osteosarcoma Histopathology Images [0.0]
This study focuses on osteosarcoma (OS), the most common bone cancer in children and adolescents, which affects the long bones of the arms and legs.
We propose a novel hybrid model that combines convolutional neural networks (CNN) and vision transformers (ViT) to improve diagnostic accuracy for OS.
The model achieved an accuracy of 99.08%, precision of 99.10%, recall of 99.28%, and an F1-score of 99.23%.
arXiv Detail & Related papers (2024-10-29T13:54:08Z)
- Medical-GAT: Cancer Document Classification Leveraging Graph-Based Residual Network for Scenarios with Limited Data [2.913761513290171]
We present a curated dataset of 1,874 biomedical abstracts, categorized into thyroid cancer, colon cancer, lung cancer, and generic topics.
Our research focuses on leveraging this dataset to improve classification performance, particularly in data-scarce scenarios.
We introduce a Residual Graph Attention Network (R-GAT) with multiple graph attention layers that capture the semantic information and structural relationships within cancer-related documents.
arXiv Detail & Related papers (2024-10-19T20:07:40Z)
- Brain Tumor Classification on MRI in Light of Molecular Markers [61.77272414423481]
Co-deletion of the 1p/19q gene is associated with clinical outcomes in low-grade gliomas.
This study aims to utilize a specialized MRI-based convolutional neural network for brain cancer detection.
arXiv Detail & Related papers (2024-09-29T07:04:26Z)
- Classification of Endoscopy and Video Capsule Images using CNN-Transformer Model [1.0994755279455526]
This study proposes a hybrid model that combines the advantages of Transformers and Convolutional Neural Networks (CNNs) to enhance classification performance.
For the GastroVision dataset, our proposed model demonstrates excellent performance with Precision, Recall, F1 score, Accuracy, and Matthews Correlation Coefficient (MCC) of 0.8320, 0.8386, 0.8324, 0.8386, and 0.8191, respectively.
arXiv Detail & Related papers (2024-08-20T11:05:32Z)
- Using Pre-training and Interaction Modeling for ancestry-specific disease prediction in UK Biobank [69.90493129893112]
Recent genome-wide association studies (GWAS) have uncovered the genetic basis of complex traits, but show an under-representation of non-European descent individuals.
Here, we assess whether we can improve disease prediction across diverse ancestries using multiomic data.
arXiv Detail & Related papers (2024-04-26T16:39:50Z)
- Survival Prediction Across Diverse Cancer Types Using Neural Networks [40.392772795903795]
Gastric cancer and colon adenocarcinoma represent widespread and challenging malignancies.
The medical community has embraced the 5-year survival rate as a vital metric for estimating patient outcomes.
This study introduces a pioneering approach to enhance survival prediction models for gastric cancer and colon adenocarcinoma patients.
arXiv Detail & Related papers (2024-04-11T21:47:13Z)
- Vision Transformer-Based Deep Learning for Histologic Classification of Endometrial Cancer [0.7228984887091693]
Endometrial cancer is the fourth most common cancer in females in the United States, with a lifetime risk of approximately 2.8% in women.
This study introduces EndoNet, which uses convolutional neural networks for extracting histologic features and classifying slides based on their visual characteristics into high- and low-grade.
The model was trained on 929 digitized hematoxylin and eosin-stained whole-slide images of endometrial cancer from hysterectomy cases at Dartmouth-Health.
arXiv Detail & Related papers (2023-12-13T19:38:50Z)
- A Hybrid Machine Learning Model for Classifying Gene Mutations in Cancer using LSTM, BiLSTM, CNN, GRU, and GloVe [0.0]
We introduce a novel hybrid ensemble model that synergistically combines LSTM, BiLSTM, CNN, GRU, and GloVe embeddings for the classification of gene mutations in cancer.
Our approach achieved a training accuracy of 80.6%, precision of 81.6%, recall of 80.6%, and an F1 score of 83.1%, alongside a significantly reduced Mean Squared Error (MSE) of 2.596.
arXiv Detail & Related papers (2023-07-24T21:01:46Z)
- Wide & Deep neural network model for patch aggregation in CNN-based prostate cancer detection systems [51.19354417900591]
Prostate cancer (PCa) is one of the leading causes of death among men, with almost 1.41 million new cases and around 375,000 deaths in 2020.
To perform an automatic diagnosis, prostate tissue samples are first digitized into gigapixel-resolution whole-slide images.
Small subimages called patches are extracted and predicted, obtaining a patch-level classification.
arXiv Detail & Related papers (2021-05-20T18:13:58Z)
- Cancer Gene Profiling through Unsupervised Discovery [49.28556294619424]
We introduce a novel, automatic and unsupervised framework to discover low-dimensional gene biomarkers.
Our method is based on the LP-Stability algorithm, a high dimensional center-based unsupervised clustering algorithm.
Our signature reports promising results on distinguishing immune inflammatory and immune desert tumors.
arXiv Detail & Related papers (2021-02-11T09:04:45Z)
- Classification of COVID-19 in CT Scans using Multi-Source Transfer Learning [91.3755431537592]
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best performing model was able to achieve an accuracy of 0.893 and a Recall score of 0.897, outperforming its baseline Recall score by 9.3%.
arXiv Detail & Related papers (2020-09-22T11:53:06Z)
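Nearly every entry above reports accuracy, precision, recall, F1, and occasionally the Matthews Correlation Coefficient (MCC). As a self-contained reminder of how these are derived, a minimal sketch for the binary case (the confusion counts below are illustrative, not taken from any listed paper):

```python
import math

def binary_metrics(tp, tn, fp, fn):
    """Standard binary classification metrics from confusion-matrix counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    # MCC ranges over [-1, 1] and stays informative under class imbalance.
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / den if den else 0.0
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1, "mcc": mcc}

# Illustrative counts only.
m = binary_metrics(tp=90, tn=80, fp=10, fn=20)
```

The multi-class scores quoted in the list are typically per-class or macro-averaged versions of the same quantities.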
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.