Deep Neural Networks integrating genomics and histopathological images
for predicting stages and survival time-to-event in colon cancer
- URL: http://arxiv.org/abs/2212.06834v1
- Date: Tue, 13 Dec 2022 16:12:45 GMT
- Title: Deep Neural Networks integrating genomics and histopathological images
for predicting stages and survival time-to-event in colon cancer
- Authors: Olalekan Ogundipe, Zeyneb Kurt, Wai Lok Woo
- Abstract summary: We build an ensemble deep neural network for colon cancer stage classification and for stratifying samples into low- or high-risk survival groups.
The results of our Ensemble Deep Convolutional Neural Network model show improved performance in stage classification on the integrated dataset.
Among the 2548 fused features, 1695 showed statistically significant differences in survival probability between the two risk groups defined by the extracted features.
- Score: 2.338833859812519
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unexplained variation exists within the predefined colon cancer
stages when only features from genomics or from histopathological whole-slide
images are used as prognostic factors. Unraveling this variation would bring
about improvements in staging and treatment outcomes. Motivated by advances
in Deep Neural Network libraries and by the diverse structures and factors
within genomic datasets, we aggregate atypical patterns in histopathological
images with diverse carcinogenic expression from mRNA, miRNA and DNA
methylation as an integrated input source into an ensemble deep neural
network for colon cancer stage classification and for stratifying samples
into low- or high-risk survival groups. The results of our Ensemble Deep
Convolutional Neural Network model show improved performance in stage
classification on the integrated dataset. The fused input features yield an
area under the Receiver Operating Characteristic curve (AUC-ROC) of 0.95,
compared with AUC-ROC values of 0.71 and 0.68 when only genomic or only image
features, respectively, are used for stage classification. The extracted
features were also used to split the patients into low- and high-risk
survival groups. Among the 2548 fused features, 1695 showed statistically
significant differences in survival probability between the two risk groups
defined by the extracted features.
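The risk stratification the abstract describes can be illustrated as follows: split samples at the median of a fused feature into low- and high-risk groups, then compare the two groups' survival with a two-group log-rank test. This is a minimal sketch on synthetic data, not the paper's pipeline; the feature, the hazard scales, and the censoring rate are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import chi2


def logrank_test(time, event, group):
    """Two-group log-rank test.

    time  : follow-up time per sample
    event : 1 if the event (death) was observed, 0 if censored
    group : 0 = low risk, 1 = high risk
    Returns (chi-square statistic, p-value).
    """
    time, event, group = map(np.asarray, (time, event, group))
    obs_minus_exp, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):          # distinct event times
        at_risk = time >= t
        n = at_risk.sum()                          # total still at risk
        n1 = (at_risk & (group == 1)).sum()        # at risk in group 1
        d = ((time == t) & (event == 1)).sum()     # events at time t
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        obs_minus_exp += d1 - d * n1 / n           # observed minus expected
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    stat = obs_minus_exp ** 2 / var
    return stat, chi2.sf(stat, df=1)


# Synthetic illustration: median-split one hypothetical fused feature.
rng = np.random.default_rng(0)
n = 200
feature = rng.normal(size=n)                       # stand-in fused feature
risk = (feature > np.median(feature)).astype(int)  # high risk above median
# Simulate genuinely shorter survival for the high-risk group.
time = rng.exponential(scale=np.where(risk == 1, 2.0, 6.0))
event = (rng.random(n) < 0.8).astype(int)          # roughly 20% censored
stat, p = logrank_test(time, event, risk)
print(f"log-rank chi2 = {stat:.2f}, p = {p:.3g}")
```

A feature would then be counted among the "statistically significant" ones when its log-rank p-value falls below the chosen threshold (e.g. 0.05), typically after multiple-testing correction across all 2548 features.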
Related papers
- Multimodal Prototyping for cancer survival prediction [45.61869793509184]
Multimodal survival methods combining gigapixel histology whole-slide images (WSIs) and transcriptomic profiles are particularly promising for patient prognostication and stratification.
Current approaches involve tokenizing the WSIs into smaller patches (>10,000 patches) and transcriptomics into gene groups, which are then integrated using a Transformer for predicting outcomes.
This process generates many tokens, which leads to high memory requirements for computing attention and complicates post-hoc interpretability analyses.
Our framework outperforms state-of-the-art methods with much less computation while unlocking new interpretability analyses.
arXiv Detail & Related papers (2024-06-28T20:37:01Z)
- Pathology-genomic fusion via biologically informed cross-modality graph learning for survival analysis [7.996257103473235]
We propose Pathology-Genome Heterogeneous Graph (PGHG) that integrates whole slide images (WSI) and bulk RNA-Seq expression data with heterogeneous graph neural network for cancer survival analysis.
The PGHG consists of biological knowledge-guided representation learning network and pathology-genome heterogeneous graph.
We evaluate the model on low-grade gliomas, glioblastoma, and kidney renal papillary cell carcinoma datasets from the Cancer Genome Atlas.
arXiv Detail & Related papers (2024-04-11T09:07:40Z)
- MM-SurvNet: Deep Learning-Based Survival Risk Stratification in Breast Cancer Through Multimodal Data Fusion [18.395418853966266]
We propose a novel deep learning approach for breast cancer survival risk stratification.
We employ vision transformers, specifically the MaxViT model, for image feature extraction, and self-attention to capture intricate image relationships at the patient level.
A dual cross-attention mechanism fuses these features with genetic data, while clinical data is incorporated at the final layer to enhance predictive accuracy.
arXiv Detail & Related papers (2024-02-19T02:31:36Z)
- BioFusionNet: Deep Learning-Based Survival Risk Stratification in ER+ Breast Cancer Through Multifeature and Multimodal Data Fusion [16.83901927767791]
We present BioFusionNet, a deep learning framework that fuses image-derived features with genetic and clinical data to obtain a holistic profile.
Our model achieves a mean concordance index of 0.77 and a time-dependent area under the curve of 0.84, outperforming state-of-the-art methods.
arXiv Detail & Related papers (2024-02-16T14:19:33Z)
- Single-Cell Deep Clustering Method Assisted by Exogenous Gene Information: A Novel Approach to Identifying Cell Types [50.55583697209676]
We develop an attention-enhanced graph autoencoder, which is designed to efficiently capture the topological features between cells.
During the clustering process, we integrated both sets of information and reconstructed the features of both cells and genes to generate a discriminative representation.
This research offers enhanced insights into the characteristics and distribution of cells, thereby laying the groundwork for early diagnosis and treatment of diseases.
arXiv Detail & Related papers (2023-11-28T09:14:55Z)
- Pathology-and-genomics Multimodal Transformer for Survival Outcome Prediction [43.1748594898772]
We propose a multimodal transformer (PathOmics) integrating pathology and genomics insights into colon-related cancer survival prediction.
We emphasize the unsupervised pretraining to capture the intrinsic interaction between tissue microenvironments in gigapixel whole slide images.
We evaluate our approach on both TCGA colon and rectum cancer cohorts, showing that the proposed approach is competitive and outperforms state-of-the-art studies.
arXiv Detail & Related papers (2023-07-22T00:59:26Z)
- Bootstrapping Your Own Positive Sample: Contrastive Learning With Electronic Health Record Data [62.29031007761901]
This paper proposes a novel contrastive regularized clinical classification model.
We introduce two unique positive sampling strategies specifically tailored for EHR data.
Our framework yields highly competitive experimental results in predicting the mortality risk on real-world COVID-19 EHR data.
arXiv Detail & Related papers (2021-04-07T06:02:04Z)
- G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for Biomarker Identification and Disease Classification [49.53651166356737]
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
arXiv Detail & Related papers (2021-01-27T19:28:04Z)
- EPIC-Survival: End-to-end Part Inferred Clustering for Survival Analysis, Featuring Prognostic Stratification Boosting [0.0]
EPIC-Survival bridges encoding and aggregation into an end-to-end survival modelling approach.
We show that EPIC-Survival performs better than other approaches in modelling intrahepatic cholangiocarcinoma.
arXiv Detail & Related papers (2021-01-26T21:11:45Z)
- AMINN: Autoencoder-based Multiple Instance Neural Network for Outcome Prediction of Multifocal Liver Metastases [1.7294318054149134]
Multifocality occurs frequently in colorectal cancer liver metastases.
Most existing biomarkers do not take the imaging features of all multifocal lesions into account.
We present an end-to-end autoencoder-based multiple instance neural network (AMINN) for the prediction of survival outcomes.
arXiv Detail & Related papers (2020-12-12T17:52:14Z)
- Co-Heterogeneous and Adaptive Segmentation from Multi-Source and Multi-Phase CT Imaging Data: A Study on Pathological Liver and Lesion Segmentation [48.504790189796836]
We present a novel segmentation strategy, co-heterogeneous and adaptive segmentation (CHASe).
We propose a versatile framework that fuses appearance based semi-supervision, mask based adversarial domain adaptation, and pseudo-labeling.
CHASe can further improve pathological liver mask Dice-Sorensen coefficients by 4.2% to 9.4%.
arXiv Detail & Related papers (2020-05-27T06:58:39Z)
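The Dice-Sorensen coefficient cited in the CHASe entry measures overlap between a predicted segmentation mask and a reference mask, ranging from 0 (no overlap) to 1 (identical). A minimal sketch (the function name and example masks are illustrative):

```python
import numpy as np


def dice_coefficient(pred, ref):
    """Dice-Sorensen coefficient between two binary masks (1.0 = identical)."""
    pred, ref = np.asarray(pred, bool), np.asarray(ref, bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:                       # both masks empty: treat as perfect
        return 1.0
    return 2.0 * np.logical_and(pred, ref).sum() / denom


pred = np.array([[1, 1], [0, 0]])
ref = np.array([[1, 0], [1, 0]])
print(dice_coefficient(pred, ref))       # 1 shared voxel of 2 + 2 -> 0.5
```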
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.