MGCT: Mutual-Guided Cross-Modality Transformer for Survival Outcome
Prediction using Integrative Histopathology-Genomic Features
- URL: http://arxiv.org/abs/2311.11659v1
- Date: Mon, 20 Nov 2023 10:49:32 GMT
- Title: MGCT: Mutual-Guided Cross-Modality Transformer for Survival Outcome
Prediction using Integrative Histopathology-Genomic Features
- Authors: Mingxin Liu, Yunzan Liu, Hui Cui, Chunquan Li, Jiquan Ma
- Abstract summary: Mutual-Guided Cross-Modality Transformer (MGCT) is a weakly-supervised, attention-based multimodal learning framework.
We propose MGCT to combine histology features and genomic features to model the genotype-phenotype interactions within the tumor microenvironment.
- Score: 2.3942863352287787
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapidly emerging field of deep learning-based computational pathology has
shown promising results in utilizing whole slide images (WSIs) to objectively
prognosticate cancer patients. However, most prognostic methods are currently
limited to either histopathology or genomics alone, which inevitably reduces
their potential to accurately predict patient prognosis. Moreover, integrating
WSIs and genomic features presents three main challenges: (1) the enormous
heterogeneity of gigapixel WSIs, which can reach sizes as large as
150,000x150,000 pixels; (2) the absence of a spatially corresponding
relationship between histopathology images and genomic molecular data; and (3)
the existing early, late, and intermediate multimodal feature fusion strategies
struggle to capture the explicit interactions between WSIs and genomics. To
ameliorate these issues, we propose the Mutual-Guided Cross-Modality
Transformer (MGCT), a weakly-supervised, attention-based multimodal learning
framework that can combine histology features and genomic features to model the
genotype-phenotype interactions within the tumor microenvironment. To validate
the effectiveness of MGCT, we conduct experiments using nearly 3,600 gigapixel
WSIs across five different cancer types sourced from The Cancer Genome Atlas
(TCGA). Extensive experimental results consistently emphasize that MGCT
outperforms the state-of-the-art (SOTA) methods.
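The attention-based fusion described in the abstract can be illustrated with a minimal cross-modality attention step, in which genomic embeddings act as queries over histology patch embeddings. This is a generic, hypothetical sketch of cross-attention in pure Python, not the authors' MGCT implementation; all names, dimensions, and values are illustrative.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attention(genomic, histology):
    """One cross-modality attention step: each genomic embedding (query)
    attends over histology patch embeddings (keys double as values here,
    for brevity)."""
    d = len(histology[0])
    fused = []
    for q in genomic:
        # Scaled dot-product scores between the genomic query and each patch.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in histology]
        weights = softmax(scores)
        # Weighted sum of patch embeddings: a genomics-guided histology summary.
        fused.append([sum(w * v[j] for w, v in zip(weights, histology))
                      for j in range(d)])
    return fused

# Toy example: 2 genomic tokens attending over 3 histology patch tokens (dim 4).
genomic = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]]
histology = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.5, 0.5, 0.0, 0.0]]
out = cross_attention(genomic, histology)
print(len(out), len(out[0]))  # 2 4
```

A "mutual-guided" scheme would additionally run the symmetric direction (histology queries over genomic keys) and combine the two summaries; the single direction above is the core operation.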
Related papers
- Multimodal Prototyping for cancer survival prediction [45.61869793509184]
Multimodal survival methods combining gigapixel histology whole-slide images (WSIs) and transcriptomic profiles are particularly promising for patient prognostication and stratification.
Current approaches involve tokenizing the WSIs into smaller patches (>10,000 patches) and transcriptomics into gene groups, which are then integrated using a Transformer for predicting outcomes.
This process generates many tokens, which leads to high memory requirements for computing attention and complicates post-hoc interpretability analyses.
Our framework outperforms state-of-the-art methods with much less computation while unlocking new interpretability analyses.
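The memory problem noted above comes from attention's quadratic cost in the token count. A back-of-the-envelope sketch, under illustrative assumptions (non-overlapping 256x256 patches, one fp32 attention score matrix; these figures are not taken from any of the papers listed):

```python
# Back-of-the-envelope cost of full self-attention over WSI patch tokens.
# Patch size and fp32 storage are illustrative assumptions.

def patch_count(side_px, patch_px=256):
    # Non-overlapping patches tiling a square WSI of side_px pixels.
    return (side_px // patch_px) ** 2

def attention_matrix_bytes(n_tokens, bytes_per_entry=4):
    # One full n x n attention score matrix.
    return n_tokens * n_tokens * bytes_per_entry

n = patch_count(150_000)                       # 585 x 585 = 342,225 patches
gib = attention_matrix_bytes(n) / 2**30
print(n, round(gib))  # 342225 436 -- hundreds of GiB for one fp32 matrix
```

This is why gigapixel WSIs are handled with patch sampling, prototyping, or hierarchical attention rather than naive full attention over all patches.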
arXiv Detail & Related papers (2024-06-28T20:37:01Z)
- Multimodal Cross-Task Interaction for Survival Analysis in Whole Slide Pathological Images [10.996711454572331]
Survival prediction, utilizing pathological images and genomic profiles, is increasingly important in cancer analysis and prognosis.
Existing multimodal methods often rely on alignment strategies to integrate complementary information.
We propose a Multimodal Cross-Task Interaction (MCTI) framework to explore the intrinsic correlations between subtype classification and survival analysis tasks.
arXiv Detail & Related papers (2024-06-25T02:18:35Z)
- Pathology-genomic fusion via biologically informed cross-modality graph learning for survival analysis [7.996257103473235]
We propose the Pathology-Genome Heterogeneous Graph (PGHG), which integrates whole slide images (WSIs) and bulk RNA-Seq expression data with a heterogeneous graph neural network for cancer survival analysis.
The PGHG consists of a biological knowledge-guided representation learning network and a pathology-genome heterogeneous graph.
We evaluate the model on low-grade gliomas, glioblastoma, and kidney renal papillary cell carcinoma datasets from the Cancer Genome Atlas.
arXiv Detail & Related papers (2024-04-11T09:07:40Z)
- Histo-Genomic Knowledge Distillation For Cancer Prognosis From Histopathology Whole Slide Images [7.5123289730388825]
The Genome-informed Hyper-Attention Network (G-HANet) is capable of effectively distilling histo-genomic knowledge during training.
The network comprises a cross-modal associating branch (CAB) and a hyper-attention survival branch (HSB).
arXiv Detail & Related papers (2024-03-15T06:20:09Z)
- Genetic InfoMax: Exploring Mutual Information Maximization in High-Dimensional Imaging Genetics Studies [50.11449968854487]
Genome-wide association studies (GWAS) are used to identify relationships between genetic variations and specific traits.
Representation learning for imaging genetics is largely under-explored due to the unique challenges posed by GWAS.
We introduce a trans-modal learning framework Genetic InfoMax (GIM) to address the specific challenges of GWAS.
arXiv Detail & Related papers (2023-09-26T03:59:21Z)
- Pathology-and-genomics Multimodal Transformer for Survival Outcome Prediction [43.1748594898772]
We propose a multimodal transformer (PathOmics) integrating pathology and genomics insights into colon-related cancer survival prediction.
We emphasize the unsupervised pretraining to capture the intrinsic interaction between tissue microenvironments in gigapixel whole slide images.
We evaluate our approach on both TCGA colon and rectum cancer cohorts, showing that the proposed approach is competitive and outperforms state-of-the-art studies.
arXiv Detail & Related papers (2023-07-22T00:59:26Z)
- Multimodal Optimal Transport-based Co-Attention Transformer with Global Structure Consistency for Survival Prediction [5.445390550440809]
Survival prediction is a complicated ordinal regression task that aims to predict the ranking risk of death.
Due to the large size of pathological images, it is difficult to effectively represent gigapixel whole slide images (WSIs).
Interactions within tumor microenvironment (TME) in histology are essential for survival analysis.
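The ranking formulation of survival prediction described above is commonly evaluated with the concordance index (c-index), which measures how often the predicted risk ordering agrees with the observed ordering of event times. The following is a minimal pure-Python sketch of the standard definition (right-censored patients form comparable pairs only as the later member), not code from any of the papers listed:

```python
def concordance_index(times, events, risks):
    """Fraction of comparable pairs whose predicted risk ordering matches the
    observed survival ordering. times: observed follow-up times; events: 1 if
    the death was observed, 0 if censored; risks: predicted risk scores
    (higher risk should mean shorter survival). Ties in risk count as half."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable only if i's event was observed and i died first.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy cohort: predicted risks perfectly reverse-order the survival times.
times  = [2.0, 5.0, 7.0, 9.0]
events = [1, 1, 0, 1]           # patient 2 is censored
risks  = [0.9, 0.6, 0.4, 0.1]
print(concordance_index(times, events, risks))  # 1.0
```

A c-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which is why survival papers report it rather than a pointwise regression error.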
arXiv Detail & Related papers (2023-06-14T08:01:24Z)
- Hierarchical Transformer for Survival Prediction Using Multimodality Whole Slide Images and Genomics [63.76637479503006]
Learning good representation of giga-pixel level whole slide pathology images (WSI) for downstream tasks is critical.
This paper proposes a hierarchical-based multimodal transformer framework that learns a hierarchical mapping between pathology images and corresponding genes.
Our architecture requires fewer GPU resources compared with benchmark methods while maintaining better WSI representation ability.
arXiv Detail & Related papers (2022-11-29T23:47:56Z)
- Multi-Scale Hybrid Vision Transformer for Learning Gastric Histology: AI-Based Decision Support System for Gastric Cancer Treatment [50.89811515036067]
Gastric endoscopic screening is an effective way to decide appropriate gastric cancer (GC) treatment at an early stage, reducing the GC-associated mortality rate.
We propose a practical AI system that enables five subclassifications of GC pathology, which can be directly matched to general GC treatment guidance.
arXiv Detail & Related papers (2022-02-17T08:33:52Z)
- Co-Heterogeneous and Adaptive Segmentation from Multi-Source and Multi-Phase CT Imaging Data: A Study on Pathological Liver and Lesion Segmentation [48.504790189796836]
We present a novel segmentation strategy, co-heterogeneous and adaptive segmentation (CHASe).
We propose a versatile framework that fuses appearance based semi-supervision, mask based adversarial domain adaptation, and pseudo-labeling.
CHASe can further improve pathological liver mask Dice-Sorensen coefficients by 4.2% to 9.4%.
arXiv Detail & Related papers (2020-05-27T06:58:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.