Incorporating intratumoral heterogeneity into weakly-supervised deep
learning models via variance pooling
- URL: http://arxiv.org/abs/2206.08885v1
- Date: Fri, 17 Jun 2022 16:35:35 GMT
- Title: Incorporating intratumoral heterogeneity into weakly-supervised deep
learning models via variance pooling
- Authors: Iain Carmichael, Andrew H. Song, Richard J. Chen, Drew F.K.
Williamson, Tiffany Y. Chen, Faisal Mahmood
- Abstract summary: Supervised learning tasks such as cancer survival prediction from gigapixel whole slide images (WSIs) are a critical challenge in computational pathology.
We develop a novel variance pooling architecture that enables a MIL model to incorporate intratumoral heterogeneity into its predictions.
An empirical study with 4,479 gigapixel WSIs from the Cancer Genome Atlas shows that adding variance pooling onto MIL frameworks improves survival prediction performance for five cancer types.
- Score: 5.606290756924216
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Supervised learning tasks such as cancer survival prediction from gigapixel
whole slide images (WSIs) are a critical challenge in computational pathology
that requires modeling complex features of the tumor microenvironment. These
learning tasks are often solved with deep multi-instance learning (MIL) models
that do not explicitly capture intratumoral heterogeneity. We develop a novel
variance pooling architecture that enables a MIL model to incorporate
intratumoral heterogeneity into its predictions. Two interpretability tools
based on representative patches are illustrated to probe the biological signals
captured by these models. An empirical study with 4,479 gigapixel WSIs from the
Cancer Genome Atlas shows that adding variance pooling onto MIL frameworks
improves survival prediction performance for five cancer types.
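The pooling idea described in the abstract can be illustrated with a minimal sketch: alongside the usual attention-weighted mean of patch embeddings used in MIL, the bag-level representation is augmented with attention-weighted variance statistics of projected embeddings, so that within-slide (intratumoral) spread becomes visible to the predictor. All function and parameter names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_with_variance_pooling(H, V, w, P):
    """Attention-based MIL pooling augmented with variance pooling (sketch).

    H : (n, d) patch embeddings for one WSI (the "bag").
    V : (k, d), w : (k,) parameters of a tanh attention module.
    P : (d, p) projection directions for variance pooling.
    Returns the concatenation of the attention-weighted mean embedding
    and the attention-weighted variances of p one-dimensional projections.
    """
    # Attention weights over the n patches.
    a = softmax(np.tanh(H @ V.T) @ w)       # (n,)
    # Standard MIL pooling: attention-weighted average embedding.
    z_mean = a @ H                          # (d,)
    # Variance pooling: project each patch onto p directions, then take the
    # attention-weighted variance along each direction. High variance signals
    # intratumoral heterogeneity that mean pooling alone discards.
    proj = H @ P                            # (n, p)
    mu = a @ proj                           # (p,)
    z_var = a @ (proj - mu) ** 2            # (p,)
    return np.concatenate([z_mean, z_var])  # (d + p,)

# Toy usage: 5 patches, 4-dim embeddings, 2 projection directions.
rng = np.random.default_rng(0)
H = rng.normal(size=(5, 4))
V = rng.normal(size=(3, 4))
w = rng.normal(size=3)
P = rng.normal(size=(4, 2))
z = attention_mil_with_variance_pooling(H, V, w, P)
print(z.shape)  # (6,)
```

In a trained model, `V`, `w`, and `P` would be learned parameters and the concatenated vector would feed a survival-prediction head; the sketch only shows the pooling arithmetic.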
Related papers
- Finding Regions of Interest in Whole Slide Images Using Multiple Instance Learning [0.23301643766310368]
Whole Slide Images (WSI) represent a particular challenge to AI-based/AI-mediated analysis because pathology labeling is typically done at the slide level rather than the tile level.
We propose a weakly supervised Multiple Instance Learning (MIL) approach to accurately predict the overall cancer phenotype.
arXiv Detail & Related papers (2024-04-01T19:33:41Z)
- MGCT: Mutual-Guided Cross-Modality Transformer for Survival Outcome Prediction using Integrative Histopathology-Genomic Features [2.3942863352287787]
Mutual-Guided Cross-Modality Transformer (MGCT) is a weakly-supervised, attention-based multimodal learning framework.
We propose MGCT to combine histology features and genomic features to model the genotype-phenotype interactions within the tumor microenvironment.
arXiv Detail & Related papers (2023-11-20T10:49:32Z)
- Optimizations of Autoencoders for Analysis and Classification of Microscopic In Situ Hybridization Images [68.8204255655161]
We propose a deep-learning framework to detect and classify areas of microscopic images with similar levels of gene expression.
The data we analyze require an unsupervised learning model, for which we employ a type of artificial neural network: deep-learning autoencoders.
arXiv Detail & Related papers (2023-04-19T13:45:28Z)
- SC-MIL: Supervised Contrastive Multiple Instance Learning for Imbalanced Classification in Pathology [2.854576370929018]
Machine learning problems in medical imaging often deal with rare diseases.
In pathology images, there is another level of imbalance, where given a positively labeled Whole Slide Image (WSI), only a fraction of pixels within it contribute to the positive label.
We propose a joint-training MIL framework in the presence of label imbalance that progressively transitions from learning bag-level representations to optimal classifier learning.
arXiv Detail & Related papers (2023-03-23T16:28:15Z)
- Patched Diffusion Models for Unsupervised Anomaly Detection in Brain MRI [55.78588835407174]
We propose a method that reformulates the generation task of diffusion models as a patch-based estimation of healthy brain anatomy.
We evaluate our approach on data of tumors and multiple sclerosis lesions and demonstrate a relative improvement of 25.1% compared to existing baselines.
arXiv Detail & Related papers (2023-03-07T09:40:22Z)
- Hierarchical Transformer for Survival Prediction Using Multimodality Whole Slide Images and Genomics [63.76637479503006]
Learning good representations of giga-pixel whole slide pathology images (WSI) for downstream tasks is critical.
This paper proposes a hierarchical-based multimodal transformer framework that learns a hierarchical mapping between pathology images and corresponding genes.
Our architecture requires fewer GPU resources compared with benchmark methods while maintaining better WSI representation ability.
arXiv Detail & Related papers (2022-11-29T23:47:56Z)
- Benchmarking Machine Learning Robustness in Covid-19 Genome Sequence Classification [109.81283748940696]
We introduce several ways to perturb SARS-CoV-2 genome sequences to mimic the error profiles of common sequencing platforms such as Illumina and PacBio.
We show that some simulation-based approaches are more robust (and accurate) than others for specific embedding methods against certain adversarial attacks on the input sequences.
arXiv Detail & Related papers (2022-07-18T19:16:56Z)
- On the Generalization and Adaption Performance of Causal Models [99.64022680811281]
Differentiable causal discovery proposes factorizing the data-generating process into a set of modules.
We study the generalization and adaption performance of such modular neural causal models.
Our analysis shows that the modular neural causal models outperform other models on both zero and few-shot adaptation in low data regimes.
arXiv Detail & Related papers (2022-06-09T17:12:32Z)
- A robust and lightweight deep attention multiple instance learning algorithm for predicting genetic alterations [4.674211520843232]
We propose a novel Attention-based Multiple Instance Mutation Learning (AMIML) model for predicting gene mutations.
AMIML comprises successive 1-D convolutional layers, a decoder, and a residual weight connection to facilitate further integration of a lightweight attention mechanism.
AMIML demonstrated excellent robustness, not only outperforming all five baseline algorithms on the vast majority of the tested genes but also providing near-best performance for the other seven genes.
arXiv Detail & Related papers (2022-05-31T15:45:29Z)
- Pan-Cancer Integrative Histology-Genomic Analysis via Interpretable Multimodal Deep Learning [4.764927152701701]
We integrate whole slide pathology images, RNA-seq abundance, copy number variation, and mutation data from 5,720 patients across 14 major cancer types.
Our interpretable, weakly-supervised, multimodal deep learning algorithm is able to fuse these heterogeneous modalities for predicting outcomes.
We analyze morphologic and molecular markers responsible for prognostic predictions across all cancer types.
arXiv Detail & Related papers (2021-08-04T20:40:05Z)
- A multi-stage machine learning model on diagnosis of esophageal manometry [50.591267188664666]
The framework includes deep-learning models at the swallow-level stage and feature-based machine learning models at the study-level stage.
This is the first artificial-intelligence-style model to automatically predict the CC diagnosis of an HRM study from raw multi-swallow data.
arXiv Detail & Related papers (2021-06-25T20:09:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.