Spatially Resolved Gene Expression Prediction from H&E Histology Images
via Bi-modal Contrastive Learning
- URL: http://arxiv.org/abs/2306.01859v2
- Date: Fri, 27 Oct 2023 13:54:32 GMT
- Title: Spatially Resolved Gene Expression Prediction from H&E Histology Images
via Bi-modal Contrastive Learning
- Authors: Ronald Xie, Kuan Pang, Sai W. Chung, Catia T. Perciani, Sonya A.
MacParland, Bo Wang, Gary D. Bader
- Abstract summary: We present BLEEP (Bi-modaL Embedding for Expression Prediction), a bi-modal embedding framework capable of generating spatially resolved gene expression profiles.
BLEEP uses contrastive learning to construct a low-dimensional joint embedding space from a reference dataset using paired image and expression profiles at micrometer resolution.
We demonstrate BLEEP's effectiveness in gene expression prediction by benchmarking its performance on a human liver tissue dataset captured using the 10x Visium platform.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Histology imaging is an important tool in medical diagnosis and research,
enabling the examination of tissue structure and composition at the microscopic
level. Understanding the underlying molecular mechanisms of tissue architecture
is critical in uncovering disease mechanisms and developing effective
treatments. Gene expression profiling provides insight into the molecular
processes underlying tissue architecture, but the process can be time-consuming
and expensive. We present BLEEP (Bi-modaL Embedding for Expression Prediction),
a bi-modal embedding framework capable of generating spatially resolved gene
expression profiles of whole-slide Hematoxylin and eosin (H&E) stained
histology images. BLEEP uses contrastive learning to construct a
low-dimensional joint embedding space from a reference dataset using paired
image and expression profiles at micrometer resolution. With this approach, the
gene expression of any query image patch can be imputed using the expression
profiles from the reference dataset. We demonstrate BLEEP's effectiveness in
gene expression prediction by benchmarking its performance on a human liver
tissue dataset captured using the 10x Visium platform, where it achieves
significant improvements over existing methods. Our results demonstrate the
potential of BLEEP to provide insights into the molecular mechanisms underlying
tissue architecture, with important implications in diagnosis and research of
various diseases. The proposed approach can significantly reduce the time and
cost associated with gene expression profiling, opening up new avenues for
high-throughput analysis of histology images for both research and clinical
applications.
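The abstract describes a two-step idea: align paired image-patch and expression embeddings in a joint space with a contrastive objective, then impute a query patch's expression from its nearest reference neighbours in that space. The sketch below illustrates this with a CLIP-style symmetric InfoNCE loss and simple k-nearest-neighbour averaging; all function names, dimensions, and toy data are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a BLEEP-style pipeline: contrastive alignment of
# image/expression embeddings, then expression imputation from references.
import numpy as np

def softmax_xent(logits, targets):
    """Cross-entropy over the rows of a logit matrix."""
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

def contrastive_loss(img_emb, expr_emb, temperature=0.07):
    """Symmetric InfoNCE: matched image/expression pairs lie on the diagonal."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    expr = expr_emb / np.linalg.norm(expr_emb, axis=1, keepdims=True)
    logits = img @ expr.T / temperature          # (N, N) cosine similarities
    targets = np.arange(len(logits))
    return 0.5 * (softmax_xent(logits, targets) + softmax_xent(logits.T, targets))

def impute_expression(query_emb, ref_emb, ref_expr, k=3):
    """Average the expression profiles of the k nearest reference embeddings."""
    q = query_emb / np.linalg.norm(query_emb)
    r = ref_emb / np.linalg.norm(ref_emb, axis=1, keepdims=True)
    nearest = np.argsort(r @ q)[-k:]             # indices of k most similar spots
    return ref_expr[nearest].mean(axis=0)

# Toy reference set: 8 paired spots with 16-d embeddings and 100-gene profiles.
rng = np.random.default_rng(0)
ref_img = rng.normal(size=(8, 16))
ref_expr_emb = rng.normal(size=(8, 16))
ref_expr = rng.poisson(5.0, size=(8, 100)).astype(float)

loss = contrastive_loss(ref_img, ref_expr_emb)
pred = impute_expression(rng.normal(size=16), ref_img, ref_expr, k=3)
```

In practice the embeddings would come from trained image and expression encoders, and the reference set would be the paired Visium spots; the imputation step requires no decoder, only a similarity search over the reference embeddings.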
Related papers
- RankByGene: Gene-Guided Histopathology Representation Learning Through Cross-Modal Ranking Consistency [11.813883157319381]
We propose a novel framework that aligns gene and image features using a ranking-based alignment loss.
To further enhance the alignment's stability, we employ self-supervised knowledge distillation with a teacher-student network architecture.
arXiv Detail & Related papers (2024-11-22T17:08:28Z)
- Multiplex Imaging Analysis in Pathology: a Comprehensive Review on Analytical Approaches and Digital Toolkits [0.7968706282619793]
Multiplexed imaging allows for simultaneous visualization of multiple biomarkers in a single section.
Data from multiplexed imaging requires sophisticated computational methods for preprocessing, segmentation, feature extraction, and spatial analysis.
PathML is an AI-powered platform that streamlines image analysis, making complex interpretation accessible for clinical and research settings.
arXiv Detail & Related papers (2024-11-01T18:02:41Z)
- Spatially Resolved Gene Expression Prediction from Histology via Multi-view Graph Contrastive Learning with HSIC-bottleneck Regularization [18.554968935341236]
We propose a Multi-view Graph Contrastive Learning framework with HSIC-bottleneck Regularization (ST-GCHB) to help impute the gene expression of queried imaging spots by considering their spatial dependency.
arXiv Detail & Related papers (2024-06-18T03:07:25Z)
- Morphological Profiling for Drug Discovery in the Era of Deep Learning [13.307277432389496]
We provide a comprehensive overview of the recent advances in the field of morphological profiling.
We place a particular emphasis on the application of deep learning in this pipeline.
arXiv Detail & Related papers (2023-12-13T05:08:32Z)
- Single-Cell Deep Clustering Method Assisted by Exogenous Gene Information: A Novel Approach to Identifying Cell Types [50.55583697209676]
We develop an attention-enhanced graph autoencoder, which is designed to efficiently capture the topological features between cells.
During the clustering process, we integrate both sets of information and reconstruct the features of both cells and genes to generate a discriminative representation.
This research offers enhanced insights into the characteristics and distribution of cells, thereby laying the groundwork for early diagnosis and treatment of diseases.
arXiv Detail & Related papers (2023-11-28T09:14:55Z)
- Radiology Report Generation Using Transformers Conditioned with Non-imaging Data [55.17268696112258]
This paper proposes a novel multi-modal transformer network that integrates chest x-ray (CXR) images and associated patient demographic information.
The proposed network uses a convolutional neural network to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information.
arXiv Detail & Related papers (2023-11-18T14:52:26Z)
- Genetic InfoMax: Exploring Mutual Information Maximization in High-Dimensional Imaging Genetics Studies [50.11449968854487]
Genome-wide association studies (GWAS) are used to identify relationships between genetic variations and specific traits.
Representation learning for imaging genetics is largely under-explored due to the unique challenges posed by GWAS.
We introduce a trans-modal learning framework Genetic InfoMax (GIM) to address the specific challenges of GWAS.
arXiv Detail & Related papers (2023-09-26T03:59:21Z)
- SEPAL: Spatial Gene Expression Prediction from Local Graphs [1.4523812806185954]
We present SEPAL, a new model for predicting genetic profiles from visual tissue appearance.
Our method exploits the biological biases of the problem by directly supervising relative differences with respect to mean expression.
We propose a novel benchmark that aims to better define the task by following current best practices in transcriptomics.
arXiv Detail & Related papers (2023-09-02T23:24:02Z)
- Unsupervised ensemble-based phenotyping helps enhance the discoverability of genes related to heart morphology [57.25098075813054]
We propose a new framework for gene discovery entitled Unsupervised Phenotype Ensembles.
It builds a redundant yet highly expressive representation by pooling a set of phenotypes learned in an unsupervised manner.
These phenotypes are then analyzed via genome-wide association studies (GWAS), retaining only highly confident and stable associations.
arXiv Detail & Related papers (2023-01-07T18:36:44Z)
- fMRI from EEG is only Deep Learning away: the use of interpretable DL to unravel EEG-fMRI relationships [68.8204255655161]
We present an interpretable domain grounded solution to recover the activity of several subcortical regions from multichannel EEG data.
We recover individual spatial and time-frequency patterns of scalp EEG predictive of the hemodynamic signal in the subcortical nuclei.
arXiv Detail & Related papers (2022-10-23T15:11:37Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.