SEPAL: Spatial Gene Expression Prediction from Local Graphs
- URL: http://arxiv.org/abs/2309.01036v3
- Date: Wed, 10 Jan 2024 22:30:29 GMT
- Title: SEPAL: Spatial Gene Expression Prediction from Local Graphs
- Authors: Gabriel Mejia, Paula Cárdenas, Daniela Ruiz, Angela Castillo, Pablo Arbeláez
- Abstract summary: We present SEPAL, a new model for predicting genetic profiles from visual tissue appearance.
Our method exploits the biological biases of the problem by directly supervising relative differences with respect to mean expression.
We propose a novel benchmark that aims to better define the task by following current best practices in transcriptomics.
- Score: 1.4523812806185954
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Spatial transcriptomics is an emerging technology that aligns histopathology
images with spatially resolved gene expression profiling. It holds the
potential for understanding many diseases but faces significant bottlenecks,
such as the need for specialized equipment and domain expertise. In this work, we present
SEPAL, a new model for predicting genetic profiles from visual tissue
appearance. Our method exploits the biological biases of the problem by
directly supervising relative differences with respect to mean expression, and
leverages local visual context at every coordinate to make predictions using a
graph neural network. This approach closes the gap between complete locality
and complete globality in current methods. In addition, we propose a novel
benchmark that aims to better define the task by following current best
practices in transcriptomics and restricting the prediction variables to only
those with clear spatial patterns. Our extensive evaluation on two different
human breast cancer datasets indicates that SEPAL outperforms previous
state-of-the-art methods and other mechanisms for including spatial context.
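
To make the two core ideas concrete, here is a minimal, hedged sketch (illustrative names and shapes, not the authors' code): a one-round message-passing network over a local spot graph predicts per-gene deviations from the training-set mean expression, and the mean is added back at inference.

```python
# Minimal sketch of SEPAL's two core ideas (all names/shapes illustrative):
# (1) aggregate local visual context with one round of message passing,
# (2) supervise the *deviation* from each gene's mean expression.
import torch
import torch.nn as nn

class LocalGraphRegressor(nn.Module):
    def __init__(self, in_dim: int, hidden: int, n_genes: int):
        super().__init__()
        self.encode = nn.Linear(in_dim, hidden)
        self.message = nn.Linear(hidden, hidden)
        self.head = nn.Linear(hidden, n_genes)

    def forward(self, patch_feats, adj):
        # patch_feats: (n_spots, in_dim) visual features of tissue patches
        # adj: (n_spots, n_spots) row-normalized adjacency of the local graph
        h = torch.relu(self.encode(patch_feats))
        h = h + torch.relu(adj @ self.message(h))   # neighbor aggregation
        return self.head(h)                         # predicted deltas

n_spots, in_dim, n_genes = 32, 128, 10
feats = torch.randn(n_spots, in_dim)
expr = torch.rand(n_spots, n_genes)
gene_mean = expr.mean(dim=0, keepdim=True)   # per-gene mean on the train set
adj = torch.eye(n_spots)                     # placeholder: use k-NN over spot coordinates

model = LocalGraphRegressor(in_dim, 64, n_genes)
delta_pred = model(feats, adj)
loss = nn.functional.mse_loss(delta_pred, expr - gene_mean)
# At inference, the final prediction adds the mean back: gene_mean + delta_pred
```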
Related papers
- SpaRED benchmark: Enhancing Gene Expression Prediction from Histology Images with Spatial Transcriptomics Completion [2.032350440475489]
We present a systematically curated and processed database collected from 26 public sources.
We also propose a state-of-the-art transformer-based completion technique for inferring missing gene expression.
Our contributions constitute the most comprehensive benchmark of gene expression prediction from histology images to date.
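
The completion idea can be illustrated with a small masked-value Transformer; this is a sketch of the general technique, not the SpaRED implementation, and all module names are hypothetical.

```python
# Illustrative masked-value completion with a Transformer (not SpaRED's code):
# treat each gene as a token, zero out missing values, and regress them back.
import torch
import torch.nn as nn

class GeneCompleter(nn.Module):
    def __init__(self, n_genes: int, d_model: int = 32):
        super().__init__()
        self.gene_emb = nn.Embedding(n_genes, d_model)  # one token per gene
        self.val_proj = nn.Linear(1, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)

    def forward(self, expr, missing_mask):
        # expr: (batch, n_genes); missing_mask: (batch, n_genes), True = missing
        vals = expr.masked_fill(missing_mask, 0.0).unsqueeze(-1)
        ids = torch.arange(expr.size(1), device=expr.device)
        tokens = self.val_proj(vals) + self.gene_emb(ids)
        out = self.encoder(tokens)            # context across all genes
        return self.head(out).squeeze(-1)     # completed expression values

# Training would compute the loss only on the masked (missing) positions.
```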
arXiv Detail & Related papers (2024-07-17T21:28:20Z)
- Multimodal contrastive learning for spatial gene expression prediction using histology images [13.47034080678041]
We propose mclSTExp, a multimodal contrastive learning framework with a Transformer and a DenseNet-121 encoder for Spatial Transcriptomics Expression prediction.
mclSTExp achieves superior performance in predicting spatial gene expression.
It has shown promise in interpreting cancer-specific overexpressed genes, elucidating immune-related genes, and identifying specialized spatial domains annotated by pathologists.
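
Such multimodal contrastive methods typically optimize a symmetric InfoNCE objective between paired image and expression embeddings; a generic sketch follows (an illustration of the standard pattern, not mclSTExp's exact loss).

```python
# Generic symmetric contrastive (CLIP-style) loss between image-patch and
# expression embeddings; matching pairs sit on the diagonal of the logits.
import torch
import torch.nn.functional as F

def symmetric_contrastive_loss(img_emb, expr_emb, temperature=0.07):
    img = F.normalize(img_emb, dim=-1)      # (batch, d)
    expr = F.normalize(expr_emb, dim=-1)    # (batch, d)
    logits = img @ expr.t() / temperature   # pairwise cosine similarities
    targets = torch.arange(img.size(0), device=img.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```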
arXiv Detail & Related papers (2024-07-11T06:33:38Z)
- Spatially Resolved Gene Expression Prediction from Histology via Multi-view Graph Contrastive Learning with HSIC-bottleneck Regularization [18.554968935341236]
We propose a Multi-view Graph Contrastive Learning framework with HSIC-bottleneck Regularization (ST-GCHB) that helps impute the gene expression of queried imaging spots by considering their spatial dependency.
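
HSIC-bottleneck regularizers build on the Hilbert-Schmidt Independence Criterion; below is a minimal sketch of the standard biased estimator HSIC_b(X, Y) = tr(KHLH)/(n-1)^2 with RBF kernels (the kernel choice and bandwidth are illustrative assumptions).

```python
# Biased HSIC estimator: HSIC_b(X, Y) = tr(K H L H) / (n - 1)^2,
# where K, L are kernel matrices and H is the centering matrix.
import numpy as np

def rbf_kernel(X, sigma=1.0):
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T  # pairwise squared distances
    return np.exp(-d2 / (2.0 * sigma**2))

def hsic(X, Y, sigma=1.0):
    n = X.shape[0]
    K, L = rbf_kernel(X, sigma), rbf_kernel(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n             # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```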
arXiv Detail & Related papers (2024-06-18T03:07:25Z)
- Genetic InfoMax: Exploring Mutual Information Maximization in High-Dimensional Imaging Genetics Studies [50.11449968854487]
Genome-wide association studies (GWAS) are used to identify relationships between genetic variations and specific traits.
Representation learning for imaging genetics is largely under-explored due to the unique challenges posed by GWAS.
We introduce a trans-modal learning framework, Genetic InfoMax (GIM), to address the specific challenges of GWAS.
arXiv Detail & Related papers (2023-09-26T03:59:21Z)
- Spatially Resolved Gene Expression Prediction from H&E Histology Images via Bi-modal Contrastive Learning [4.067498002241427]
We present BLEEP (Bi-modaL Embedding for Expression Prediction), a bi-modal embedding framework capable of generating spatially resolved gene expression profiles.
BLEEP uses contrastive learning to construct a low-dimensional joint embedding space from a reference dataset using paired image and expression profiles at micrometer resolution.
We demonstrate BLEEP's effectiveness in gene expression prediction by benchmarking its performance on a human liver tissue dataset captured using the 10x Visium platform.
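
Prediction in such a joint embedding space typically works by retrieval: embed the query patch, find its nearest reference anchors, and aggregate their measured expression. A hedged sketch (the function name and the choice of k are assumptions):

```python
# Retrieval-based expression imputation in a joint embedding space (sketch).
import torch
import torch.nn.functional as F

def predict_expression(query_emb, ref_embs, ref_expr, k=5):
    q = F.normalize(query_emb, dim=-1)   # (d,) embedded query image patch
    r = F.normalize(ref_embs, dim=-1)    # (n_ref, d) reference embeddings
    sims = r @ q                         # cosine similarity to each anchor
    topk = sims.topk(k).indices
    return ref_expr[topk].mean(dim=0)    # average the k nearest profiles
```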
arXiv Detail & Related papers (2023-06-02T18:27:26Z)
- Adaptive Face Recognition Using Adversarial Information Network [57.29464116557734]
Face recognition models often degenerate when training data differ from testing data.
We propose a novel adversarial information network (AIN) to address this problem.
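
A common building block for this kind of adversarial adaptation is a gradient reversal layer, which trains features to fool a domain discriminator; this is an illustration of the standard pattern, not necessarily AIN's exact mechanism.

```python
# Standard gradient reversal layer: identity on the forward pass, negated
# (and scaled) gradient on the backward pass, so the feature extractor is
# pushed to *maximize* the domain classifier's loss.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

# Usage: pass features through the reversal before the domain classifier.
# reversed_feats = GradReverse.apply(feats, 1.0)
```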
arXiv Detail & Related papers (2023-05-23T02:14:11Z)
- Spatial machine-learning model diagnostics: a model-agnostic distance-based approach [91.62936410696409]
This contribution proposes spatial prediction error profiles (SPEPs) and spatial variable importance profiles (SVIPs) as novel model-agnostic assessment and interpretation tools.
The SPEPs and SVIPs of geostatistical methods, linear models, random forest, and hybrid algorithms show striking differences and also relevant similarities.
The novel diagnostic tools enrich the toolkit of spatial data science, and may improve ML model interpretation, selection, and design.
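
The idea behind a spatial prediction error profile can be sketched as a distance-buffered leave-one-out loop: hold out each point, drop training points within radius r, and record how the error grows with r (procedure details here are assumptions, not the authors' exact method).

```python
# Distance-based error profile sketch: error as a function of the buffer
# radius r between the held-out point and the nearest allowed training point.
import numpy as np

def spatial_error_profile(coords, y, fit_predict, radii):
    # fit_predict(train_X, train_y, test_X) -> prediction, user-supplied model
    profile = []
    for r in radii:
        errs = []
        for i in range(len(y)):
            d = np.linalg.norm(coords - coords[i], axis=1)
            train = d > r                 # buffer: exclude nearby points
            if train.sum() < 2:
                continue
            y_hat = fit_predict(coords[train], y[train], coords[i:i + 1])
            errs.append(abs(float(y_hat) - y[i]))
        profile.append((r, float(np.mean(errs))))  # mean abs. error at radius r
    return profile
```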
arXiv Detail & Related papers (2021-11-13T01:50:36Z)
- All You Need is Color: Image based Spatial Gene Expression Prediction using Neural Stain Learning [11.9045433112067]
We propose a "stain-aware" machine learning approach for prediction of spatial transcriptomic gene expression profiles.
We have found that the gene expression predictions from the proposed approach show higher correlations with true expression values obtained through sequencing.
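
For background, stain-aware features start from the classical fixed-matrix H&E deconvolution of Ruifrok and Johnston, sketched below; the paper learns stain parameters instead, which this fixed-matrix sketch does not attempt.

```python
# Classical fixed-matrix H&E stain deconvolution (Ruifrok & Johnston):
# convert RGB to optical density, then unmix with a pseudo-inverse.
import numpy as np

HE_STAINS = np.array([[0.650, 0.704, 0.286],   # hematoxylin OD vector (RGB)
                      [0.072, 0.990, 0.105]])  # eosin OD vector (RGB)

def stain_concentrations(rgb):
    # rgb: (h, w, 3) uint8 image; returns (h, w, 2) per-pixel stain amounts
    od = -np.log10((rgb.astype(np.float64) + 1.0) / 256.0)  # optical density
    conc = od.reshape(-1, 3) @ np.linalg.pinv(HE_STAINS)
    return conc.reshape(rgb.shape[0], rgb.shape[1], 2)
```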
arXiv Detail & Related papers (2021-08-23T23:43:38Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
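
A generic form of such a cross reconstruction loss decodes each view from the other view's latent code, encouraging the latents to carry the shared content (an illustration of the concept, not the paper's exact formulation):

```python
# Generic cross-reconstruction: decode view 1 from view 2's latent code and
# vice versa, so both latents must capture the common information.
import torch.nn.functional as F

def cross_reconstruction_loss(x1, x2, z1, z2, dec1, dec2):
    # dec1/dec2: decoders for views 1 and 2; z1/z2: encoded latents
    return F.mse_loss(dec1(z2), x1) + F.mse_loss(dec2(z1), x2)
```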
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Adversarial Graph Representation Adaptation for Cross-Domain Facial Expression Recognition [86.25926461936412]
We propose a novel Adversarial Graph Representation Adaptation (AGRA) framework that unifies graph representation propagation with adversarial learning for cross-domain holistic-local feature co-adaptation.
We conduct extensive and fair experiments on several popular benchmarks and show that the proposed AGRA framework achieves superior performance over previous state-of-the-art methods.
arXiv Detail & Related papers (2020-08-03T13:27:24Z)
- Structured Landmark Detection via Topology-Adapting Deep Graph Learning [75.20602712947016]
We present a new topology-adapting deep graph learning approach for accurate anatomical facial and medical landmark detection.
The proposed method constructs graph signals leveraging both local image features and global shape features.
Experiments are conducted on three public facial image datasets (WFLW, 300W, and COFW-68) as well as three real-world X-ray medical datasets (Cephalometric (public), Hand, and Pelvis).
arXiv Detail & Related papers (2020-04-17T11:55:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.