Exemplar Guided Deep Neural Network for Spatial Transcriptomics Analysis
of Gene Expression Prediction
- URL: http://arxiv.org/abs/2210.16721v1
- Date: Sun, 30 Oct 2022 02:22:20 GMT
- Title: Exemplar Guided Deep Neural Network for Spatial Transcriptomics Analysis
of Gene Expression Prediction
- Authors: Yan Yang and Md Zakir Hossain and Eric A Stone and Shafin Rahman
- Abstract summary: This paper proposes an Exemplar Guided Network (EGN) to accurately and efficiently predict gene expression directly from each window of a tissue slide image.
Our EGN framework consists of three main components: 1) an extractor to structure a representation space for unsupervised exemplar retrieval; 2) a vision transformer (ViT) backbone to progressively extract representations of the input window; and 3) an Exemplar Bridging (EB) block to adaptively revise the intermediate ViT representations using the nearest exemplars.
- Score: 9.192169460752805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spatial transcriptomics (ST) is essential for understanding diseases and
developing novel treatments. It measures gene expression of each fine-grained
area (i.e., different windows) in the tissue slide with low throughput. This
paper proposes an Exemplar Guided Network (EGN) to accurately and efficiently
predict gene expression directly from each window of a tissue slide image. We
apply exemplar learning to dynamically boost gene expression prediction from
nearest/similar exemplars of a given tissue slide image window. Our EGN
framework consists of three main components: 1) an extractor to structure a
representation space for unsupervised exemplar retrieval; 2) a vision
transformer (ViT) backbone to progressively extract representations of the
input window; and 3) an Exemplar Bridging (EB) block to adaptively revise the
intermediate ViT representations using the nearest exemplars. Finally, we
complete the gene expression prediction task with a simple attention-based
prediction block. Experiments on standard benchmark datasets demonstrate the
superiority of our approach over past state-of-the-art (SOTA) methods.
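The retrieve-then-revise idea in the abstract can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the function names (`retrieve_exemplars`, `exemplar_bridge`), the residual softmax-weighted update, and all shapes are assumptions.

```python
import numpy as np

def retrieve_exemplars(query, bank, k=3):
    """Return the k nearest exemplar embeddings to `query`
    (L2 distance over a pre-built representation bank)."""
    dists = np.linalg.norm(bank - query, axis=1)
    idx = np.argsort(dists)[:k]
    return bank[idx]

def exemplar_bridge(rep, exemplars):
    """Revise an intermediate representation with a softmax-weighted
    combination of its nearest exemplars (hypothetical EB-style update)."""
    scores = exemplars @ rep                      # similarity logits
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return rep + weights @ exemplars              # residual revision

rng = np.random.default_rng(0)
bank = rng.normal(size=(100, 16))   # exemplar representation bank
query = rng.normal(size=16)         # one slide-window representation
revised = exemplar_bridge(query, retrieve_exemplars(query, bank, k=5))
print(revised.shape)                # (16,)
```

In the paper the retrieval space is learned by the extractor and the revision happens at multiple ViT depths; here a single update over a fixed bank stands in for both.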
Related papers
- Boundary-Guided Learning for Gene Expression Prediction in Spatial Transcriptomics [7.763803040383128]
We propose a framework named BG-TRIPLEX, which leverages boundary information extracted from pathological images as guiding features to enhance gene expression prediction.
Our framework consistently outperforms existing methods in terms of Pearson Correlation Coefficient (PCC).
This method highlights the crucial role of boundary features in understanding the complex interactions between WSI and gene expression.
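Since PCC is the evaluation metric these gene-expression papers report, a minimal per-gene computation may be useful; the function name and the (spots × genes) layout are assumptions for illustration.

```python
import numpy as np

def pearson_per_gene(pred, true):
    """Per-gene Pearson correlation between predicted and measured
    expression; inputs have shape (n_spots, n_genes), output (n_genes,)."""
    p = pred - pred.mean(axis=0)
    t = true - true.mean(axis=0)
    num = (p * t).sum(axis=0)
    den = np.sqrt((p ** 2).sum(axis=0) * (t ** 2).sum(axis=0))
    return num / den

rng = np.random.default_rng(1)
true = rng.normal(size=(50, 4))                 # measured expression
pred = true + 0.1 * rng.normal(size=(50, 4))    # near-perfect predictions
pcc = pearson_per_gene(pred, true)              # values close to 1
print(pcc.shape)                                # (4,)
```

Reported numbers are typically this quantity averaged over genes (or over the top highly-expressed genes, depending on the benchmark protocol).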
arXiv Detail & Related papers (2024-12-05T11:09:11Z)
- MERGE: Multi-faceted Hierarchical Graph-based GNN for Gene Expression Prediction from Whole Slide Histopathology Images [6.717786190771243]
We introduce MERGE (Multi-faceted hiErarchical gRaph for Gene Expressions), which combines a hierarchical graph construction strategy with graph neural networks (GNN) to improve gene expression predictions from whole slide images.
By clustering tissue image patches based on both spatial and morphological features, our approach fosters interactions between distant tissue locations during GNN learning.
As an additional contribution, we evaluate different data smoothing techniques that are necessary to mitigate artifacts in ST data, often caused by technical imperfections.
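The clustering step described above — grouping patches by spatial and morphological features jointly — can be sketched as follows. This is an assumed simplification: the feature blend (`alpha`), the minimal k-means loop, and all dimensions are illustrative, not MERGE's actual graph construction.

```python
import numpy as np

def joint_features(coords, morph, alpha=0.5):
    """Concatenate standardized spatial coordinates and morphological
    embeddings so clustering sees both cues (alpha balances the two)."""
    z = lambda x: (x - x.mean(0)) / (x.std(0) + 1e-8)
    return np.hstack([alpha * z(coords), (1 - alpha) * z(morph)])

def kmeans(x, k, iters=20, seed=0):
    """Minimal k-means: returns one cluster label per patch."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(0)
    return labels

rng = np.random.default_rng(2)
coords = rng.uniform(size=(60, 2))   # patch centroids on the slide
morph = rng.normal(size=(60, 8))     # patch appearance embeddings
labels = kmeans(joint_features(coords, morph), k=4)
print(labels.shape)                  # (60,)
```

Clustering on the joint features is what lets morphologically similar but spatially distant patches land in the same group, which is the interaction the GNN then exploits.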
arXiv Detail & Related papers (2024-12-03T17:32:05Z)
- SpaRED benchmark: Enhancing Gene Expression Prediction from Histology Images with Spatial Transcriptomics Completion [2.032350440475489]
We present a systematically curated and processed database collected from 26 public sources.
We also propose a state-of-the-art transformer based completion technique for inferring missing gene expression.
Our contributions constitute the most comprehensive benchmark of gene expression prediction from histology images to date.
arXiv Detail & Related papers (2024-07-17T21:28:20Z)
- S^2Former-OR: Single-Stage Bi-Modal Transformer for Scene Graph Generation in OR [50.435592120607815]
Scene graph generation (SGG) of surgical procedures is crucial for enhancing holistic cognitive intelligence in the operating room (OR).
Previous works have primarily relied on multi-stage learning, where the generated semantic scene graphs depend on intermediate processes with pose estimation and object detection.
In this study, we introduce a novel single-stage bi-modal transformer framework for SGG in the OR, termed S2Former-OR.
arXiv Detail & Related papers (2024-02-22T11:40:49Z)
- Spatial Transcriptomics Analysis of Zero-shot Gene Expression Prediction [7.8979634764500455]
We propose a pioneering zero-shot framework for predicting gene expression from slide image windows.
Considering that a gene type can be described by functionality and phenotype, we dynamically embed a gene type into a vector.
We employ this vector to project slide image windows to gene expression in feature space, unleashing zero-shot expression prediction for unseen gene types.
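The embed-then-project mechanism above can be sketched as follows; `embed_gene`, the bilinear scoring map `W`, and all dimensions are hypothetical stand-ins for the paper's learned encoders, shown only to make the zero-shot idea concrete.

```python
import numpy as np

def embed_gene(functionality, phenotype):
    """Toy gene-type embedding: concatenate functionality and phenotype
    descriptors (stand-in for a learned text/attribute encoder)."""
    return np.concatenate([functionality, phenotype])

def predict_expression(window_feat, gene_vec, W):
    """Project a slide-window feature through a learned map W, then score
    it against the gene vector -- usable for gene types unseen in training."""
    return float(window_feat @ W @ gene_vec)

rng = np.random.default_rng(3)
W = rng.normal(size=(32, 12)) * 0.1       # hypothetical learned projection
window_feat = rng.normal(size=32)         # slide image window feature
gene = embed_gene(rng.normal(size=6), rng.normal(size=6))
pred = predict_expression(window_feat, gene, W)
```

Because the gene enters only through its descriptor embedding, a new gene type needs no retraining — only a new descriptor vector.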
arXiv Detail & Related papers (2024-01-26T10:53:21Z)
- Forgery-aware Adaptive Transformer for Generalizable Synthetic Image Detection [106.39544368711427]
We study the problem of generalizable synthetic image detection, aiming to detect forgery images from diverse generative methods.
We present a novel forgery-aware adaptive transformer approach, namely FatFormer.
Our approach, tuned on 4-class ProGAN data, attains an average of 98% accuracy on unseen GANs and, surprisingly, generalizes to unseen diffusion models with 95% accuracy.
arXiv Detail & Related papers (2023-12-27T17:36:32Z)
- Leveraging Graph Diffusion Models for Network Refinement Tasks [72.54590628084178]
We propose a novel graph generative framework, SGDM, based on subgraph diffusion.
Our framework not only improves the scalability and fidelity of graph diffusion models, but also leverages the reverse process to perform novel, conditional generation tasks.
arXiv Detail & Related papers (2023-11-29T18:02:29Z)
- SEPAL: Spatial Gene Expression Prediction from Local Graphs [1.4523812806185954]
We present SEPAL, a new model for predicting genetic profiles from visual tissue appearance.
Our method exploits the biological biases of the problem by directly supervising relative differences with respect to mean expression.
We propose a novel benchmark that aims to better define the task by following current best practices in transcriptomics.
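Supervising relative differences with respect to mean expression, as described above, amounts to regressing per-spot deviations from per-gene means. A minimal sketch, with function names and shapes assumed for illustration rather than taken from SEPAL:

```python
import numpy as np

def relative_target(expr, gene_means):
    """Deviation of each spot's expression from the per-gene mean --
    the relative quantity the model is trained to predict."""
    return expr - gene_means

def relative_mse(pred_delta, expr, gene_means):
    """Mean squared error against the relative-difference target."""
    return float(((pred_delta - relative_target(expr, gene_means)) ** 2).mean())

rng = np.random.default_rng(4)
expr = rng.normal(loc=5.0, size=(40, 10))   # spots x genes
means = expr.mean(axis=0)                   # per-gene mean expression
perfect = relative_target(expr, means)      # a perfect relative prediction
print(relative_mse(perfect, expr, means))   # 0.0
```

The point of the reparameterization is that the large, gene-specific baseline is factored out, so the loss focuses on the spatial variation that the tissue image can actually explain.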
arXiv Detail & Related papers (2023-09-02T23:24:02Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- HETFORMER: Heterogeneous Transformer with Sparse Attention for Long-Text Extractive Summarization [57.798070356553936]
HETFORMER is a Transformer-based pre-trained model with multi-granularity sparse attentions for extractive summarization.
Experiments on both single- and multi-document summarization tasks show that HETFORMER achieves state-of-the-art performance in Rouge F1.
arXiv Detail & Related papers (2021-10-12T22:42:31Z)
- Select, Extract and Generate: Neural Keyphrase Generation with Layer-wise Coverage Attention [75.44523978180317]
We propose SEG-Net, a neural keyphrase generation model that is composed of two major components.
The experimental results on seven keyphrase generation benchmarks from scientific and web documents demonstrate that SEG-Net outperforms the state-of-the-art neural generative methods by a large margin.
arXiv Detail & Related papers (2020-08-04T18:00:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.