ISG: I can See Your Gene Expression
- URL: http://arxiv.org/abs/2210.16728v1
- Date: Sun, 30 Oct 2022 02:49:37 GMT
- Title: ISG: I can See Your Gene Expression
- Authors: Yan Yang and LiYuan Pan and Liu Liu and Eric A Stone
- Abstract summary: This paper aims to precisely predict gene expression from a histology slide image.
Such a slide image has a large resolution and sparsely distributed textures.
Existing gene expression methods mainly use general components to filter textureless regions, extract features, and aggregate features uniformly across regions.
We present the ISG framework, which harnesses interactions among discriminative features from texture-abundant regions through three new modules.
- Score: 13.148183268830879
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper aims to precisely predict gene expression from a histology slide image. Such a slide image has a large resolution and sparsely distributed textures. These properties obstruct extracting and interpreting discriminative features from the slide image for predicting diverse gene types. Existing gene expression methods mainly use general components to filter textureless regions, extract features, and aggregate features uniformly across regions. However, they ignore gaps and interactions between different image regions and are therefore inferior on the gene expression task. Instead, we present the ISG framework, which harnesses interactions among discriminative features from texture-abundant regions through three new modules: 1) a Shannon Selection module, based on the Shannon information content and Solomonoff's theory, to filter out textureless image regions; 2) a Feature Extraction network to extract expressive low-dimensional feature representations for efficient region interactions within a high-resolution image; 3) a Dual Attention network that attends to regions with desired gene expression features and aggregates them for the prediction task. Extensive experiments on standard benchmark datasets show that the proposed ISG framework significantly outperforms state-of-the-art methods.
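The paper provides no code here; as a hedged illustration of the Shannon Selection idea only (filtering out textureless regions by their information content), the sketch below scores fixed-size tiles of a grayscale slide by the Shannon entropy of their intensity histogram and keeps tiles above a threshold. The tile size, threshold value, and function names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def tile_entropy(tile: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy (in bits) of a grayscale tile's intensity histogram."""
    hist, _ = np.histogram(tile, bins=bins, range=(0, 255))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def shannon_select(slide: np.ndarray, tile: int = 224, threshold: float = 4.0):
    """Keep only tiles whose entropy exceeds an assumed threshold.

    `slide` is a 2-D grayscale array; returns (row, col) offsets of kept tiles.
    """
    kept = []
    h, w = slide.shape
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            if tile_entropy(slide[r:r + tile, c:c + tile]) >= threshold:
                kept.append((r, c))
    return kept
```

Low-entropy tiles (near-uniform background) are dropped before feature extraction; the threshold would in practice be tuned per dataset.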
Related papers
- Spatial Transcriptomics Analysis of Spatially Dense Gene Expression Prediction [5.822764600388809]
PixNet is a dense prediction network capable of predicting spatially resolved gene expression across spots of varying sizes and scales directly from pathology images.
We generate a dense continuous gene expression map from the pathology image, and aggregate values within spots of interest to predict the gene expression.
arXiv Detail & Related papers (2025-03-03T09:38:01Z)
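The PixNet entry above predicts a dense, pixel-level expression map and then aggregates it within spots; the hedged sketch below covers only that aggregation step, averaging a dense map of shape (genes, H, W) inside circular spots given as (row, col, radius). The array shapes, the circular-spot format, and the function name are assumptions for illustration.

```python
import numpy as np

def aggregate_spots(dense_map: np.ndarray, spots):
    """Average a dense gene-expression map inside each circular spot.

    dense_map: array of shape (num_genes, H, W) with pixel-level predictions.
    spots:     iterable of (row, col, radius) in pixel coordinates (assumed format).
    Returns an array of shape (num_spots, num_genes).
    """
    _, h, w = dense_map.shape
    yy, xx = np.mgrid[0:h, 0:w]
    out = []
    for r, c, rad in spots:
        # Boolean mask of pixels falling inside this spot.
        mask = (yy - r) ** 2 + (xx - c) ** 2 <= rad ** 2
        out.append(dense_map[:, mask].mean(axis=1))
    return np.stack(out)
```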
- MERGE: Multi-faceted Hierarchical Graph-based GNN for Gene Expression Prediction from Whole Slide Histopathology Images [6.717786190771243]
We introduce MERGE (Multi-faceted hiErarchical gRaph for Gene Expressions), which combines a hierarchical graph construction strategy with graph neural networks (GNN) to improve gene expression predictions from whole slide images.
By clustering tissue image patches based on both spatial and morphological features, our approach fosters interactions between distant tissue locations during GNN learning.
As an additional contribution, we evaluate different data smoothing techniques that are necessary to mitigate artifacts in ST data, often caused by technical imperfections.
arXiv Detail & Related papers (2024-12-03T17:32:05Z)
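MERGE's hierarchical graph is described above only at a high level; one hedged reading is sketched below: patches are clustered by morphological features, and edges connect patches that are either spatially adjacent or share a cluster, so a GNN can exchange information between distant but similar tissue regions. The clustering choice (k-means), cluster count, distance threshold, and names are assumptions, not the paper's construction.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_patch_graph(coords: np.ndarray, feats: np.ndarray,
                      n_clusters: int = 8, spatial_radius: float = 1.5):
    """Return an edge list over patches (illustrative construction).

    coords: (N, 2) grid positions of patches on the slide.
    feats:  (N, D) morphological feature vectors (e.g. from a CNN encoder).
    Edges link spatially adjacent patches and patches sharing a feature cluster.
    """
    n = len(coords)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
    edges = set()
    # Brute-force pairing is kept for clarity; fine for a small sketch.
    for i in range(n):
        for j in range(i + 1, n):
            spatially_close = np.linalg.norm(coords[i] - coords[j]) <= spatial_radius
            same_cluster = labels[i] == labels[j]
            if spatially_close or same_cluster:
                edges.add((i, j))
    return sorted(edges), labels
```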
- High-Resolution Spatial Transcriptomics from Histology Images using HisToSGE [1.3124513975412255]
HisToSGE generates high-resolution gene expression profiles from histological images and performs well on downstream tasks.
arXiv Detail & Related papers (2024-07-30T03:29:57Z)
- Cross-modal Diffusion Modelling for Super-resolved Spatial Transcriptomics [5.020980014307814]
Spatial transcriptomics (ST) enables characterization of spatial gene expression within tissue for discovery research.
Super-resolution approaches promise to enhance ST maps by integrating histology images with gene expressions of profiled tissue spots.
This paper proposes a cross-modal conditional diffusion model for super-resolving ST maps with the guidance of histology images.
arXiv Detail & Related papers (2024-04-19T16:01:00Z)
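The cross-modal diffusion entry above conditions ST-map super-resolution on histology images; as a generic, hedged sketch (not the paper's model), the code below shows a DDPM-style training step in which a toy denoiser predicts the added noise from the noisy ST vector, the timestep, and a histology-image embedding. All module names, sizes, the flattened-vector representation, and the 1000-step schedule are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalDenoiser(nn.Module):
    """Toy denoiser: predicts noise on a flattened ST map, conditioned on a
    histology embedding (architecture and sizes are assumptions)."""
    def __init__(self, st_dim: int, cond_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(st_dim + cond_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, st_dim),
        )

    def forward(self, x_t, t, cond):
        # Concatenate noisy ST vector, normalized timestep, and histology condition.
        t = t.float().unsqueeze(-1) / 1000.0  # assumes a 1000-step schedule
        return self.net(torch.cat([x_t, cond, t], dim=-1))

def diffusion_training_step(model, x0, cond, alphas_cumprod):
    """One DDPM-style step: noise x0 at a random timestep, predict the noise.

    alphas_cumprod: precomputed 1-D tensor of cumulative noise-schedule products.
    """
    b = x0.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,))
    a_bar = alphas_cumprod[t].unsqueeze(-1)
    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise
    return F.mse_loss(model(x_t, t, cond), noise)
```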
- DCN-T: Dual Context Network with Transformer for Hyperspectral Image Classification [109.09061514799413]
Hyperspectral image (HSI) classification is challenging due to spatial variability caused by complex imaging conditions.
We propose a tri-spectral image generation pipeline that transforms HSI into high-quality tri-spectral images.
Our proposed method outperforms state-of-the-art methods for HSI classification.
arXiv Detail & Related papers (2023-04-19T18:32:52Z)
- Joint Learning of Deep Texture and High-Frequency Features for Computer-Generated Image Detection [24.098604827919203]
We propose a joint learning strategy with deep texture and high-frequency features for CG image detection.
A semantic segmentation map is generated to guide the affine transformation operation.
The combination of the original image and the high-frequency components of the original and rendered images is fed into a multi-branch neural network equipped with attention mechanisms.
arXiv Detail & Related papers (2022-09-07T17:30:40Z)
- Multiscale Analysis for Improving Texture Classification [62.226224120400026]
This paper employs the Gaussian-Laplacian pyramid to treat different spatial frequency bands of a texture separately.
We aggregate features extracted from gray and color texture images using bio-inspired texture descriptors, information-theoretic measures, gray-level co-occurrence matrix features, and Haralick statistical features into a single feature vector.
arXiv Detail & Related papers (2022-04-21T01:32:22Z)
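The multiscale texture entry above treats spatial frequency bands separately via a Gaussian-Laplacian pyramid; a minimal sketch of building such a pyramid with OpenCV's standard pyrDown/pyrUp is shown below. The number of levels is an assumption.

```python
import cv2
import numpy as np

def gaussian_laplacian_pyramid(image: np.ndarray, levels: int = 4):
    """Return (gaussian_levels, laplacian_levels) for a grayscale image.

    Each Laplacian level is the difference between a Gaussian level and the
    upsampled next (coarser) level, isolating one spatial frequency band.
    """
    gaussian = [image.astype(np.float32)]
    for _ in range(levels - 1):
        gaussian.append(cv2.pyrDown(gaussian[-1]))
    laplacian = []
    for fine, coarse in zip(gaussian[:-1], gaussian[1:]):
        up = cv2.pyrUp(coarse, dstsize=(fine.shape[1], fine.shape[0]))
        laplacian.append(fine - up)
    laplacian.append(gaussian[-1])  # coarsest level kept as-is
    return gaussian, laplacian
```

Texture descriptors can then be computed per band and concatenated into a single feature vector, as the entry describes.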
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Learning Hierarchical Graph Representation for Image Manipulation Detection [50.04902159383709]
The objective of image manipulation detection is to identify and locate the manipulated regions in the images.
Recent approaches mostly adopt the sophisticated Convolutional Neural Networks (CNNs) to capture the tampering artifacts left in the images.
We propose a hierarchical Graph Convolutional Network (HGCN-Net), which consists of two parallel branches.
arXiv Detail & Related papers (2022-01-15T01:54:25Z)
- Low-Rank Subspaces in GANs [101.48350547067628]
This work introduces low-rank subspaces that enable more precise control of GAN generation.
LowRankGAN is able to find a low-dimensional representation of the attribute manifold.
Experiments on state-of-the-art GAN models (including StyleGAN2 and BigGAN) trained on various datasets demonstrate the effectiveness of our LowRankGAN.
arXiv Detail & Related papers (2021-06-08T16:16:32Z)
- Video-based Facial Expression Recognition using Graph Convolutional Networks [57.980827038988735]
We introduce a Graph Convolutional Network (GCN) layer into a common CNN-RNN based model for video-based facial expression recognition.
We evaluate our method on three widely used datasets, CK+, Oulu-CASIA, and MMI, as well as the challenging in-the-wild dataset AFEW8.0.
arXiv Detail & Related papers (2020-10-26T07:31:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.