A Graph Feature Auto-Encoder for the Prediction of Unobserved Node
Features on Biological Networks
- URL: http://arxiv.org/abs/2005.03961v2
- Date: Wed, 23 Dec 2020 09:18:25 GMT
- Title: A Graph Feature Auto-Encoder for the Prediction of Unobserved Node
Features on Biological Networks
- Authors: Ramin Hasibi, Tom Michoel
- Abstract summary: We studied the representation of biological interaction networks in E. coli and mouse using graph neural networks.
We proposed a new end-to-end graph feature auto-encoder which is trained on the feature reconstruction task.
Our graph feature auto-encoder outperformed a state-of-the-art imputation method that does not use protein interaction information.
- Score: 3.132875765271743
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Motivation: Molecular interaction networks summarize complex biological
processes as graphs, whose structure is informative of biological function at
multiple scales. Simultaneously, omics technologies measure the variation or
activity of genes, proteins, or metabolites across individuals or experimental
conditions. Integrating the complementary viewpoints of biological networks and
omics data is an important task in bioinformatics, but existing methods treat
networks as discrete structures, which are intrinsically difficult to integrate
with continuous node features or activity measures. Graph neural networks map
graph nodes into a low-dimensional vector space representation, and can be
trained to preserve both the local graph structure and the similarity between
node features.
Results: We studied the representation of transcriptional, protein-protein
and genetic interaction networks in E. coli and mouse using graph neural
networks. We found that such representations explain a large proportion of
variation in gene expression data, and that using gene expression data as node
features improves the reconstruction of the graph from the embedding. We
further proposed a new end-to-end graph feature auto-encoder which is trained
on the feature reconstruction task, and showed that it performs better at
predicting unobserved node features than auto-encoders that are trained on the
graph reconstruction task before learning to predict node features. When
applied to the problem of imputing missing values in single-cell RNA-seq data, our
graph feature auto-encoder outperformed a state-of-the-art imputation method
that does not use protein interaction information, showing the benefit of
integrating biological networks and omics data using graph representation
learning.
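The abstract does not include an implementation, but the core idea of an end-to-end graph feature auto-encoder (a GNN encoder over the interaction network followed by a decoder that reconstructs node features, trained directly on the feature reconstruction task) can be sketched as follows. This is a minimal illustration assuming PyTorch Geometric; the layer types, dimensions, and masking scheme are illustrative choices, not the authors' exact architecture.

```python
# Minimal sketch of a graph feature auto-encoder trained on the feature
# reconstruction task (assumes PyTorch Geometric; illustrative only).
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv


class GraphFeatureAutoEncoder(torch.nn.Module):
    def __init__(self, num_features, hidden_dim=64, latent_dim=32):
        super().__init__()
        # Encoder: graph convolutions embed nodes using both the
        # interaction network and the observed expression features.
        self.enc1 = GCNConv(num_features, hidden_dim)
        self.enc2 = GCNConv(hidden_dim, latent_dim)
        # Decoder: maps the node embedding back to the feature space.
        self.dec = torch.nn.Linear(latent_dim, num_features)

    def forward(self, x, edge_index):
        h = F.relu(self.enc1(x, edge_index))
        z = self.enc2(h, edge_index)
        return self.dec(z)


def train_step(model, optimizer, x, edge_index, observed_mask):
    """One optimization step. The loss is computed only on observed
    (unmasked) feature entries, so held-out entries can later be used
    to evaluate imputation of unobserved node features."""
    model.train()
    optimizer.zero_grad()
    # Hide masked-out entries from the encoder input.
    x_hat = model(x * observed_mask, edge_index)
    loss = F.mse_loss(x_hat[observed_mask.bool()], x[observed_mask.bool()])
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, `observed_mask` is a 0/1 tensor with the same shape as the feature matrix; entries set to zero play the role of unobserved node features (e.g., dropout events in single-cell RNA-seq data) and are excluded from the training loss so they can serve as imputation targets.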
Related papers
- Learning From Graph-Structured Data: Addressing Design Issues and Exploring Practical Applications in Graph Representation Learning [2.492884361833709]
We present an exhaustive review of the latest advancements in graph representation learning and Graph Neural Networks (GNNs)
GNNs, tailored to handle graph-structured data, excel in deriving insights and predictions from intricate relational information.
Our work delves into the capabilities of GNNs, examining their foundational designs and their application in addressing real-world challenges.
arXiv Detail & Related papers (2024-11-09T19:10:33Z) - GNN-LoFI: a Novel Graph Neural Network through Localized Feature-based
Histogram Intersection [51.608147732998994]
Graph neural networks are increasingly becoming the framework of choice for graph-based machine learning.
We propose a new graph neural network architecture that substitutes classical message passing with an analysis of the local distribution of node features.
arXiv Detail & Related papers (2024-01-17T13:04:23Z) - A Systematic Review of Deep Graph Neural Networks: Challenges,
Classification, Architectures, Applications & Potential Utility in
Bioinformatics [0.0]
Graph neural networks (GNNs) employ message transmission between graph nodes to represent graph dependencies.
GNNs have the potential to be an excellent tool for solving a wide range of biological challenges in bioinformatics research.
arXiv Detail & Related papers (2023-11-03T10:25:47Z) - Graph Neural Operators for Classification of Spatial Transcriptomics
Data [1.408706290287121]
We propose a study incorporating various graph neural network approaches to validate the efficacy of applying neural operators towards prediction of brain regions in mouse brain tissue samples.
We were able to achieve an F1 score of nearly 72% for the graph neural operator approach, which outperformed all baseline and other graph network approaches.
arXiv Detail & Related papers (2023-02-01T18:32:06Z) - Predicting Biomedical Interactions with Probabilistic Model Selection
for Graph Neural Networks [5.156812030122437]
Current biological networks are noisy, sparse, and incomplete. Experimental identification of such interactions is both time-consuming and expensive.
Deep graph neural networks have shown their effectiveness in modeling graph-structured data and achieved good performance in biomedical interaction prediction.
Our proposed method enables the graph convolutional networks to dynamically adapt their depths to accommodate an increasing number of interactions.
arXiv Detail & Related papers (2022-11-22T20:44:28Z) - Simple and Efficient Heterogeneous Graph Neural Network [55.56564522532328]
Heterogeneous graph neural networks (HGNNs) have a powerful capability to embed rich structural and semantic information of a heterogeneous graph into node representations.
Existing HGNNs inherit many mechanisms from graph neural networks (GNNs) over homogeneous graphs, especially the attention mechanism and the multi-layer structure.
This paper conducts an in-depth and detailed study of these mechanisms and proposes Simple and Efficient Heterogeneous Graph Neural Network (SeHGNN)
arXiv Detail & Related papers (2022-07-06T10:01:46Z) - Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN)
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
arXiv Detail & Related papers (2022-05-19T14:08:15Z) - Spiking Graph Convolutional Networks [19.36064180392385]
SpikingGCN is an end-to-end framework that aims to integrate the embedding of GCNs with the biofidelity characteristics of SNNs.
We show that SpikingGCN on a neuromorphic chip can bring a clear advantage of energy efficiency into graph data analysis.
arXiv Detail & Related papers (2022-05-05T16:44:36Z) - Graph-in-Graph (GiG): Learning interpretable latent graphs in
non-Euclidean domain for biological and healthcare applications [52.65389473899139]
Graphs are a powerful tool for representing and analyzing unstructured, non-Euclidean data ubiquitous in the healthcare domain.
Recent works have shown that considering relationships between input data samples has a positive regularizing effect for the downstream task.
We propose Graph-in-Graph (GiG), a neural network architecture for protein classification and brain imaging applications.
arXiv Detail & Related papers (2022-04-01T10:01:37Z) - Graph Structure of Neural Networks [104.33754950606298]
We show how the graph structure of neural networks affects their predictive performance.
A "sweet spot" of relational graphs leads to neural networks with significantly improved predictive performance.
Top-performing neural networks have graph structure surprisingly similar to those of real biological neural networks.
arXiv Detail & Related papers (2020-07-13T17:59:31Z) - GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)