Predicting Biomedical Interactions with Probabilistic Model Selection for Graph Neural Networks
- URL: http://arxiv.org/abs/2211.13231v1
- Date: Tue, 22 Nov 2022 20:44:28 GMT
- Title: Predicting Biomedical Interactions with Probabilistic Model Selection for Graph Neural Networks
- Authors: Kishan K C, Rui Li, Paribesh Regmi, Anne R. Haake
- Abstract summary: Current biological networks are noisy, sparse, and incomplete. Experimental identification of molecular interactions is both time-consuming and expensive.
Deep graph neural networks have shown their effectiveness in modeling graph-structured data and achieved good performance in biomedical interaction prediction.
Our proposed method enables graph convolutional networks to dynamically adapt their depth to accommodate an increasing number of interactions.
- Score: 5.156812030122437
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A biological system is a complex network of heterogeneous molecular entities
and their interactions contributing to various biological characteristics of
the system. However, current biological networks are noisy, sparse, and
incomplete, limiting our ability to create a holistic view of the biological
system and understand biological phenomena. Experimental identification of
such interactions is both time-consuming and expensive. With the recent
advancements in high-throughput data generation and significant improvement in
computational power, various computational methods have been developed to
predict novel interactions in these noisy networks. Recently, deep learning
methods such as graph neural networks have shown their effectiveness in
modeling graph-structured data and achieved good performance in biomedical
interaction prediction. However, graph neural network-based methods require
human expertise and experimentation to design an appropriate model complexity,
a choice that significantly impacts predictive performance. Furthermore, deep
graph neural networks are prone to overfitting and tend to be poorly
calibrated, assigning high confidence to incorrect predictions. To address these
challenges, we propose Bayesian model selection for graph convolutional
networks to infer the most plausible number of graph convolution layers (depth)
warranted by the data while simultaneously performing dropout regularization.
Experiments on four interaction datasets show that our proposed method achieves
accurate and calibrated predictions. It also enables graph convolutional
networks to dynamically adapt their depth to accommodate an increasing number
of interactions.
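
To make the idea concrete, the following is a minimal, illustrative sketch of probabilistic depth selection for a graph convolutional network: each layer is gated by a learnable keep probability, sampled with a relaxed Bernoulli (concrete) distribution during training so that deeper layers are used only when the data warrants them. All class, function, and hyper-parameter names here are hypothetical, and the sketch is not the authors' implementation:

import torch
import torch.nn as nn
import torch.nn.functional as F


def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalize a dense adjacency: D^{-1/2} (A + I) D^{-1/2}."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(dim=1).clamp(min=1e-12).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)


class DepthGatedGCN(nn.Module):
    """GCN whose layers are kept or skipped according to learned probabilities."""

    def __init__(self, in_dim, hid_dim, max_depth=8, temperature=0.5):
        super().__init__()
        self.input_proj = nn.Linear(in_dim, hid_dim)
        self.layers = nn.ModuleList(nn.Linear(hid_dim, hid_dim) for _ in range(max_depth))
        # Logits of per-layer keep probabilities; their sigmoids play the role
        # of the inferred probability that each additional layer is needed.
        self.keep_logits = nn.Parameter(torch.zeros(max_depth))
        self.temperature = temperature

    def forward(self, x, adj_norm):
        h = self.input_proj(x)
        for layer, logit in zip(self.layers, self.keep_logits):
            if self.training:
                # Relaxed Bernoulli (binary concrete) sample of the layer gate.
                u = torch.rand(()).clamp(1e-6, 1 - 1e-6)
                gate = torch.sigmoid((logit + torch.log(u) - torch.log(1 - u)) / self.temperature)
            else:
                gate = torch.sigmoid(logit)  # expected gate at evaluation time
            update = F.relu(layer(adj_norm @ h))
            h = gate * update + (1 - gate) * h  # a skipped layer acts as the identity
        return h


def interaction_logits(h, pairs):
    """Score candidate interactions as dot products of node embeddings."""
    return (h[pairs[:, 0]] * h[pairs[:, 1]]).sum(dim=-1)

The paper infers depth via Bayesian model selection jointly with dropout regularization; in a sketch like this, the gates would instead be trained with the interaction-prediction loss plus a penalty that discourages unnecessary layers, so the effective depth grows only when warranted by the data.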
Related papers
- Deep Latent Variable Modeling of Physiological Signals [0.8702432681310401]
We explore high-dimensional problems related to physiological monitoring using latent variable models.
First, we present a novel deep state-space model to generate electrical waveforms of the heart using optically obtained signals as inputs.
Second, we present a brain signal modeling scheme that combines the strengths of probabilistic graphical models and deep adversarial learning.
Third, we propose a framework for the joint modeling of physiological measures and behavior.
arXiv Detail & Related papers (2024-05-29T17:07:33Z) - Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Graph Neural Operators for Classification of Spatial Transcriptomics Data [1.408706290287121]
We propose a study incorporating various graph neural network approaches to validate the efficacy of applying neural operators towards prediction of brain regions in mouse brain tissue samples.
We were able to achieve an F1 score of nearly 72% for the graph neural operator approach which outperformed all baseline and other graph network approaches.
arXiv Detail & Related papers (2023-02-01T18:32:06Z) - Impact of spiking neurons leakages and network recurrences on event-based spatio-temporal pattern recognition [0.0]
Spiking neural networks coupled with neuromorphic hardware and event-based sensors are getting increased interest for low-latency and low-power inference at the edge.
We explore the impact of synaptic and membrane leakages in spiking neurons.
arXiv Detail & Related papers (2022-11-14T21:34:02Z) - Physically constrained neural networks to solve the inverse problem for neuron models [0.29005223064604074]
Systems biology and systems neurophysiology are powerful tools for a number of key applications in the biomedical sciences.
Recent developments in the field of deep neural networks have demonstrated the possibility of formulating nonlinear, universal approximators.
arXiv Detail & Related papers (2022-09-24T12:51:15Z) - Contrastive Brain Network Learning via Hierarchical Signed Graph Pooling Model [64.29487107585665]
Graph representation learning techniques on brain functional networks can facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
Here, we propose an interpretable hierarchical signed graph representation learning model to extract graph-level representations from brain functional networks.
In order to further improve the model performance, we also propose a new strategy to augment functional brain network data for contrastive learning.
arXiv Detail & Related papers (2022-07-14T20:03:52Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Self-Supervised Graph Representation Learning for Neuronal Morphologies [75.38832711445421]
We present GraphDINO, a data-driven approach to learn low-dimensional representations of 3D neuronal morphologies from unlabeled datasets.
We show, in two different species and across multiple brain areas, that this method yields morphological cell type clusterings on par with manual feature-based classification by experts.
Our method could potentially enable data-driven discovery of novel morphological features and cell types in large-scale datasets.
arXiv Detail & Related papers (2021-12-23T12:17:47Z) - Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Several recent studies attribute the performance deterioration of deeper models to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
arXiv Detail & Related papers (2020-07-18T01:11:14Z) - Graph Structure of Neural Networks [104.33754950606298]
We show how the graph structure of neural networks affects their predictive performance.
A "sweet spot" of relational graphs leads to neural networks with significantly improved predictive performance.
Top-performing neural networks have graph structure surprisingly similar to those of real biological neural networks.
arXiv Detail & Related papers (2020-07-13T17:59:31Z) - A Graph Feature Auto-Encoder for the Prediction of Unobserved Node Features on Biological Networks [3.132875765271743]
We studied the representation of biological interaction networks in E. coli and mouse using graph neural networks.
We proposed a new end-to-end graph feature auto-encoder which is trained on the feature reconstruction task.
Our graph feature auto-encoder outperformed a state-of-the-art imputation method that does not use protein interaction information.
arXiv Detail & Related papers (2020-05-08T11:23:04Z)
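
The last entry describes an end-to-end graph feature auto-encoder trained on a feature reconstruction task. As a rough, assumption-laden sketch (illustrative names, not the referenced implementation): node features are propagated over the interaction network by graph convolutions, and the model is penalized for failing to reconstruct feature entries that were masked out.

import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphFeatureAutoEncoder(nn.Module):
    """Encode node features with graph convolutions, then decode them back."""

    def __init__(self, num_feats, hid_dim=64):
        super().__init__()
        self.enc1 = nn.Linear(num_feats, hid_dim)
        self.enc2 = nn.Linear(hid_dim, hid_dim)
        self.dec = nn.Linear(hid_dim, num_feats)

    def forward(self, x, adj_norm):
        h = F.relu(self.enc1(adj_norm @ x))  # first graph convolution
        h = F.relu(self.enc2(adj_norm @ h))  # second graph convolution
        return self.dec(h)                   # reconstructed node features


def masked_reconstruction_loss(model, x, adj_norm, mask_rate=0.2):
    """Hide a random subset of feature entries and score their reconstruction."""
    mask = torch.rand_like(x) < mask_rate
    x_hat = model(x.masked_fill(mask, 0.0), adj_norm)
    return F.mse_loss(x_hat[mask], x[mask])

Here adj_norm can be built with the same symmetric normalization as in the earlier sketch; after training, the decoder's output at the masked positions serves as the imputation of unobserved node features.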