Decoding Neuronal Networks: A Reservoir Computing Approach for
Predicting Connectivity and Functionality
- URL: http://arxiv.org/abs/2311.03131v3
- Date: Tue, 5 Mar 2024 10:25:03 GMT
- Title: Decoding Neuronal Networks: A Reservoir Computing Approach for
Predicting Connectivity and Functionality
- Authors: Ilya Auslender, Giorgio Letti, Yasaman Heydari, Clara Zaccaria,
Lorenzo Pavesi
- Abstract summary: Our model deciphers data obtained from electrophysiological measurements of neuronal cultures.
Notably, our model outperforms common methods like Cross-Correlation and Transfer-Entropy in predicting the network's connectivity map.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this study, we address the challenge of analyzing electrophysiological
measurements in neuronal networks. Our computational model, based on the
Reservoir Computing Network (RCN) architecture, deciphers spatio-temporal data
obtained from electrophysiological measurements of neuronal cultures. By
reconstructing the network structure on a macroscopic scale, we reveal the
connectivity between neuronal units. Notably, our model outperforms common
methods like Cross-Correlation and Transfer-Entropy in predicting the network's
connectivity map. Furthermore, we experimentally validate its ability to
forecast network responses to specific inputs, including localized optogenetic
stimuli.
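As a concrete illustration of the reservoir-computing idea the paper builds on, below is a minimal echo state network sketch in NumPy: a fixed random recurrent reservoir is driven by multichannel activity and only a linear readout is trained, here by ridge regression to predict each channel one step ahead. All sizes, the spectral-radius scaling, and the one-step-ahead target are illustrative assumptions, not the authors' exact RCN setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, T = 8, 200, 2000           # channels, reservoir units, timesteps

# Fixed random input and recurrent weights; only the readout is trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

u = rng.normal(0, 1, (T, n_in))          # stand-in for recorded activity
x = np.zeros(n_res)
states = np.empty((T, n_res))
for t in range(T):
    x = np.tanh(W_in @ u[t] + W @ x)     # reservoir state update
    states[t] = x

# Ridge-regression readout trained to predict the next sample per channel.
X, Y = states[:-1], u[1:]
lam = 1e-2
W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ Y)
pred = X @ W_out                         # one-step-ahead predictions
```

In the paper, the trained model is then used to reconstruct the macroscopic connectivity map between neuronal units; that step follows the authors' RCN architecture and is not sketched here.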
Related papers
- Statistical tuning of artificial neural network [0.0]
This study introduces methods to enhance the understanding of neural networks, focusing specifically on models with a single hidden layer.
We propose statistical tests to assess the significance of input neurons and introduce algorithms for dimensionality reduction.
This research advances the field of Explainable Artificial Intelligence by presenting robust statistical frameworks for interpreting neural networks.
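As a hedged sketch of one generic test of this kind, the snippet below implements permutation importance for input neurons: each input column is shuffled in turn and the resulting accuracy drop is recorded. This is a stand-in for, not a reproduction of, the paper's proposed statistical tests, and the drop-based significance proxy is an assumption.

```python
import numpy as np

def permutation_importance(predict, X, y, n_rounds=100, seed=0):
    """Shuffle one input column at a time and record the accuracy drop.

    `predict` is any fitted model's prediction function (e.g. a
    single-hidden-layer network); a consistently large drop suggests
    the corresponding input neuron matters.
    """
    rng = np.random.default_rng(seed)
    base = np.mean(predict(X) == y)
    drops = np.zeros((X.shape[1], n_rounds))
    for j in range(X.shape[1]):
        for r in range(n_rounds):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])        # break this feature's link to y
            drops[j, r] = base - np.mean(predict(Xp) == y)
    # Rough significance proxy: fraction of rounds where shuffling did
    # not hurt accuracy (small values suggest a significant input).
    proxy = np.mean(drops <= 0, axis=1)
    return drops.mean(axis=1), proxy
```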
arXiv Detail & Related papers (2024-09-24T19:47:03Z)
- Reusability report: Prostate cancer stratification with diverse biologically-informed neural architectures [7.417447233454902]
A feedforward neural network with biologically informed, sparse connections (P-NET) was presented to model the state of prostate cancer.
We quantified the contribution of network sparsification by Reactome biological pathways, and confirmed its importance to P-NET's superior performance.
We experimented with three types of graph neural networks on the same training data, and investigated the clinical prediction agreement between different models.
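The core sparsification idea can be sketched as a masked linear layer: a binary pathway-membership mask zeroes out all connections that lack biological support. The mask below is random for illustration; in P-NET it is derived from Reactome pathway membership, and the sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_pathways = 1000, 50

# Binary membership mask: mask[i, j] = 1 iff gene i belongs to pathway j.
# Random here for illustration; P-NET derives it from Reactome.
mask = (rng.random((n_genes, n_pathways)) < 0.02).astype(float)

W = rng.normal(0, 0.1, (n_genes, n_pathways))
b = np.zeros(n_pathways)

def pathway_layer(x):
    # The element-wise mask keeps only biologically allowed connections,
    # so in a training framework gradients also flow only through them.
    return np.maximum(0.0, x @ (W * mask) + b)   # ReLU activation

x = rng.normal(size=(4, n_genes))                # 4 example samples
h = pathway_layer(x)                             # (4, n_pathways)
```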
arXiv Detail & Related papers (2023-07-20T13:34:11Z)
- Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
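To give a flavor of the computation involved, below is a simplified zero-dimensional persistence over a whole-network weight filtration: edges from all layers enter in order of decreasing absolute weight, and a union-find structure records the value at which connected components merge. This ignores the paper's exact filtration details and is only a stand-in.

```python
import numpy as np

def zero_dim_persistence(edges):
    """edges: list of (|weight|, u, v) pooled over ALL layers.

    Adds edges from strongest to weakest and records the filtration
    value at which components merge; a simplified stand-in for the
    deep graph persistence measure.
    """
    parent = {}
    def find(a):
        while parent.setdefault(a, a) != a:
            parent[a] = parent[parent[a]]      # path halving
            a = parent[a]
        return a
    deaths = []
    for w, u, v in sorted(edges, key=lambda e: -e[0]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            deaths.append(w)   # two components merge at value w
    return np.array(deaths)

# Toy example: a two-layer network flattened into one edge list.
rng = np.random.default_rng(0)
edges = [(abs(rng.normal()), f"a{i}", f"b{j}") for i in range(3) for j in range(4)]
edges += [(abs(rng.normal()), f"b{j}", f"c{k}") for j in range(4) for k in range(2)]
print(zero_dim_persistence(edges))
```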
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
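The paper's contrastive-signal-dependent rule is not reproduced here; purely as a point of reference, below is a minimal sketch of a classic local, biologically-plausible update of the kind such schemes build on: pair-based STDP with exponential spike traces. Rates, time constants, and sizes are illustrative assumptions.

```python
import numpy as np

# Pair-based STDP with exponential eligibility traces. This is NOT the
# paper's CSDP rule, only a classic local-plasticity baseline.
rng = np.random.default_rng(0)
n_pre, n_post, T = 20, 10, 500
W = rng.uniform(0, 0.5, (n_post, n_pre))
a_plus, a_minus, decay = 0.01, 0.012, 0.9

trace_pre = np.zeros(n_pre)
trace_post = np.zeros(n_post)
for t in range(T):
    pre = (rng.random(n_pre) < 0.05).astype(float)    # Bernoulli spikes
    post = (rng.random(n_post) < 0.05).astype(float)
    trace_pre = decay * trace_pre + pre
    trace_post = decay * trace_post + post
    # Potentiate when post fires after recent pre; depress the reverse.
    W += a_plus * np.outer(post, trace_pre) - a_minus * np.outer(trace_post, pre)
    np.clip(W, 0.0, 1.0, out=W)
```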
arXiv Detail & Related papers (2022-04-05T17:13:36Z)
- Cross-Frequency Coupling Increases Memory Capacity in Oscillatory Neural Networks [69.42260428921436]
Cross-frequency coupling (CFC) is associated with information integration across populations of neurons.
We construct a model of CFC which predicts a computational role for observed $\theta$-$\gamma$ oscillatory circuits in the hippocampus and cortex.
We show that the presence of CFC increases the memory capacity of a population of neurons connected by plastic synapses.
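A hedged sketch of how theta-gamma coupling is typically quantified in recordings follows: band-pass filter, extract theta phase and gamma amplitude with the Hilbert transform, and compute a mean-vector modulation index. The filter bands and the synthetic signal are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0
t = np.arange(0, 10, 1 / fs)
# Synthetic signal: 40 Hz gamma whose amplitude rides on 6 Hz theta phase.
theta = np.sin(2 * np.pi * 6 * t)
sig = theta + (1 + theta) * 0.3 * np.sin(2 * np.pi * 40 * t)
sig += 0.1 * np.random.default_rng(0).normal(size=t.size)

def bandpass(x, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

phase = np.angle(hilbert(bandpass(sig, 4, 8)))    # theta phase
amp = np.abs(hilbert(bandpass(sig, 30, 50)))      # gamma envelope
# Mean-vector modulation index: larger values indicate stronger
# phase-amplitude coupling.
mi = np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)
print(f"modulation index: {mi:.3f}")
```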
arXiv Detail & Related papers (2022-02-02T16:21:19Z)
- Approximate Bisimulation Relations for Neural Networks and Application to Assured Neural Network Compression [3.0839245814393728]
We propose a concept of approximate bisimulation relation for feedforward neural networks.
A novel neural network merging method is developed to compute the approximate bisimulation error between two neural networks.
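The paper's merging method yields a guaranteed bound; as a crude empirical stand-in, one can estimate the output deviation between an original and a compressed network over sampled inputs, as sketched below. The network shapes and sampling domain are assumptions.

```python
import numpy as np

def forward(weights, biases, x):
    # Simple ReLU feedforward network; the last layer is left linear.
    for i, (W, b) in enumerate(zip(weights, biases)):
        x = x @ W + b
        if i < len(weights) - 1:
            x = np.maximum(x, 0.0)
    return x

def empirical_bisim_error(net1, net2, n_samples=10000, dim=4, seed=0):
    """Max output deviation over sampled inputs: a sampling-based
    stand-in for the computed approximate bisimulation error (which
    in the paper is a guaranteed bound, not an estimate)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1, 1, (n_samples, dim))
    d = forward(*net1, X) - forward(*net2, X)
    return np.max(np.linalg.norm(d, axis=1))

rng = np.random.default_rng(1)
Ws = [rng.normal(size=(4, 16)), rng.normal(size=(16, 2))]
bs = [np.zeros(16), np.zeros(2)]
Ws2 = [W + 0.01 * rng.normal(size=W.shape) for W in Ws]  # "compressed" copy
print(empirical_bisim_error((Ws, bs), (Ws2, bs)))
```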
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2021-12-08T12:57:13Z)
- Modeling Spatio-Temporal Dynamics in Brain Networks: A Comparison of Graph Neural Network Architectures [0.5033155053523041]
Graph neural networks (GNNs) offer a way to interpret new structured graph signals.
We show that by learning localized functional interactions on the substrate, GNN based approaches are able to robustly scale to large network studies.
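For reference, below is the basic normalized-aggregation propagation step that such GNN architectures build on, applied to a toy functional-connectivity graph; the paper compares several GNN variants rather than prescribing this one, and the graph sizes are assumptions.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution step: H = ReLU(D^-1/2 (A+I) D^-1/2 X W).

    A: (n, n) adjacency (e.g. thresholded functional connectivity),
    X: (n, f) node features (e.g. regional time-series statistics).
    """
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

rng = np.random.default_rng(0)
n_regions, n_feat, n_hidden = 90, 16, 8         # atlas-sized toy graph
A = (rng.random((n_regions, n_regions)) < 0.1).astype(float)
A = np.maximum(A, A.T)                          # symmetrize
H = gcn_layer(A, rng.normal(size=(n_regions, n_feat)),
              rng.normal(size=(n_feat, n_hidden)))
```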
arXiv Detail & Related papers (2021-12-08T12:57:13Z)
- Persistent Homology Captures the Generalization of Neural Networks Without A Validation Set [0.0]
We suggest studying the training of neural networks with Algebraic Topology, specifically Persistent Homology.
Using simplicial complex representations of neural networks, we study how the PH diagram distance evolves over the course of learning.
Results show that the PH diagram distance between consecutive neural network states correlates with the validation accuracy.
arXiv Detail & Related papers (2021-05-31T09:17:31Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
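The edge-weighting idea can be sketched as follows: each edge of the complete graph gets a learnable scalar passed through a sigmoid, and each node aggregates its predecessors' features weighted by these gates. Shown as a NumPy forward pass under assumed sizes; in practice the gates are trained end-to-end by backpropagation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Complete DAG over n stages: node j aggregates all earlier nodes i < j,
# weighted by a learnable edge parameter alpha[i, j] (sigmoid-gated).
rng = np.random.default_rng(0)
n_nodes, f = 5, 32
alpha = rng.normal(size=(n_nodes, n_nodes))   # learnable edge logits
feats = [rng.normal(size=f)]                  # node 0 holds the input
for j in range(1, n_nodes):
    agg = sum(sigmoid(alpha[i, j]) * feats[i] for i in range(j))
    feats.append(np.tanh(agg))                # stand-in for a conv block
output = feats[-1]
```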
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- The efficiency of deep learning algorithms for detecting anatomical reference points on radiological images of the head profile [55.41644538483948]
A U-Net neural network detects anatomical reference points more accurately than a fully convolutional neural network.
Its detected reference points are also closer to the average annotations made by a group of orthodontists.
arXiv Detail & Related papers (2020-05-25T13:51:03Z)