Self-organization of multi-layer spiking neural networks
- URL: http://arxiv.org/abs/2006.06902v1
- Date: Fri, 12 Jun 2020 01:44:48 GMT
- Title: Self-organization of multi-layer spiking neural networks
- Authors: Guruprasad Raghavan, Cong Lin, Matt Thomson
- Abstract summary: A key mechanism that enables the formation of complex architecture in the developing brain is the emergence of traveling spatio-temporal waves of neuronal activity.
We propose a modular tool-kit in the form of a dynamical system that can be seamlessly stacked to assemble multi-layer neural networks.
Our framework leads to the self-organization of a wide variety of architectures, ranging from multi-layer perceptrons to autoencoders.
- Score: 4.859525864236446
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Living neural networks in our brains autonomously self-organize into large,
complex architectures during early development, resulting in an organized and
functional organic computational device. A key mechanism that enables the
formation of complex architecture in the developing brain is the emergence of
traveling spatio-temporal waves of neuronal activity across the growing brain.
Inspired by this strategy, we attempt to efficiently self-organize large neural
networks with an arbitrary number of layers into a wide variety of
architectures. To achieve this, we propose a modular tool-kit in the form of a
dynamical system that can be seamlessly stacked to assemble multi-layer neural
networks. The dynamical system encapsulates the dynamics of spiking units,
their inter/intra layer interactions as well as the plasticity rules that
control the flow of information between layers. The key features of our
tool-kit are (1) autonomous spatio-temporal waves across multiple layers
triggered by activity in the preceding layer and (2) spike-timing-dependent
plasticity (STDP) learning rules that update the inter-layer connectivity based
on wave activity in the connecting layers. Our framework leads to the
self-organization of a wide variety of architectures, ranging from multi-layer
perceptrons to autoencoders. We also demonstrate that emergent waves can
self-organize spiking network architecture to perform unsupervised learning,
and networks can be coupled with a linear classifier to perform classification
on classic image datasets like MNIST. Broadly, our work shows that a dynamical
systems framework for learning can be used to self-organize large computational
devices.
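The abstract's two key ingredients, layer-local spiking dynamics that can support traveling waves and STDP acting on the inter-layer weights, can be made concrete with a minimal sketch. The code below is illustrative only: the layer size, time constants, STDP gains, and the names lif_step and stdp_update are hypothetical choices for a pair-based STDP rule on a 1-D sheet of leaky integrate-and-fire units, not the paper's actual model or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes and constants -- not taken from the paper.
N = 100           # neurons per layer (1-D sheet)
TAU_V = 20.0      # membrane time constant (ms)
V_TH = 1.0        # spike threshold
DT = 1.0          # integration step (ms)
TAU_TRACE = 20.0  # STDP trace time constant (ms)
A_PLUS, A_MINUS = 0.01, 0.012  # STDP gains (illustrative values)

def lif_step(v, input_current, spikes_prev, lateral_w):
    """One Euler step of a leaky integrate-and-fire layer.

    Lateral excitation between neighbors lets a localized burst
    propagate across the sheet as a traveling wave.
    """
    lateral_input = lateral_w @ spikes_prev
    v = v + DT / TAU_V * (-v) + input_current + lateral_input
    spikes = (v >= V_TH).astype(float)
    v = np.where(spikes > 0, 0.0, v)  # reset membrane after spiking
    return v, spikes

def stdp_update(w, pre_trace, post_trace, pre_spikes, post_spikes):
    """Pair-based STDP on the inter-layer weights w (post x pre).

    Pre-before-post potentiates; post-before-pre depresses.
    """
    pre_trace = pre_trace * np.exp(-DT / TAU_TRACE) + pre_spikes
    post_trace = post_trace * np.exp(-DT / TAU_TRACE) + post_spikes
    w += A_PLUS * np.outer(post_spikes, pre_trace)   # potentiation
    w -= A_MINUS * np.outer(post_trace, pre_spikes)  # depression
    np.clip(w, 0.0, 1.0, out=w)
    return w, pre_trace, post_trace

# Nearest-neighbor lateral coupling within each layer.
lateral = 0.6 * (np.eye(N, k=1) + np.eye(N, k=-1))

# Two stacked layers; layer 2 is driven through plastic weights w12,
# mirroring the "wave in one layer triggers waves in the next" idea.
v1, v2 = np.zeros(N), np.zeros(N)
s1, s2 = np.zeros(N), np.zeros(N)
w12 = rng.uniform(0.0, 0.2, size=(N, N))
pre_tr, post_tr = np.zeros(N), np.zeros(N)

for t in range(200):
    drive = np.zeros(N)
    if t < 5:
        drive[:3] = 2.0  # brief stimulus at one edge seeds a wave
    v1, s1 = lif_step(v1, drive, s1, lateral)
    v2, s2 = lif_step(v2, (w12 @ s1) * 0.05, s2, lateral)
    w12, pre_tr, post_tr = stdp_update(w12, pre_tr, post_tr, s1, s2)
```

In the setting the abstract describes, the spike activity of the final layer would then be read out by a linear classifier for tasks like MNIST; the loop above only illustrates the wave-plus-STDP mechanics.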
Related papers
- The Dynamic Net Architecture: Learning Robust and Holistic Visual Representations Through Self-Organizing Networks [3.9848584845601014]
We present a novel intelligent-system architecture called "Dynamic Net Architecture" (DNA).
DNA relies on recurrence-stabilized networks, which we discuss in application to vision.
arXiv Detail & Related papers (2024-07-08T06:22:10Z)
- Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks [0.0]
We introduce and evaluate a brain-like neural network model capable of unsupervised representation learning.
The model was tested on a diverse set of popular machine learning benchmarks.
arXiv Detail & Related papers (2024-06-07T08:32:30Z)
- Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE).
This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z)
- DSAM: A Deep Learning Framework for Analyzing Temporal and Spatial Dynamics in Brain Networks [4.041732967881764]
Most rs-fMRI studies compute a single static functional connectivity matrix across brain regions of interest.
These approaches are at risk of oversimplifying brain dynamics and lack proper consideration of the goal at hand.
We propose a novel interpretable deep learning framework that learns a goal-specific functional connectivity matrix directly from time series.
arXiv Detail & Related papers (2024-05-19T23:35:06Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- On the effectiveness of neural priors in modeling dynamical systems [28.69155113611877]
We discuss the architectural regularization that neural networks offer when learning such systems.
We show that simple coordinate networks with few layers can be used to solve multiple problems in modelling dynamical systems.
arXiv Detail & Related papers (2023-03-10T06:21:24Z)
- Functional2Structural: Cross-Modality Brain Networks Representation Learning [55.24969686433101]
Graph mining on brain networks may facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
We propose a novel graph learning framework, known as Deep Signed Brain Networks (DSBN), with a signed graph encoder.
We validate our framework on clinical phenotype and neurodegenerative disease prediction tasks using two independent, publicly available datasets.
arXiv Detail & Related papers (2022-05-06T03:45:36Z)
- Learning Contact Dynamics using Physically Structured Neural Networks [81.73947303886753]
We use connections between deep neural networks and differential equations to design a family of deep network architectures for representing contact dynamics between objects.
We show that these networks can learn discontinuous contact events in a data-efficient manner from noisy observations.
Our results indicate that an idealised form of touch feedback is a key component of making this learning problem tractable.
arXiv Detail & Related papers (2021-02-22T17:33:51Z)
- A multi-agent model for growing spiking neural networks [0.0]
This project explored rules for growing the connections between neurons in spiking neural networks as a learning mechanism.
Results in a simulation environment showed that, for a given set of parameters, it is possible to reach topologies that reproduce the tested functions.
This project also opens the door to the usage of techniques like genetic algorithms for obtaining the best suited values for the model parameters.
arXiv Detail & Related papers (2020-09-21T15:11:29Z)
- Automated Search for Resource-Efficient Branched Multi-Task Networks [81.48051635183916]
We propose a principled approach, rooted in differentiable neural architecture search, to automatically define branching structures in a multi-task neural network.
We show that our approach consistently finds high-performing branching structures within limited resource budgets.
arXiv Detail & Related papers (2020-08-24T09:49:19Z)
- Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture targeting explicitly multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
arXiv Detail & Related papers (2020-06-29T08:35:49Z)