Decomposing spiking neural networks with Graphical Neural Activity Threads
- URL: http://arxiv.org/abs/2306.16684v1
- Date: Thu, 29 Jun 2023 05:10:11 GMT
- Title: Decomposing spiking neural networks with Graphical Neural Activity Threads
- Authors: Bradley H. Theilman, Felix Wang, Fred Rothganger, James B. Aimone
- Abstract summary: We introduce techniques for analyzing spiking neural networks that decompose neural activity into multiple, disjoint, parallel threads of activity.
We find that this graph of spiking activity naturally decomposes into disjoint connected components that overlap in space and time.
We provide an efficient algorithm for finding analogous threads that reoccur in large spiking datasets, revealing that seemingly distinct spike trains are composed of similar underlying threads of activity.
- Score: 0.734084539365505
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A satisfactory understanding of information processing in spiking neural
networks requires appropriate computational abstractions of neural activity.
Traditionally, the neural population state vector has been the most common
abstraction applied to spiking neural networks, but this requires artificially
partitioning time into bins that are not obviously relevant to the network
itself. We introduce a distinct set of techniques for analyzing spiking neural
networks that decomposes neural activity into multiple, disjoint, parallel
threads of activity. We construct these threads by estimating the degree of
causal relatedness between pairs of spikes, then use these estimates to
construct a directed acyclic graph that traces how the network activity evolves
through individual spikes. We find that this graph of spiking activity
naturally decomposes into disjoint connected components that overlap in space
and time, which we call Graphical Neural Activity Threads (GNATs). We provide
an efficient algorithm for finding analogous threads that reoccur in large
spiking datasets, revealing that seemingly distinct spike trains are composed
of similar underlying threads of activity, a hallmark of compositionality. The
picture of spiking neural networks provided by our GNAT analysis points to new
abstractions for spiking neural computation that are naturally adapted to the
spatiotemporally distributed dynamics of spiking neural networks.
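To make the construction concrete, below is a minimal Python sketch (using NetworkX) of the pipeline the abstract describes: score causal relatedness between spike pairs, link sufficiently related pairs into a directed acyclic graph, read off the weakly connected components as threads, and group structurally analogous threads. The exponential `relatedness` kernel, the threshold `theta`, the time constant `tau`, and the use of a Weisfeiler-Lehman graph hash for matching threads are illustrative assumptions, not the authors' actual estimator or matching algorithm; the brute-force pairwise loop is likewise for clarity only, whereas the paper supplies an efficient algorithm.

```python
# A minimal sketch of the GNAT decomposition, assuming a hypothetical
# exponential causal-relatedness kernel and threshold (NOT the paper's
# actual estimator). Requires: networkx >= 2.5, numpy.
import networkx as nx
import numpy as np

def relatedness(t_pre, t_post, tau=5.0):
    """Hypothetical causal-relatedness score: decays with spike lag (ms)."""
    dt = t_post - t_pre
    return np.exp(-dt / tau) if dt > 0 else 0.0

def build_spike_dag(spikes, synapses, theta=0.1):
    """spikes: list of (neuron_id, time); synapses: set of (pre, post) pairs.
    Adds an edge between two spikes whenever their neurons are synaptically
    connected and the relatedness score clears the assumed threshold theta.
    Edges always point forward in time, so the graph is acyclic."""
    g = nx.DiGraph()
    g.add_nodes_from(range(len(spikes)))
    for i, (n_i, t_i) in enumerate(spikes):
        for j, (n_j, t_j) in enumerate(spikes):
            if t_j > t_i and (n_i, n_j) in synapses \
                    and relatedness(t_i, t_j) >= theta:
                g.add_edge(i, j)
    return g

def gnats(dag):
    """Each weakly connected component of the spike DAG is one thread."""
    return [dag.subgraph(c).copy()
            for c in nx.weakly_connected_components(dag)]

def group_analogous(threads):
    """Group structurally identical threads via a Weisfeiler-Lehman hash,
    a stand-in for the paper's thread-matching algorithm."""
    groups = {}
    for thread in threads:
        key = nx.weisfeiler_lehman_graph_hash(thread)
        groups.setdefault(key, []).append(thread)
    return groups

# Example: two causally linked spikes plus one unrelated late spike.
spikes = [(0, 1.0), (1, 3.0), (2, 40.0)]   # (neuron_id, spike time in ms)
synapses = {(0, 1), (1, 2)}                # neuron 0 -> 1 -> 2
threads = gnats(build_spike_dag(spikes, synapses))
print(len(threads))  # 2 threads: {spike 0 -> spike 1} and the lone spike 2
```

Reading threads off as weakly connected components matches the abstract's observation that the spike graph "naturally decomposes into disjoint connected components" that may still overlap in space and time.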
Related papers
- Temporal Spiking Neural Networks with Synaptic Delay for Graph Reasoning [91.29876772547348]
Spiking neural networks (SNNs) are investigated as biologically inspired models of neural computation.
This paper reveals that SNNs, when amalgamated with synaptic delay and temporal coding, are proficient in executing (knowledge) graph reasoning.
arXiv Detail & Related papers (2024-05-27T05:53:30Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Expressivity of Spiking Neural Networks [15.181458163440634]
We study the capabilities of spiking neural networks where information is encoded in the firing time of neurons.
We prove that, in contrast to ReLU networks, spiking neural networks can realize both continuous and discontinuous functions.
arXiv Detail & Related papers (2023-08-16T08:45:53Z)
- Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z)
- Spike-based computation using classical recurrent neural networks [1.9171404264679484]
Spiking neural networks are artificial neural networks in which communication between neurons consists solely of events, also called spikes.
We modify the dynamics of a well-known, easily trainable type of recurrent neural network to make it event-based.
We show that this new network can achieve performance comparable to other types of spiking networks on the MNIST benchmark.
arXiv Detail & Related papers (2023-06-06T12:19:12Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Deep Spiking Convolutional Neural Network for Single Object Localization Based On Deep Continuous Local Learning [0.0]
We propose a deep convolutional spiking neural network for the localization of a single object in a grayscale image.
Results reported on Oxford-IIIT-Pet validate the use of spiking neural networks with a supervised learning approach.
arXiv Detail & Related papers (2021-05-12T12:02:05Z)
- Graph Structure of Neural Networks [104.33754950606298]
We show how the graph structure of neural networks affects their predictive performance.
A "sweet spot" of relational graphs leads to neural networks with significantly improved predictive performance.
Top-performing neural networks have graph structure surprisingly similar to those of real biological neural networks.
arXiv Detail & Related papers (2020-07-13T17:59:31Z)
- Effective and Efficient Computation with Multiple-timescale Spiking Recurrent Neural Networks [0.9790524827475205]
We show how a novel type of adaptive spiking recurrent neural network (SRNN) is able to achieve state-of-the-art performance.
We calculate a >100x energy improvement for our SRNNs over classical RNNs on the harder tasks.
arXiv Detail & Related papers (2020-05-24T01:04:53Z)
- Neural Rule Ensembles: Encoding Sparse Feature Interactions into Neural Networks [3.7277730514654555]
We use decision trees to capture relevant features and their interactions and define a mapping to encode extracted relationships into a neural network.
At the same time, through feature selection, it enables learning more compact representations than state-of-the-art tree-based approaches.
arXiv Detail & Related papers (2020-02-11T11:22:20Z)