Approximating nonlinear functions with latent boundaries in low-rank excitatory-inhibitory spiking networks
- URL: http://arxiv.org/abs/2307.09334v3
- Date: Thu, 28 Dec 2023 15:40:08 GMT
- Title: Approximating nonlinear functions with latent boundaries in low-rank excitatory-inhibitory spiking networks
- Authors: William F. Podlaski, Christian K. Machens
- Abstract summary: We put forth a new framework for spike-based computation in low-rank excitatory-inhibitory spiking networks.
Our work proposes a new perspective on spiking networks that may serve as a starting point for a mechanistic understanding of biological spike-based computation.
- Score: 5.955727366271805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep feedforward and recurrent rate-based neural networks have become
successful functional models of the brain, but they neglect obvious biological
details such as spikes and Dale's law. Here we argue that these details are
crucial in order to understand how real neural circuits operate. Towards this
aim, we put forth a new framework for spike-based computation in low-rank
excitatory-inhibitory spiking networks. By considering populations with rank-1
connectivity, we cast each neuron's spiking threshold as a boundary in a
low-dimensional input-output space. We then show how the combined thresholds of
a population of inhibitory neurons form a stable boundary in this space, and
those of a population of excitatory neurons form an unstable boundary.
Combining the two boundaries results in a rank-2 excitatory-inhibitory (EI)
network with inhibition-stabilized dynamics at the intersection of the two
boundaries. The computation of the resulting networks can be understood as the
difference of two convex functions and is thereby capable of approximating
arbitrary non-linear input-output mappings. We demonstrate several properties
of these networks, including noise suppression and amplification, irregular
activity and synaptic balance, as well as how they relate to rate network
dynamics in the limit that the boundary becomes soft. Finally, while our work
focuses on small networks (5-50 neurons), we discuss potential avenues for
scaling up to much larger networks. Overall, our work proposes a new
perspective on spiking networks that may serve as a starting point for a
mechanistic understanding of biological spike-based computation.
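
To make the threshold-as-boundary picture concrete, below is a minimal spike-coding sketch in the style this framework builds on: a single population whose neurons fire whenever the latent readout error crosses their individual thresholds, so that their combined thresholds hold the readout at a boundary near the input. The leaky readout, greedy one-spike-per-step rule, and all parameter values are illustrative assumptions, not the paper's exact EI model.

```python
# Minimal sketch of a rank-1 population tracking a 1-D latent signal in the
# spike-coding style this framework builds on. Neuron i has decoding weight
# d[i]; its voltage is the readout error projected onto d[i], and it spikes
# when that error exceeds its threshold d[i]**2 / 2. The combined thresholds
# act as a boundary in the latent space: whenever the readout falls too far
# below the target, some neuron's boundary is crossed and a spike pushes the
# readout back. All parameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 10                            # number of neurons
d = rng.uniform(0.05, 0.2, N)     # positive decoding weights (rank-1 readout)
T = d**2 / 2                      # per-neuron thresholds (one boundary each)

dt, tau = 1e-3, 0.02              # time step (s), readout time constant (s)
steps = 2000
t = np.arange(steps) * dt
x = 1.0 + 0.5 * np.sin(2 * np.pi * t)   # target latent signal

xhat = 0.0
readout = np.zeros(steps)
spikes = np.zeros((steps, N), dtype=bool)
for k in range(steps):
    V = d * (x[k] - xhat)         # voltages = projected readout error
    i = np.argmax(V - T)          # greedy rule: at most one spike per step
    if V[i] > T[i]:
        xhat += d[i]              # a spike bumps the readout up by d[i]
        spikes[k, i] = True
    xhat += dt / tau * (-xhat)    # leaky decay of the readout
    readout[k] = xhat

# The readout stays close to the target after the initial transient; the
# residual error scale is set by the decoding weights.
print("total spikes:", spikes.sum())
print("max tracking error:", np.max(np.abs(x[200:] - readout[200:])))
```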
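The difference-of-convex claim can also be checked directly. A convex, piecewise-linear (max-affine) function is exactly the kind of boundary that a population of linear thresholds traces out in the latent space, and subtracting one such boundary from another produces a nonconvex mapping. The decomposition below is a hand-picked toy example, not taken from the paper.

```python
# Sketch of the difference-of-convex-functions view: each population's
# combined thresholds trace out a convex, piecewise-linear (max-affine)
# boundary, and subtracting the excitatory boundary from the inhibitory one
# yields a nonconvex input-output mapping.
import numpy as np

def max_affine(x, slopes, offsets):
    # Convex piecewise-linear function: max over affine pieces, one per neuron.
    return np.max(np.outer(slopes, x) + offsets[:, None], axis=0)

x = np.linspace(-2.0, 2.0, 401)

# One boundary g(x) = max(1, x, -x) and another h(x) = max(x, -x):
# both convex, each built from per-neuron affine pieces.
g = max_affine(x, np.array([0.0, 1.0, -1.0]), np.array([1.0, 0.0, 0.0]))
h = max_affine(x, np.array([1.0, -1.0]), np.array([0.0, 0.0]))

# Their difference is the nonconvex "hat" bump max(0, 1 - |x|).
hat = np.maximum(0.0, 1.0 - np.abs(x))
assert np.allclose(g - h, hat)
print("g - h reproduces the hat function exactly")
```

Because any continuous piecewise-linear function can be written as a difference of two max-affine functions, adding more threshold pieces to each population lets such a difference approximate arbitrary nonlinear input-output mappings, matching the expressivity claim in the abstract.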
Related papers
- Biologically-Informed Excitatory and Inhibitory Balance for Robust Spiking Neural Network Training [0.40498500266986387]
Spiking neural networks, which draw inspiration from the biological constraints of the brain, promise an energy-efficient paradigm for artificial intelligence.
In this work, we identify several key factors, such as low initial firing rates and diverse inhibitory spiking patterns, that determine the ability to train spiking networks.
The results indicate that networks with the biologically realistic 80:20 excitatory:inhibitory balance can reliably train at low activity levels and in noisy environments.
arXiv Detail & Related papers (2024-04-24T03:29:45Z)
- Quantum-Inspired Analysis of Neural Network Vulnerabilities: The Role of Conjugate Variables in System Attacks [54.565579874913816]
Neural networks exhibit an inherent vulnerability to small, non-random perturbations that manifest as adversarial attacks.
This mechanism is mathematically congruent with the uncertainty principle of quantum physics, revealing a previously unanticipated interdisciplinary connection.
arXiv Detail & Related papers (2024-02-16T02:11:27Z)
- Expressivity of Spiking Neural Networks [15.181458163440634]
We study the capabilities of spiking neural networks where information is encoded in the firing time of neurons.
In contrast to ReLU networks, we prove that spiking neural networks can realize both continuous and discontinuous functions.
arXiv Detail & Related papers (2023-08-16T08:45:53Z)
- Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z)
- Gradient Descent in Neural Networks as Sequential Learning in RKBS [63.011641517977644]
We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights.
We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning.
arXiv Detail & Related papers (2023-02-01T03:18:07Z)
- Exploring the Approximation Capabilities of Multiplicative Neural Networks for Smooth Functions [9.936974568429173]
We consider two classes of target functions: generalized bandlimited functions and Sobolev-type balls.
Our results demonstrate that multiplicative neural networks can approximate these functions with significantly fewer layers and neurons.
These findings suggest that multiplicative gates can outperform standard feed-forward layers and have potential for improving neural network design.
arXiv Detail & Related papers (2023-01-11T17:57:33Z)
- Zonotope Domains for Lagrangian Neural Network Verification [102.13346781220383]
We decompose the problem of verifying a deep neural network into the verification of many 2-layer neural networks.
Our technique yields bounds that improve upon both linear programming and Lagrangian-based verification techniques.
arXiv Detail & Related papers (2022-10-14T19:31:39Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Linear approximability of two-layer neural networks: A comprehensive analysis based on spectral decay [4.042159113348107]
We first consider the case of single neuron and show that the linear approximability, quantified by the Kolmogorov width, is controlled by the eigenvalue decay of an associate kernel.
We show that similar results also hold for two-layer neural networks.
arXiv Detail & Related papers (2021-08-10T23:30:29Z)
- Rich dynamics caused by known biological brain network features resulting in stateful networks [0.0]
The internal state of a neuron or network becomes a defining factor in how information is represented within the network.
In this study we assessed the impact of varying specific intrinsic parameters of the neurons that enriched network state dynamics.
We found such effects were more profound in sparsely connected networks than in densely connected networks.
arXiv Detail & Related papers (2021-06-03T08:32:43Z)
- And/or trade-off in artificial neurons: impact on adversarial robustness [91.3755431537592]
The presence of a sufficient number of OR-like neurons in a network can lead to classification brittleness and increased vulnerability to adversarial attacks.
We define AND-like neurons and propose measures to increase their proportion in the network.
Experimental results on the MNIST dataset suggest that our approach holds promise as a direction for further exploration.
arXiv Detail & Related papers (2021-02-15T08:19:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.