Temporal support vectors for spiking neuronal networks
- URL: http://arxiv.org/abs/2205.14544v1
- Date: Sat, 28 May 2022 23:47:15 GMT
- Title: Temporal support vectors for spiking neuronal networks
- Authors: Ran Rubin and Haim Sompolinsky
- Abstract summary: We introduce the Temporal Support Vector Machine (T-SVM), a novel extension of the static Support Vector Machine.
We show that T-SVM and its kernel extensions generate robust synaptic weight vectors in spiking neurons.
We propose T-SVM with nonlinear kernels as a new model of the computational role of the nonlinearities and extensive morphologies of neuronal dendritic trees.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When neural circuits learn to perform a task, it is often the case that there
are many sets of synaptic connections that are consistent with the task.
However, only a small number of possible solutions are robust to noise in the
input and are capable of generalizing their performance of the task to new
inputs. Finding such good solutions is an important goal of learning systems in
general and neuronal circuits in particular. For systems operating with static
inputs and outputs, a well-known approach to the problem is large-margin
methods such as Support Vector Machines (SVMs). By maximizing the distance of
the data vectors from the decision surface, these solutions enjoy increased
robustness to noise and enhanced generalization abilities. Furthermore, the use
of the kernel method enables SVMs to perform classification tasks that require
nonlinear decision surfaces. However, for dynamical systems with event based
outputs, such as spiking neural networks and other continuous time threshold
crossing systems, this optimality criterion is inapplicable due to the strong
temporal correlations in their input and output. We introduce a novel extension
of the static SVM, the Temporal Support Vector Machine (T-SVM). The T-SVM
finds a solution that maximizes a new construct - the dynamical margin. We show
that T-SVM and its kernel extensions generate robust synaptic weight vectors in
spiking neurons and enable their learning of tasks that require nonlinear
spatial integration of synaptic inputs. We propose T-SVM with nonlinear kernels
as a new model of the computational role of the nonlinearities and extensive
morphologies of neuronal dendritic trees.
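To make the margin idea concrete, the sketch below sets up a toy margin-style problem for a single threshold neuron: a weighted sum of exponentially filtered presynaptic spike trains must exceed the firing threshold by a safety margin at a few desired spike times and stay below it by the same margin everywhere else. This is only a simplified caricature under assumed parameters and a crude subgradient solver, not the paper's T-SVM formulation or its kernel extension.

```python
import numpy as np

# Toy margin-style problem for a single threshold neuron. This is NOT the
# paper's T-SVM or its kernel extension; all parameters, the exponential
# synaptic filter, and the subgradient solver below are assumptions made
# only to illustrate the idea of a margin around a firing threshold.

rng = np.random.default_rng(0)
n_syn, n_bins, dt, tau = 50, 500, 1e-3, 20e-3

# Random presynaptic spikes -> exponentially filtered input traces x(t)
pre_spikes = rng.random((n_bins, n_syn)) < 0.02
x = np.zeros((n_bins, n_syn))
for t in range(1, n_bins):
    x[t] = x[t - 1] * np.exp(-dt / tau) + pre_spikes[t]

theta = 1.0                                   # firing threshold
target = np.zeros(n_bins, dtype=bool)         # desired output spike times
target[[120, 300, 450]] = True

# Margin-style goal: small ||w|| subject to
#   w . x(t) >= theta + 1 at desired spike times,
#   w . x(t) <= theta - 1 at all other times,
# optimized here crudely by subgradient descent on the hinge losses.
w = np.zeros(n_syn)
lr = 1e-2
for _ in range(3000):
    v = x @ w
    grad = w.copy()
    miss_fire = target & (v < theta + 1.0)     # should fire, margin not met
    miss_quiet = ~target & (v > theta - 1.0)   # should stay silent, margin not met
    grad -= x[miss_fire].sum(axis=0)
    grad += x[miss_quiet].sum(axis=0)
    w -= lr * grad

v = x @ w
violations = (target & (v < theta)) | (~target & (v >= theta))
print("hard threshold violations:", int(violations.sum()))
```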
Related papers
- Short-Long Convolutions Help Hardware-Efficient Linear Attention to Focus on Long Sequences [60.489682735061415]
We propose CHELA, which replaces state space models with short-long convolutions and implements linear attention in a divide-and-conquer manner.
Our experiments on the Long Range Arena benchmark and language modeling tasks demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2024-06-12T12:12:38Z)
- Distributed Representations Enable Robust Multi-Timescale Symbolic Computation in Neuromorphic Hardware [3.961418890143814]
We describe a single-shot weight learning scheme to embed robust multi-timescale dynamics into attractor-based RSNNs.
We embed finite state machines into the RSNN dynamics by superimposing a symmetric autoassociative weight matrix.
This work introduces a scalable approach to embed robust symbolic computation through recurrent dynamics into neuromorphic hardware.
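The entry above mentions superimposing a symmetric autoassociative weight matrix. The generic Hopfield-style outer-product construction below is one standard way to build such a matrix and is shown only as a reference point, not as the cited paper's exact embedding scheme.

```python
import numpy as np

# Generic Hopfield-style autoassociative weights (an assumption, not the
# cited paper's scheme): each stored pattern in {-1, +1}^N is superimposed
# as a symmetric outer product, making the patterns attractors of the
# sign(W @ s) dynamics when the number of patterns is small.
rng = np.random.default_rng(1)
N, P = 200, 10
patterns = rng.choice([-1.0, 1.0], size=(P, N))

W = (patterns.T @ patterns) / N      # symmetric, rank <= P
np.fill_diagonal(W, 0.0)             # no self-connections

# Recall from a corrupted cue: flip 10% of one pattern's bits.
s = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
s[flip] *= -1
for _ in range(20):                  # simple synchronous updates
    s = np.where(W @ s >= 0, 1.0, -1.0)
print("overlap with stored pattern:", float(s @ patterns[0]) / N)
```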
arXiv Detail & Related papers (2024-05-02T14:11:50Z)
- Heterogenous Memory Augmented Neural Networks [84.29338268789684]
We introduce a novel heterogeneous memory augmentation approach for neural networks.
By introducing learnable memory tokens with an attention mechanism, we can effectively boost performance without huge computational overhead.
We evaluate our approach on various image and graph-based tasks under both in-distribution (ID) and out-of-distribution (OOD) conditions.
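One common way to realize learnable memory tokens with attention is sketched below. The class name, sizes, and the prepend-and-drop layout are assumptions about the general technique; the cited paper's heterogeneous memory design may differ.

```python
import torch
import torch.nn as nn

# Illustrative memory-token augmentation (assumed design, not the cited
# paper's): a small set of trainable vectors is prepended to every input
# sequence so self-attention can read from and write to them cheaply.
class MemoryAugmentedAttention(nn.Module):
    def __init__(self, dim: int = 64, n_mem: int = 8, n_heads: int = 4):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(n_mem, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim)
        mem = self.memory.unsqueeze(0).expand(x.size(0), -1, -1)
        z = torch.cat([mem, x], dim=1)          # prepend memory tokens
        out, _ = self.attn(z, z, z)
        return out[:, mem.size(1):]             # drop memory slots on output

x = torch.randn(2, 16, 64)
print(MemoryAugmentedAttention()(x).shape)      # torch.Size([2, 16, 64])
```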
arXiv Detail & Related papers (2023-10-17T01:05:28Z)
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
- The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the input-output relationship of a detailed cortical neuron model with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z)
- Amplifying Sine Unit: An Oscillatory Activation Function for Deep Neural Networks to Recover Nonlinear Oscillations Efficiently [0.0]
In this work, we put forward a methodology based on deep neural networks with a responsive layer structure to deal with nonlinear oscillations in microelectromechanical systems.
We propose a novel oscillatory activation function, the Amplifying Sine Unit (ASU), which is more efficient than the GCU for complex vibratory systems.
Results show that the designed network with our proposed activation function ASU is more reliable and robust to handle the challenges posed by nonlinearity and oscillations.
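The summary above does not state ASU's functional form. The GCU is commonly defined as x·cos(x); by analogy, x·sin(x) is assumed below purely for illustration and may not match the paper's exact definition.

```python
import numpy as np

# Illustrative oscillatory activations. GCU is commonly given as x*cos(x);
# the ASU form below is an assumption made only for this sketch.
def gcu(x):
    return x * np.cos(x)

def asu_assumed(x):
    return x * np.sin(x)

x = np.linspace(-10, 10, 5)
print(gcu(x))
print(asu_assumed(x))
```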
arXiv Detail & Related papers (2023-04-18T14:08:15Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
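For orientation, a minimal snnTorch usage sketch is shown below, using the plain PyTorch backend with a toy random input current; the IPU-optimized release mentioned above may expose different options.

```python
import torch
import snntorch as snn

# Minimal snnTorch sketch: a single leaky integrate-and-fire neuron driven
# by a toy random current for 100 time steps.
lif = snn.Leaky(beta=0.9)        # leaky integrate-and-fire neuron
mem = lif.init_leaky()           # initialise membrane potential
spikes = []
for _ in range(100):
    cur = 0.3 * torch.rand(1)    # toy input current
    spk, mem = lif(cur, mem)
    spikes.append(spk)
print("spike count:", int(torch.stack(spikes).sum()))
```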
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Training very large scale nonlinear SVMs using Alternating Direction Method of Multipliers coupled with the Hierarchically Semi-Separable kernel approximations [0.0]
Nonlinear Support Vector Machines (SVMs) produce significantly higher classification quality than linear ones.
However, their computational complexity is prohibitive for large-scale datasets.
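As a point of contrast with the large-scale ADMM/HSS approach above, a standard exact kernel SVM (scikit-learn, shown below on a toy dataset) is straightforward but scales poorly with the number of training samples.

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC

# Baseline nonlinear (RBF-kernel) SVM on a small toy problem. Exact kernel
# SVMs like this one scale roughly quadratically or worse in the number of
# samples, which is what motivates approximate large-scale solvers.
X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
print("training accuracy:", clf.score(X, y))
```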
arXiv Detail & Related papers (2021-08-09T16:52:04Z)
- Nonlinear computations in spiking neural networks through multiplicative synapses [3.1498833540989413]
Nonlinear computations can be implemented successfully in spiking neural networks.
However, this requires supervised training, and the resulting connectivity can be hard to interpret.
We show how to directly derive the required connectivity for several nonlinear dynamical systems.
arXiv Detail & Related papers (2020-09-08T16:47:27Z)
- Theory of gating in recurrent neural networks [5.672132510411465]
Recurrent neural networks (RNNs) are powerful dynamical models, widely used in machine learning (ML) and neuroscience.
Here, we show that gating offers flexible control of two salient features of the collective dynamics.
The gate controlling timescales leads to a novel, marginally stable state, where the network functions as a flexible integrator.
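To make the timescale-control point concrete, the sketch below implements a generic gated recurrent update (an illustrative assumption; the cited paper analyses a specific gated model that may differ in detail). When the update gate is near zero the state is nearly frozen, giving integrator-like long timescales; when it is near one the state is quickly overwritten.

```python
import numpy as np

# Generic gated recurrent update: the gate z interpolates between keeping
# the old state (z ~ 0, long effective timescale) and overwriting it with
# a new candidate (z ~ 1, fast dynamics).
def gated_step(h, x, Wz, Uz, Wh, Uh):
    z = 1.0 / (1.0 + np.exp(-(Wz @ x + Uz @ h)))   # update gate
    h_cand = np.tanh(Wh @ x + Uh @ h)              # candidate state
    return (1.0 - z) * h + z * h_cand

rng = np.random.default_rng(2)
n, d = 64, 8
params = [rng.normal(scale=1.0 / np.sqrt(d), size=(n, d)),   # Wz
          rng.normal(scale=1.0 / np.sqrt(n), size=(n, n)),   # Uz
          rng.normal(scale=1.0 / np.sqrt(d), size=(n, d)),   # Wh
          rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))]   # Uh
h = np.zeros(n)
for _ in range(100):
    h = gated_step(h, rng.normal(size=d), *params)
print("state norm after 100 steps:", float(np.linalg.norm(h)))
```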
arXiv Detail & Related papers (2020-07-29T13:20:58Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data into the desired structure.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.