Adaptive conversion of real-valued input into spike trains
- URL: http://arxiv.org/abs/2104.05401v1
- Date: Mon, 12 Apr 2021 12:33:52 GMT
- Title: Adaptive conversion of real-valued input into spike trains
- Authors: Alexander Hadjiivanov
- Abstract summary: This paper presents a biologically plausible method for converting real-valued input into spike trains for processing with spiking neural networks.
The proposed method mimics the adaptive behaviour of retinal ganglion cells and allows input neurons to adapt their response to changes in the statistics of the input.
- Score: 91.3755431537592
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a biologically plausible method for converting
real-valued input into spike trains for processing with spiking neural
networks. The proposed method mimics the adaptive behaviour of retinal ganglion
cells and allows input neurons to adapt their response to changes in the
statistics of the input. Thus, rather than passively receiving values and
forwarding them to the hidden and output layers, the input layer acts as a
self-regulating filter which emphasises deviations from the average while
allowing the input neurons to become effectively desensitised to the average
itself. Another merit of the proposed method is that it requires only one input
neuron per variable, rather than an entire population of neurons as in the case
of the commonly used conversion method based on Gaussian receptive fields. In
addition, since the statistics of the input emerge naturally over time, it
becomes unnecessary to pre-process the data before feeding it to the network.
This enables spiking neural networks to process raw, non-normalised streaming
data. A proof-of-concept experiment is performed to demonstrate that the
proposed method operates as expected.
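The encoding scheme described in the abstract (each input neuron tracks the running statistics of its variable and spikes on deviations from the adapted baseline, becoming desensitised to the baseline itself) can be sketched in a few lines. The following is a minimal illustration of that general idea, not the paper's actual algorithm; the ON/OFF spike channels, the exponential-moving-average baseline, and the parameter values are assumptions made here for clarity.

```python
def adaptive_spike_encoder(values, alpha=0.2, threshold=0.5):
    """Encode a stream of real values into spikes by tracking a running
    mean and spiking when the deviation from that mean exceeds a
    threshold. One encoder handles one input variable, mirroring the
    one-neuron-per-variable property claimed in the abstract."""
    mean = values[0]          # adaptive estimate of the input baseline
    spikes = []               # +1 = ON spike, -1 = OFF spike, 0 = silent
    for v in values:
        deviation = v - mean
        if deviation > threshold:
            spikes.append(1)       # input rose above the adapted baseline
        elif deviation < -threshold:
            spikes.append(-1)      # input fell below the adapted baseline
        else:
            spikes.append(0)       # near the baseline: neuron stays silent
        mean += alpha * deviation  # adapt the baseline toward the input
    return spikes

# A constant input elicits no spikes, while a step change produces a
# short burst of ON spikes that dies out as the baseline re-adapts:
stream = [0.0] * 20 + [2.0] * 20
out = adaptive_spike_encoder(stream)
```

Because the baseline adapts online, no pre-normalisation of the stream is needed, which is the property the abstract emphasises for raw streaming data.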
Related papers
- Out-of-Distribution Detection using Neural Activation Prior [15.673290330356194]
Out-of-distribution detection (OOD) is a crucial technique for deploying machine learning models in the real world.
We propose a simple yet effective Neural Activation Prior (NAP) for OOD detection.
Our method achieves the state-of-the-art performance on CIFAR benchmark and ImageNet dataset.
arXiv Detail & Related papers (2024-02-28T08:45:07Z)
- WaLiN-GUI: a graphical and auditory tool for neuron-based encoding [73.88751967207419]
Neuromorphic computing relies on spike-based, energy-efficient communication.
We develop a tool to identify suitable configurations for neuron-based encoding of sample-based data into spike trains.
The WaLiN-GUI is provided open source and with documentation.
arXiv Detail & Related papers (2023-10-25T20:34:08Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- Using Linear Regression for Iteratively Training Neural Networks [4.873362301533824]
We present a simple linear regression based approach for learning the weights and biases of a neural network.
The approach is intended to scale to larger, more complex architectures.
arXiv Detail & Related papers (2023-07-11T11:53:25Z)
- Evolving Neural Selection with Adaptive Regularization [7.298440208725654]
We show a method in which the selection of neurons in deep neural networks evolves, adapting to the difficulty of prediction.
We propose the Adaptive Neural Selection (ANS) framework, which evolves to weigh neurons in a layer to form network variants.
Experimental results show that the proposed method can significantly improve the performance of commonly-used neural network architectures.
arXiv Detail & Related papers (2022-04-04T17:19:52Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- L4-Norm Weight Adjustments for Converted Spiking Neural Networks [6.417011237981518]
Spiking Neural Networks (SNNs) are being explored for their potential energy efficiency benefits.
Non-spiking artificial neural networks are typically trained with gradient descent using backpropagation.
One common technique is to train a non-spiking artificial neural network and then convert it to a spiking network.
arXiv Detail & Related papers (2021-11-17T23:33:20Z)
- Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
- The Compact Support Neural Network [6.47243430672461]
We present a neuron generalization that has the standard dot-product-based neuron and the RBF neuron as two extreme cases of a shape parameter.
We show how to avoid difficulties in training a neural network with such neurons, by starting with a trained standard neural network and gradually increasing the shape parameter to the desired value.
arXiv Detail & Related papers (2021-04-01T06:08:09Z)
- And/or trade-off in artificial neurons: impact on adversarial robustness [91.3755431537592]
The presence of a sufficient number of OR-like neurons in a network can lead to classification brittleness and increased vulnerability to adversarial attacks.
We define AND-like neurons and propose measures to increase their proportion in the network.
Experimental results on the MNIST dataset suggest that our approach holds promise as a direction for further exploration.
arXiv Detail & Related papers (2021-02-15T08:19:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.