Simple and complex spiking neurons: perspectives and analysis in a
simple STDP scenario
- URL: http://arxiv.org/abs/2207.04881v1
- Date: Tue, 28 Jun 2022 10:01:51 GMT
- Title: Simple and complex spiking neurons: perspectives and analysis in a
simple STDP scenario
- Authors: Davide Liberato Manna, Alex Vicente Sola, Paul Kirkland, Trevor Bihl,
Gaetano Di Caterina
- Abstract summary: Spiking neural networks (SNNs) are inspired by biology and neuroscience to create fast and efficient learning systems.
This work considers various neuron models in the literature and then selects computational neuron models that are single-variable, efficient, and display different types of complexities.
We make a comparative study of three simple I&F neuron models, namely the LIF, the Quadratic I&F (QIF) and the Exponential I&F (EIF), to understand whether the use of more complex models increases the performance of the system.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Spiking neural networks (SNNs) are largely inspired by biology and
neuroscience and leverage ideas and theories to create fast and efficient
learning systems. Spiking neuron models are adopted as core processing units in
neuromorphic systems because they enable event-based processing. The
integrate-and-fire (I&F) models are often adopted, with the simple Leaky I&F
(LIF) being the most used. The reason for adopting such models is their
efficiency and/or biological plausibility. Nevertheless, the adoption of LIF
over other neuron models in artificial learning systems has not yet been
rigorously justified. This work considers various neuron models in
the literature and then selects computational neuron models that are
single-variable, efficient, and display different types of complexities. From
this selection, we make a comparative study of three simple I&F neuron models,
namely the LIF, the Quadratic I&F (QIF) and the Exponential I&F (EIF), to
understand whether the use of more complex models increases the performance of
the system and whether the choice of a neuron model can be directed by the task
to be completed. Neuron models are tested within an SNN trained with
Spike-Timing Dependent Plasticity (STDP) on a classification task on the
N-MNIST and DVS Gestures datasets. Experimental results reveal that more
complex neurons manifest the same ability as simpler ones to achieve high
levels of accuracy on a simple dataset (N-MNIST), albeit requiring comparably
more hyper-parameter tuning. However, when the data possess richer
spatio-temporal features, the QIF and EIF neuron models consistently achieve
better results. This suggests that accurately selecting the model based on the
richness of the feature spectrum of the data could improve the whole system's
performance. Finally, the code implementing the spiking neurons in the
SpykeTorch framework is made publicly available.
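As a concrete illustration of the models compared above, the sketch below implements Euler-integrated membrane updates for the three single-variable I&F neurons (LIF, QIF, EIF) together with a pair-based STDP weight update. All parameter names and values are illustrative assumptions for a minimal sketch; this is not the authors' SpykeTorch implementation.

```python
import numpy as np

# Hypothetical defaults: tau (membrane time constant), r (input resistance),
# v_rest (resting potential). Spiking threshold and reset handled separately.

def lif_step(v, i_in, dt=1.0, tau=20.0, v_rest=0.0, r=1.0):
    """Leaky I&F: linear leak toward the resting potential."""
    return v + dt / tau * (-(v - v_rest) + r * i_in)

def qif_step(v, i_in, dt=1.0, tau=20.0, v_rest=0.0, v_c=0.5, a=1.0, r=1.0):
    """Quadratic I&F: quadratic nonlinearity around a critical voltage v_c."""
    return v + dt / tau * (a * (v - v_rest) * (v - v_c) + r * i_in)

def eif_step(v, i_in, dt=1.0, tau=20.0, v_rest=0.0, delta_t=0.5,
             theta=0.8, r=1.0):
    """Exponential I&F: exponential spike-initiation term with sharpness delta_t."""
    return v + dt / tau * (-(v - v_rest)
                           + delta_t * np.exp((v - theta) / delta_t)
                           + r * i_in)

def fire_and_reset(v, v_thresh=1.0, v_reset=0.0):
    """Emit spikes and reset wherever the membrane crosses threshold."""
    spikes = v >= v_thresh
    return np.where(spikes, v_reset, v), spikes

def stdp_update(w, dt_post_pre, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP: dt_post_pre = t_post - t_pre.
    Causal pairs (pre before post) potentiate; anti-causal pairs depress."""
    if dt_post_pre >= 0:
        return w + a_plus * np.exp(-dt_post_pre / tau_plus)
    return w - a_minus * np.exp(dt_post_pre / tau_minus)
```

Note that all three update rules share the same single-variable interface, so swapping the neuron model inside an SNN amounts to swapping one step function, which is the comparison the paper performs.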
Related papers
- Let's Focus on Neuron: Neuron-Level Supervised Fine-tuning for Large Language Model [43.107778640669544]
Large Language Models (LLMs) are composed of neurons that exhibit various behaviors and roles.
Recent studies have revealed that not all neurons are active across different datasets.
We introduce Neuron-Level Fine-Tuning (NeFT), a novel approach that refines the granularity of parameter training down to the individual neuron.
arXiv Detail & Related papers (2024-03-18T09:55:01Z)
- WaLiN-GUI: a graphical and auditory tool for neuron-based encoding [73.88751967207419]
Neuromorphic computing relies on spike-based, energy-efficient communication.
We develop a tool to identify suitable configurations for neuron-based encoding of sample-based data into spike trains.
The WaLiN-GUI is provided open source and with documentation.
arXiv Detail & Related papers (2023-10-25T20:34:08Z)
- Unleashing the Potential of Spiking Neural Networks for Sequential Modeling with Contextual Embedding [32.25788551849627]
Brain-inspired spiking neural networks (SNNs) have struggled to match their biological counterparts in modeling long-term temporal relationships.
This paper presents a novel Contextual Embedding Leaky Integrate-and-Fire (CE-LIF) spiking neuron model.
arXiv Detail & Related papers (2023-08-29T09:33:10Z)
- The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the aforementioned input-output relationship with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z)
- Event-Driven Tactile Learning with Various Location Spiking Neurons [5.822511654546528]
Event-driven learning is still in its infancy due to the limited representation abilities of existing spiking neurons.
We propose a novel "location spiking neuron" model, which enables us to extract features of event-based data in a novel way.
By exploiting the novel location spiking neurons, we propose several models to capture complex tactile-temporal dependencies in the event-driven data.
arXiv Detail & Related papers (2022-10-09T14:49:27Z)
- Modelling Neuronal Behaviour with Time Series Regression: Recurrent Neural Networks on C. Elegans Data [0.0]
We show how the nervous system of C. Elegans can be modelled and simulated with data-driven models using different neural network architectures.
We show that GRU models with a hidden layer size of 4 units are able to reproduce the system's response to very different stimuli with high accuracy.
arXiv Detail & Related papers (2021-07-01T10:39:30Z)
- The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality.
arXiv Detail & Related papers (2020-12-07T01:20:38Z)
- Learning identifiable and interpretable latent models of high-dimensional neural activity using pi-VAE [10.529943544385585]
We propose a method that integrates key ingredients from latent models and traditional neural encoding models.
Our method, pi-VAE, is inspired by recent progress on identifiable variational auto-encoders.
We validate pi-VAE using synthetic data, and apply it to analyze neurophysiological datasets from rat hippocampus and macaque motor cortex.
arXiv Detail & Related papers (2020-11-09T22:00:38Z)
- Neural Additive Models: Interpretable Machine Learning with Neural Nets [77.66871378302774]
Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks.
We propose Neural Additive Models (NAMs), which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models.
NAMs learn a linear combination of neural networks that each attend to a single input feature.
arXiv Detail & Related papers (2020-04-29T01:28:32Z)
- Flexible Transmitter Network [84.90891046882213]
Current neural networks are mostly built upon the MP model, which usually formulates the neuron as executing an activation function on the real-valued weighted aggregation of signals received from other neurons.
We propose the Flexible Transmitter (FT) model, a novel bio-plausible neuron model with flexible synaptic plasticity.
We present the Flexible Transmitter Network (FTNet), which is built on the most common fully-connected feed-forward architecture.
arXiv Detail & Related papers (2020-04-08T06:55:12Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.