Dynamic Training of Liquid State Machines
- URL: http://arxiv.org/abs/2302.03506v2
- Date: Sun, 10 Sep 2023 00:31:19 GMT
- Title: Dynamic Training of Liquid State Machines
- Authors: Pavithra Koralalage, Ireoluwa Fakeye, Pedro Machado, Jason Smith,
Isibor Kennedy Ihianle, Salisu Wada Yahaya, Andreas Oikonomou, Ahmad Lotfi
- Abstract summary: Spiking Neural Networks (SNNs) have emerged as a promising solution in the field of Artificial Neural Networks (ANNs).
This research aimed to optimise the training process of Liquid State Machines (LSMs), a recurrent architecture of SNNs.
The experimental results showed that by using spike metrics and a range of weights, the difference between the desired and the actual output of spiking neurons could be effectively minimised.
- Score: 2.622806745192486
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spiking Neural Networks (SNNs) have emerged as a promising solution in the
field of Artificial Neural Networks (ANNs), attracting the attention of
researchers due to their ability to mimic the human brain and process complex
information with remarkable speed and accuracy. This research aimed to
optimise the training process of Liquid State Machines (LSMs), a recurrent
architecture of SNNs, by identifying the most effective range of weights to
assign in an SNN so as to achieve the least difference between the desired and
the actual output. The experimental results showed that by using spike metrics
and a range of weights, the difference between the desired and the actual
output of spiking neurons could be effectively minimised, leading to improved
performance of SNNs. The results were tested and confirmed using three
different weight initialisation approaches, with the best results obtained
using the Barabási-Albert random graph method.
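
As a rough illustration of the approach the abstract describes, the sketch below initialises a reservoir's connectivity from a Barabási-Albert random graph and scores a candidate weight range with a simple spike-count metric. The function names, the weight range, and the choice of metric are illustrative assumptions; the paper does not publish this code.

```python
import numpy as np
import networkx as nx  # provides nx.barabasi_albert_graph

def ba_reservoir_weights(n_neurons=100, m=3, w_low=0.1, w_high=0.5, seed=0):
    """Hypothetical reservoir initialisation: connectivity from a
    Barabasi-Albert random graph, weights drawn from a candidate range."""
    rng = np.random.default_rng(seed)
    graph = nx.barabasi_albert_graph(n_neurons, m, seed=seed)
    weights = np.zeros((n_neurons, n_neurons))
    for i, j in graph.edges:
        weights[i, j] = rng.uniform(w_low, w_high)  # synapse i -> j
        weights[j, i] = rng.uniform(w_low, w_high)  # and the reverse direction
    return weights

def spike_count_distance(desired, actual):
    """One possible spike metric: mean absolute difference between the
    per-neuron spike counts of two binary spike rasters (neurons x timesteps)."""
    return np.abs(desired.sum(axis=1) - actual.sum(axis=1)).mean()
```

Sweeping (w_low, w_high) over a grid and keeping the range with the smallest spike-metric value mirrors the weight-range search the abstract describes.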
Related papers
- Deep-Unrolling Multidimensional Harmonic Retrieval Algorithms on Neuromorphic Hardware [78.17783007774295]
This paper explores the potential of conversion-based neuromorphic algorithms for highly accurate and energy-efficient single-snapshot multidimensional harmonic retrieval.
A novel method for converting the complex-valued convolutional layers and activations into spiking neural networks (SNNs) is developed.
The converted SNNs achieve almost five-fold better power efficiency than the original CNNs, at a moderate loss in performance.
arXiv Detail & Related papers (2024-12-05T09:41:33Z)
- Scalable Mechanistic Neural Networks [52.28945097811129]
We propose an enhanced neural network framework designed for scientific machine learning applications involving long temporal sequences.
By reformulating the original Mechanistic Neural Network (MNN), we reduce the computational time and space complexities from cubic and quadratic in the sequence length, respectively, to linear.
Extensive experiments demonstrate that the resulting Scalable MNN (S-MNN) matches the original MNN in precision while substantially reducing computational resources.
arXiv Detail & Related papers (2024-10-08T14:27:28Z)
- BKDSNN: Enhancing the Performance of Learning-based Spiking Neural Networks Training with Blurred Knowledge Distillation [20.34272550256856]
Spiking neural networks (SNNs) mimic biological neural systems to convey information via discrete spikes.
Our work achieves state-of-the-art performance for training SNNs on both static and neuromorphic datasets.
arXiv Detail & Related papers (2024-07-12T08:17:24Z)
- Benchmarking Spiking Neural Network Learning Methods with Varying Locality [2.323924801314763]
Spiking Neural Networks (SNNs) provide more realistic neuronal dynamics.
Information is processed as spikes within SNNs in an event-based mechanism.
We show that training SNNs is challenging due to the non-differentiable nature of the spiking mechanism.
arXiv Detail & Related papers (2024-02-01T19:57:08Z)
- Artificial to Spiking Neural Networks Conversion for Scientific Machine Learning [24.799635365988905]
We introduce a method to convert Physics-Informed Neural Networks (PINNs) to Spiking Neural Networks (SNNs).
SNNs are expected to have higher energy efficiency compared to traditional Artificial Neural Networks (ANNs).
arXiv Detail & Related papers (2023-08-31T00:21:27Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- Skip Connections in Spiking Neural Networks: An Analysis of Their Effect on Network Training [0.8602553195689513]
Spiking neural networks (SNNs) have gained attention as a promising alternative to traditional artificial neural networks (ANNs).
In this paper, we study the impact of skip connections on SNNs and propose a hyperparameter optimization technique that adapts models from ANNs to SNNs.
We demonstrate that optimizing the position, type, and number of skip connections can significantly improve the accuracy and efficiency of SNNs.
arXiv Detail & Related papers (2023-03-23T07:57:32Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch (see the usage sketch after this list).
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- On the Intrinsic Structures of Spiking Neural Networks [66.57589494713515]
Recent years have seen a surge of interest in SNNs owing to their remarkable potential to handle time-dependent and event-driven data.
There has been a dearth of comprehensive studies examining the impact of intrinsic structures within spiking computations.
This work delves into the intrinsic structures of SNNs, elucidating their influence on the expressivity of SNNs.
arXiv Detail & Related papers (2022-06-21T09:42:30Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs); a generic interval-propagation sketch appears after this list.
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Energy-efficient Knowledge Distillation for Spiking Neural Networks [23.16389219900427]
Spiking neural networks (SNNs) have been gaining interest as energy-efficient alternatives to conventional artificial neural networks (ANNs).
We analyze the performance of the distilled SNN model in terms of accuracy and energy efficiency.
We propose a novel knowledge distillation method with heterogeneous temperature parameters to achieve energy efficiency (a generic temperature-scaled distillation sketch appears after this list).
arXiv Detail & Related papers (2021-06-14T05:42:05Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power, event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
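
For the snnTorch entry above: a minimal sketch of simulating one leaky integrate-and-fire layer with snnTorch's public API. The layer sizes, beta value, and step count are arbitrary assumptions, and nothing here is IPU-specific.

```python
import torch
import snntorch as snn

fc = torch.nn.Linear(784, 100)   # input projection (sizes are arbitrary)
lif = snn.Leaky(beta=0.9)        # leaky integrate-and-fire neuron layer

x = torch.rand(25, 784)          # 25 timesteps of input (assumed encoding)
mem = lif.init_leaky()           # initialise membrane potential
spikes = []
for step in range(x.size(0)):
    cur = fc(x[step])            # synaptic current at this timestep
    spk, mem = lif(cur, mem)     # emit spikes, update membrane state
    spikes.append(spk)
out = torch.stack(spikes)        # (timesteps, neurons) spike raster
```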
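For the interval-reachability entry: a generic sketch of propagating an axis-aligned input interval through one affine layer, the basic step behind interval bound propagation. This is textbook interval arithmetic under assumed tensor shapes, not the paper's own reachability method.

```python
import torch

def affine_interval(W, b, lower, upper):
    """Bound y = W @ x + b over all x with lower <= x <= upper (elementwise).
    Uses the centre/radius form: |W| maps the radius vector."""
    center = (upper + lower) / 2
    radius = (upper - lower) / 2
    y_center = W @ center + b
    y_radius = W.abs() @ radius
    return y_center - y_radius, y_center + y_radius

# Example: bound a 2-D box through a random layer.
W, b = torch.randn(3, 2), torch.randn(3)
lo, hi = affine_interval(W, b, torch.tensor([-1., -1.]), torch.tensor([1., 1.]))
```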
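For the knowledge-distillation entry: a sketch of the standard temperature-scaled distillation loss that such methods build on. The paper's heterogeneous-temperature scheme is not reproduced here; T is a single assumed value.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Hinton-style KD: KL divergence between temperature-softened
    distributions, rescaled by T^2 to keep gradient magnitudes stable."""
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
```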