Hardware-Friendly Synaptic Orders and Timescales in Liquid State
Machines for Speech Classification
- URL: http://arxiv.org/abs/2104.14264v1
- Date: Thu, 29 Apr 2021 11:20:39 GMT
- Title: Hardware-Friendly Synaptic Orders and Timescales in Liquid State
Machines for Speech Classification
- Authors: Vivek Saraswat, Ajinkya Gorad, Anand Naik, Aakash Patil, Udayan
Ganguly
- Abstract summary: Liquid State Machines are brain-inspired spiking neural networks (SNNs) with random reservoir connectivity.
We analyze the role of synaptic orders, namely delta (high output for a single time step), 0th (rectangular with a finite pulse width), 1st (exponential fall), and 2nd order (exponential rise and fall), together with synaptic timescales.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Liquid State Machines are brain-inspired spiking neural networks (SNNs) with random reservoir connectivity and bio-mimetic neuronal and synaptic models. Reservoir computing networks have been proposed as an alternative to deep neural networks for solving temporal classification problems. Previous studies suggest that a 2nd order (double exponential) synaptic waveform is crucial for achieving high accuracy on TI-46 spoken digits recognition. Long time-range (ms) bio-mimetic synaptic waveforms, however, pose a challenge for compact and power-efficient neuromorphic hardware. In this work, we analyze the role of synaptic orders, namely δ (high output for a single time step), 0th (rectangular with a finite pulse width), 1st (exponential fall), and 2nd order (exponential rise and fall), and of synaptic timescales on the reservoir output response and on the TI-46 spoken digits classification accuracy under a more comprehensive parameter sweep. We find the optimal operating point to be correlated with an optimal range of spiking activity in the reservoir. Further, the proposed 0th order synapses perform on par with the biologically plausible 2nd order synapses. This is a substantial relaxation for circuit designers, as synapses are the most abundant components in an in-memory implementation of SNNs. The circuit benefits for both analog and mixed-signal realizations of the 0th order synapse are highlighted, demonstrating 2-3 orders of magnitude savings in area and power consumption by eliminating Op-Amp and Digital-to-Analog Converter circuits. This has major implications for a complete neural network implementation, with focus on peripheral limitations and algorithmic simplifications to overcome them.
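
For reference, the four synaptic orders named in the abstract correspond to standard unit-amplitude post-synaptic current kernels. The sketch below is a minimal NumPy illustration of these shapes, under the assumption of unit amplitude and a 1 ms time step; `tau_fall`, `tau_rise`, and `pulse_width` are illustrative placeholders, not values from the paper's sweep.

```python
import numpy as np

def synaptic_kernel(order: str, t: np.ndarray,
                    tau_fall: float = 8.0,     # ms, illustrative
                    tau_rise: float = 2.0,     # ms, illustrative
                    pulse_width: float = 4.0,  # ms, illustrative
                    dt: float = 1.0) -> np.ndarray:
    """Unit-amplitude post-synaptic kernels for the four synaptic orders.

    t is the time elapsed since the pre-synaptic spike (ms).
    """
    t = np.asarray(t, dtype=float)
    if order == "delta":
        # High output for a single simulation time step only.
        return np.where((t >= 0.0) & (t < dt), 1.0, 0.0)
    if order == "0th":
        # Rectangular pulse with a finite width.
        return np.where((t >= 0.0) & (t < pulse_width), 1.0, 0.0)
    if order == "1st":
        # Instantaneous rise, exponential fall.
        return np.where(t >= 0.0, np.exp(-t / tau_fall), 0.0)
    if order == "2nd":
        # Double exponential: exponential rise and fall,
        # normalized analytically so the peak value is 1.
        t_peak = (tau_fall * tau_rise / (tau_fall - tau_rise)
                  * np.log(tau_fall / tau_rise))
        peak = np.exp(-t_peak / tau_fall) - np.exp(-t_peak / tau_rise)
        k = np.exp(-t / tau_fall) - np.exp(-t / tau_rise)
        return np.where(t >= 0.0, k / peak, 0.0)
    raise ValueError(f"unknown synaptic order: {order!r}")

# Sample each kernel on a 1 ms grid.
t = np.arange(0.0, 32.0, 1.0)
for order in ("delta", "0th", "1st", "2nd"):
    print(f"{order:>5}:", np.round(synaptic_kernel(order, t)[:8], 3))
```

The hardware appeal of the 0th order kernel is visible directly in the sketch: a rectangular pulse only requires switching a fixed output on for a fixed duration, whereas the exponential shapes need analog waveform-shaping circuitry, which is the source of the Op-Amp and DAC savings highlighted in the abstract.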
Related papers
- Neuromorphic Wireless Split Computing with Multi-Level Spikes [69.73249913506042]
In neuromorphic computing, spiking neural networks (SNNs) perform inference tasks, offering significant efficiency gains for workloads involving sequential data.
Recent advances in hardware and software have demonstrated that embedding a few bits of payload in each spike exchanged between the spiking neurons can further enhance inference accuracy.
This paper investigates a wireless neuromorphic split computing architecture employing multi-level SNNs.
arXiv Detail & Related papers (2024-11-07T14:08:35Z)
- Hybrid Spiking Neural Networks for Low-Power Intra-Cortical Brain-Machine Interfaces [42.72938925647165]
Intra-cortical brain-machine interfaces (iBMIs) have the potential to dramatically improve the lives of people with paraplegia.
Current iBMIs suffer from scalability and mobility limitations due to bulky hardware and wiring.
We are investigating hybrid spiking neural networks for embedded neural decoding in wireless iBMIs.
arXiv Detail & Related papers (2024-09-06T17:48:44Z)
- Single Neuromorphic Memristor closely Emulates Multiple Synaptic Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate all these synaptic functions.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z)
- The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the aforementioned input-output relationship with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z)
- NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder based on hash coding is adopted to help the network capture high-frequency details.
arXiv Detail & Related papers (2022-09-29T04:06:00Z)
- POPPINS: A Population-Based Digital Spiking Neuromorphic Processor with Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in 180nm process technology with two hierarchical populations.
The proposed approach enables the development of biomimetic neuromorphic systems and various low-power, low-latency inference processing applications.
arXiv Detail & Related papers (2022-01-19T09:26:34Z)
- Spatiotemporal Spike-Pattern Selectivity in Single Mixed-Signal Neurons with Balanced Synapses [0.27998963147546135]
Mixed-signal neuromorphic processors could be used for inference and learning.
We show how inhomogeneous synaptic circuits could be utilized for resource-efficient implementation of network layers.
arXiv Detail & Related papers (2021-06-10T12:04:03Z)
- Inference with Artificial Neural Networks on Analog Neuromorphic Hardware [0.0]
The BrainScaleS-2 ASIC comprises mixed-signal neuron and synapse circuits.
The system can also operate in a vector-matrix multiplication and accumulation mode for artificial neural networks.
arXiv Detail & Related papers (2020-06-23T17:25:06Z)
- Surrogate gradients for analog neuromorphic computing [2.6475944316982942]
We show that learning self-corrects for device mismatch, resulting in competitive spiking network performance on vision and speech benchmarks.
Our work sets several new benchmarks for low-energy spiking network processing on analog neuromorphic hardware.
arXiv Detail & Related papers (2020-06-12T14:45:12Z)
- Flexible Transmitter Network [84.90891046882213]
Current neural networks are mostly built upon the MP (McCulloch-Pitts) model, which usually formulates the neuron as executing an activation function on the real-valued weighted aggregation of signals received from other neurons.
We propose the Flexible Transmitter (FT) model, a novel bio-plausible neuron model with flexible synaptic plasticity.
We present the Flexible Transmitter Network (FTNet), which is built on the most common fully-connected feed-forward architecture.
arXiv Detail & Related papers (2020-04-08T06:55:12Z)
- Structural plasticity on an accelerated analog neuromorphic hardware system [0.46180371154032884]
We present a strategy to achieve structural plasticity by constantly rewiring the pre- and postsynaptic partners.
We implemented this algorithm on the analog neuromorphic system BrainScaleS-2.
We evaluated our implementation in a simple supervised learning scenario, showing its ability to optimize the network topology.
arXiv Detail & Related papers (2019-12-27T10:15:58Z)