Hardware-Friendly Synaptic Orders and Timescales in Liquid State
Machines for Speech Classification
- URL: http://arxiv.org/abs/2104.14264v1
- Date: Thu, 29 Apr 2021 11:20:39 GMT
- Authors: Vivek Saraswat, Ajinkya Gorad, Anand Naik, Aakash Patil, Udayan
Ganguly
- Abstract summary: Liquid State Machines are brain-inspired spiking neural networks (SNNs) with random reservoir connectivity.
We analyze the role of synaptic orders, namely delta (high output for a single time step), 0th (rectangular with a finite pulse width), 1st (exponential fall), and 2nd order (exponential rise and fall).
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Liquid State Machines are brain-inspired spiking neural networks (SNNs) with
random reservoir connectivity and bio-mimetic neuronal and synaptic models.
Reservoir computing networks have been proposed as an alternative to deep neural
networks for solving temporal classification problems. Previous studies suggest
that a 2nd order (double exponential) synaptic waveform is crucial for achieving
high accuracy on TI-46 spoken digit recognition. Such long-timescale (ms)
bio-mimetic synaptic waveforms, however, pose a challenge to compact and
power-efficient neuromorphic hardware. In this work, we analyze the role of
synaptic orders, namely delta (high output for a single time step), 0th
(rectangular with a finite pulse width), 1st (exponential fall), and 2nd order
(exponential rise and fall), as well as synaptic timescales, on the reservoir
output response and on the TI-46 spoken digit classification accuracy under a
more comprehensive parameter sweep. We find that the optimal operating point is
correlated with an optimal range of spiking activity in the reservoir. Further,
the proposed 0th order synapses perform on par with the biologically plausible
2nd order synapses. This is a substantial relaxation for circuit designers, as
synapses are the most abundant components in an in-memory implementation of
SNNs. The circuit benefits for both analog and mixed-signal realizations of the
0th order synapse are highlighted, demonstrating 2-3 orders of magnitude savings
in area and power consumption by eliminating Op-Amps and Digital-to-Analog
Converter circuits. This has major implications for a complete neural network
implementation, with focus on peripheral limitations and algorithmic
simplifications to overcome them.
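The four synaptic orders compared in the abstract can be written down as discrete-time response kernels to a single input spike. The sketch below is illustrative only: the time step, pulse width, and rise/fall time constants are assumed values for plotting shape differences, not parameters taken from the paper.

```python
# Hypothetical sketch of the four synaptic orders as single-spike kernels.
# All timescale values below are assumptions chosen for illustration.
import numpy as np

def synaptic_kernel(order, t, tau_rise=2e-3, tau_fall=8e-3,
                    pulse_width=5e-3, dt=1e-4):
    """Response to one input spike arriving at t = 0, peak-normalized."""
    if order == "delta":        # high output for a single time step
        k = (t < dt).astype(float)
    elif order == "0th":        # rectangular pulse with a finite width
        k = (t < pulse_width).astype(float)
    elif order == "1st":        # instantaneous rise, exponential fall
        k = np.exp(-t / tau_fall)
    elif order == "2nd":        # double exponential: rise and fall
        k = np.exp(-t / tau_fall) - np.exp(-t / tau_rise)
    else:
        raise ValueError(f"unknown synaptic order: {order}")
    return k / k.max()          # normalize peak to 1 for comparison

t = np.arange(0.0, 50e-3, 1e-4)   # 50 ms window at 0.1 ms resolution
kernels = {o: synaptic_kernel(o, t)
           for o in ("delta", "0th", "1st", "2nd")}
```

The hardware argument in the abstract follows directly from these shapes: the delta and 0th order kernels are piecewise-constant and need only a switch and a timer, whereas the 1st and 2nd order kernels require analog exponential decay circuitry (Op-Amps, DACs) to realize in silicon.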
Related papers
- Integrating programmable plasticity in experiment descriptions for analog neuromorphic hardware [0.9217021281095907]
The BrainScaleS-2 neuromorphic architecture has been designed to support "hybrid" plasticity.
Observables that are expensive in numerical simulation, such as per-synapse correlation measurements, are implemented directly in the synapse circuits.
We introduce an integrated framework for describing spiking neural network experiments and plasticity rules in a unified high-level experiment description language.
arXiv Detail & Related papers (2024-12-04T08:46:06Z)
- NeoHebbian Synapses to Accelerate Online Training of Neuromorphic Hardware [0.03377254151446239]
A novel neoHebbian artificial synapse utilizing ReRAM devices has been proposed and experimentally validated.
System-level simulations, accounting for various device and system-level non-idealities, confirm that these synapses offer a robust solution for the fast, compact, and energy-efficient implementation of advanced learning rules in neuromorphic hardware.
arXiv Detail & Related papers (2024-11-27T12:06:15Z)
- Neuromorphic Wireless Split Computing with Multi-Level Spikes [69.73249913506042]
Neuromorphic computing uses spiking neural networks (SNNs) to perform inference tasks.
Embedding a small payload within each spike exchanged between spiking neurons can enhance inference accuracy without increasing energy consumption.
Split computing, where an SNN is partitioned across two devices, is a promising solution.
This paper presents the first comprehensive study of a neuromorphic wireless split computing architecture that employs multi-level SNNs.
arXiv Detail & Related papers (2024-11-07T14:08:35Z)
- Hybrid Spiking Neural Networks for Low-Power Intra-Cortical Brain-Machine Interfaces [42.72938925647165]
Intra-cortical brain-machine interfaces (iBMIs) have the potential to dramatically improve the lives of people with paraplegia.
Current iBMIs suffer from scalability and mobility limitations due to bulky hardware and wiring.
We are investigating hybrid spiking neural networks for embedded neural decoding in wireless iBMIs.
arXiv Detail & Related papers (2024-09-06T17:48:44Z)
- Single Neuromorphic Memristor closely Emulates Multiple Synaptic Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate all these synaptic functions.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z)
- The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the aforementioned input-output relationship with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z)
- NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder entailing hash coding is adopted to help the network capture high-frequency details.
arXiv Detail & Related papers (2022-09-29T04:06:00Z)
- POPPINS: A Population-Based Digital Spiking Neuromorphic Processor with Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in 180nm process technology with two hierarchy populations.
The proposed approach enables the development of biomimetic neuromorphic systems and various low-power, low-latency inference processing applications.
arXiv Detail & Related papers (2022-01-19T09:26:34Z)
- Spatiotemporal Spike-Pattern Selectivity in Single Mixed-Signal Neurons with Balanced Synapses [0.27998963147546135]
Mixed-signal neuromorphic processors could be used for inference and learning.
We show how inhomogeneous synaptic circuits could be utilized for resource-efficient implementation of network layers.
arXiv Detail & Related papers (2021-06-10T12:04:03Z)
- Inference with Artificial Neural Networks on Analog Neuromorphic Hardware [0.0]
The BrainScaleS-2 ASIC comprises mixed-signal neuron and synapse circuits.
The system can also operate in a vector-matrix multiplication and accumulation mode for artificial neural networks.
arXiv Detail & Related papers (2020-06-23T17:25:06Z)
- Flexible Transmitter Network [84.90891046882213]
Current neural networks are mostly built upon the MP model, which usually formulates the neuron as executing an activation function on the real-valued weighted aggregation of signals received from other neurons.
We propose the Flexible Transmitter (FT) model, a novel bio-plausible neuron model with flexible synaptic plasticity.
We present the Flexible Transmitter Network (FTNet), which is built on the most common fully-connected feed-forward architecture.
arXiv Detail & Related papers (2020-04-08T06:55:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences.