Natively neuromorphic LMU architecture for encoding-free SNN-based HAR on commercial edge devices
- URL: http://arxiv.org/abs/2407.04076v2
- Date: Wed, 24 Jul 2024 14:59:06 GMT
- Title: Natively neuromorphic LMU architecture for encoding-free SNN-based HAR on commercial edge devices
- Authors: Vittorio Fra, Benedetto Leto, Andrea Pignata, Enrico Macii, Gianvito Urgese
- Abstract summary: Neuromorphic models take inspiration from the human brain by adopting bio-plausible neuron models.
We present the L2MU, a native neuromorphic Legendre Memory Unit (LMU) which entirely relies on Leaky Integrate-and-Fire neurons.
We benchmarked our L2MU on smartwatch signals from hand-oriented activities, deploying it, also in compressed versions, on three different commercial edge devices.
- Score: 4.79664336929499
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neuromorphic models take inspiration from the human brain by adopting bio-plausible neuron models to build alternatives to traditional Machine Learning (ML) and Deep Learning (DL) solutions. The scarce availability of dedicated hardware able to actualize, rather than merely simulate, brain-inspired computation still hinders the wide adoption of neuromorphic computing for edge devices and embedded systems. With this premise, we adopt the perspective of neuromorphic computing on conventional hardware and present the L2MU, a natively neuromorphic Legendre Memory Unit (LMU) that relies entirely on Leaky Integrate-and-Fire (LIF) neurons. Specifically, the original recurrent architecture of the LMU has been redesigned by modelling every constituent element with neural populations made of LIF or Current-Based (CuBa) LIF neurons. To couple neuromorphic computing and off-the-shelf edge devices, we equipped the L2MU with an input module for the conversion of real values into spikes, which makes it an encoding-free implementation of a Recurrent Spiking Neural Network (RSNN) able to work directly with raw sensor signals on non-dedicated hardware. As a use case to validate our network, we selected the task of Human Activity Recognition (HAR). We benchmarked our L2MU on smartwatch signals from hand-oriented activities, deploying it, also in compressed versions, on three different commercial edge devices. The reported results highlight that neuromorphic models need not be tied exclusively to dedicated hardware and are also a suitable choice for working with common sensors and devices.
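The abstract describes two ingredients: an input module of spiking neurons that converts raw real-valued sensor samples into spike trains, and an LMU whose constituent elements are modelled with LIF or CuBa LIF populations. The sketch below (plain NumPy, not the authors' code) illustrates those two ideas in their simplest form: a current-based LIF population driven directly by a raw signal, and a forward-Euler discretization of the Legendre memory update driven by the population's filtered spike activity. All names, population sizes, time constants, the bias current, and the Euler discretization are assumptions made for illustration; the actual L2MU also replaces the hidden state and the connecting transformations with neural populations and is trained end to end.

```python
# Minimal sketch (not the authors' implementation) of the two ideas in the abstract:
# (1) a current-based LIF input population that turns raw real-valued samples into
#     spikes, and (2) a Legendre Memory Unit state update driven by the filtered
#     spike trains instead of the raw signal.
import numpy as np

def lmu_matrices(order, theta, dt):
    """LMU state-space matrices (Voelker et al.), discretized here with forward Euler."""
    A = np.zeros((order, order))
    B = np.zeros(order)
    for i in range(order):
        B[i] = (2 * i + 1) * (-1.0) ** i
        for j in range(order):
            A[i, j] = (2 * i + 1) * (-1.0 if i < j else (-1.0) ** (i - j + 1))
    return np.eye(order) + (dt / theta) * A, (dt / theta) * B

class CubaLIF:
    """Current-based LIF population: exponential synapse plus leaky membrane."""
    def __init__(self, size, tau_syn=5e-3, tau_mem=10e-3, v_th=1.0, dt=1e-3):
        self.i = np.zeros(size)                  # synaptic current
        self.v = np.zeros(size)                  # membrane potential
        self.a_syn = np.exp(-dt / tau_syn)       # synaptic decay per step
        self.a_mem = np.exp(-dt / tau_mem)       # membrane decay per step
        self.v_th = v_th

    def step(self, drive):
        self.i = self.a_syn * self.i + drive     # inject input current
        self.v = self.a_mem * self.v + self.i    # integrate on the membrane
        spikes = (self.v >= self.v_th).astype(float)
        self.v = np.where(spikes > 0, 0.0, self.v)  # reset to zero on spike
        return spikes

# Toy run on a synthetic 1-D "sensor" signal (assumed parameters throughout).
dt, theta, order = 1e-3, 0.25, 8
rng = np.random.default_rng(0)
t = np.arange(0, 1.0, dt)
signal = np.sin(2 * np.pi * 2.0 * t) + 0.1 * rng.standard_normal(t.size)

enc = CubaLIF(size=32, dt=dt)                    # hypothetical input population
w_in = rng.normal(0.0, 0.5, size=32)             # per-neuron input gains (assumed)
A_bar, B_bar = lmu_matrices(order, theta, dt)
m = np.zeros(order)                              # Legendre memory state
rate = np.zeros(32)                              # low-pass filtered spike trains

for u in signal:
    spikes = enc.step(w_in * u + 0.5)            # bias keeps neurons near threshold
    rate = 0.95 * rate + 0.05 * spikes           # crude decoding of population activity
    u_hat = rate.mean()                          # scalar drive derived from spikes
    m = A_bar @ m + B_bar * u_hat                # Legendre memory update

print("final memory state:", np.round(m, 3))
```

In this toy version the memory receives a crude rate-decoded scalar; in a fully neuromorphic design that read-out would itself be a learned projection onto further LIF populations rather than an explicit averaging step, which is precisely what distinguishes the L2MU from a conventional LMU preceded by a spike encoder.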
Related papers
- A Realistic Simulation Framework for Analog/Digital Neuromorphic Architectures [73.65190161312555]
ARCANA is a spiking neural network simulator designed to account for the properties of mixed-signal neuromorphic circuits.
We show how the results obtained provide a reliable estimate of the behavior of the spiking neural network trained in software.
arXiv Detail & Related papers (2024-09-23T11:16:46Z) - EvSegSNN: Neuromorphic Semantic Segmentation for Event Data [0.6138671548064356]
EvSegSNN is a biologically plausible encoder-decoder U-shaped architecture relying on Parametric Leaky Integrate-and-Fire neurons.
We introduce an end-to-end biologically inspired semantic segmentation approach by combining Spiking Neural Networks with event cameras.
Experiments conducted on DDD17 demonstrate that EvSegSNN outperforms the closest state-of-the-art model in terms of MIoU.
arXiv Detail & Related papers (2024-06-20T10:36:24Z) - An Integrated Toolbox for Creating Neuromorphic Edge Applications [3.671692919685993]
Spiking Neural Networks (SNNs) and neuromorphic models are more efficient and more biologically realistic than conventional deep neural networks.
CARLsim++ is an integrated toolbox that enables fast and easy creation of neuromorphic applications.
arXiv Detail & Related papers (2024-04-12T16:34:55Z) - Neuromorphic Intermediate Representation: A Unified Instruction Set for Interoperable Brain-Inspired Computing [4.066607775161713]
Spiking neural networks and neuromorphic hardware platforms that simulate neuronal dynamics are getting wide attention.
Here, we establish a common reference frame for computations in digital neuromorphic systems.
We demonstrate this by reproducing three spiking neural network models of different complexity across 7 neuromorphic simulators and 4 digital hardware platforms.
arXiv Detail & Related papers (2023-11-24T18:15:59Z) - Low Power Neuromorphic EMG Gesture Classification [3.8761525368152725]
Spiking Neural Networks (SNNs) are promising for low-power, real-time EMG gesture recognition.
We present a low-power, high-accuracy demonstration of EMG-signal-based gesture recognition using neuromorphic Recurrent Spiking Neural Networks (RSNNs).
Our network achieves state-of-the-art classification accuracy (90%) while using roughly 53% fewer neurons than the best reported result on the Roshambo EMG dataset.
arXiv Detail & Related papers (2022-06-04T22:09:34Z) - Neuromorphic Artificial Intelligence Systems [58.1806704582023]
Modern AI systems, based on von Neumann architecture and classical neural networks, have a number of fundamental limitations in comparison with the brain.
This article discusses such limitations and the ways they can be mitigated.
It presents an overview of currently available neuromorphic AI projects in which these limitations are overcome.
arXiv Detail & Related papers (2022-05-25T20:16:05Z) - Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z) - Mapping and Validating a Point Neuron Model on Intel's Neuromorphic Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth-generation neuromorphic chip, Loihi.
Loihi is based on the novel idea of Spiking Neural Networks (SNNs) emulating the neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z) - Flexible Transmitter Network [84.90891046882213]
Current neural networks are mostly built upon the MP (McCulloch-Pitts) model, which formulates the neuron as executing an activation function on the real-valued weighted aggregation of signals received from other neurons.
We propose the Flexible Transmitter (FT) model, a novel bio-plausible neuron model with flexible synaptic plasticity.
We present the Flexible Transmitter Network (FTNet), which is built on the most common fully-connected feed-forward architecture.
arXiv Detail & Related papers (2020-04-08T06:55:12Z) - Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)