A 28-nm Convolutional Neuromorphic Processor Enabling Online Learning
with Spike-Based Retinas
- URL: http://arxiv.org/abs/2005.06318v1
- Date: Wed, 13 May 2020 13:47:44 GMT
- Authors: Charlotte Frenkel, Jean-Didier Legat, David Bol
- Abstract summary: We propose SPOON, a 28-nm event-driven CNN (eCNN) for adaptive edge computing and vision applications.
It embeds online learning with only 16.8% power and 11.8% area overheads using the biologically plausible direct random target projection (DRTP) algorithm.
With an energy per classification of 313 nJ at 0.6 V and a 0.32-mm$^2$ area for accuracies of 95.3% (on-chip training) and 97.5% (off-chip training) on MNIST, we demonstrate that SPOON reaches the efficiency of conventional machine learning accelerators.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In an attempt to follow biological information representation and
organization principles, the field of neuromorphic engineering is usually
approached bottom-up, from the biophysical models to large-scale integration in
silico. While ideal as experimentation platforms for cognitive computing and
neuroscience, bottom-up neuromorphic processors have yet to demonstrate an
efficiency advantage compared to specialized neural network accelerators for
real-world problems. Top-down approaches aim at answering this difficulty by
(i) starting from the applicative problem and (ii) investigating how to make
the associated algorithms hardware-efficient and biologically-plausible. In
order to leverage the data sparsity of spike-based neuromorphic retinas for
adaptive edge computing and vision applications, we follow a top-down approach
and propose SPOON, a 28-nm event-driven CNN (eCNN). It embeds online learning
with only 16.8% power and 11.8% area overheads using the
biologically plausible direct random target projection (DRTP) algorithm. With
an energy per classification of 313 nJ at 0.6 V and a 0.32-mm$^2$ area for
accuracies of 95.3% (on-chip training) and 97.5% (off-chip training) on MNIST,
we demonstrate that SPOON reaches the efficiency of conventional machine
learning accelerators while embedding on-chip learning and being compatible
with event-based sensors, a point that we further emphasize with N-MNIST
benchmarking.
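The DRTP rule that SPOON embeds for online learning can be sketched in a few lines. The sketch below is a minimal, dense (non-spiking) NumPy illustration under assumed 784-100-10 layer sizes and sigmoid units; it is not the paper's eCNN implementation. The key idea is that each hidden layer is trained with a fixed random projection of the one-hot target instead of a backpropagated error, so no backward pass through the output weights is required.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes for illustration (784-100-10, MNIST-like);
# SPOON's actual eCNN topology is not reproduced here.
n_in, n_hid, n_out = 784, 100, 10
W1 = rng.normal(0.0, 0.05, (n_hid, n_in))   # input -> hidden weights
W2 = rng.normal(0.0, 0.05, (n_out, n_hid))  # hidden -> output weights
B1 = rng.normal(0.0, 0.05, (n_hid, n_out))  # fixed random target projection

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def drtp_step(W1, W2, x, y_star, lr=0.05):
    """One DRTP update: the hidden layer learns from a fixed random
    projection of the one-hot target y_star, so no error is
    backpropagated through W2 (no weight transport, no update locking)."""
    h = sigmoid(W1 @ x)   # hidden activations
    y = sigmoid(W2 @ h)   # per-class output activations

    # Output layer: local delta rule on the output error
    W2 = W2 - lr * np.outer(y - y_star, h)

    # Hidden layer: modulatory signal B1 @ y_star, gated by the
    # local sigmoid derivative h * (1 - h)
    delta1 = (B1 @ y_star) * h * (1.0 - h)
    W1 = W1 - lr * np.outer(delta1, x)
    return W1, W2, y

# Toy usage: repeatedly present one random input labeled as class 3
x = rng.random(n_in)
y_star = np.zeros(n_out)
y_star[3] = 1.0
for _ in range(20):
    W1, W2, y = drtp_step(W1, W2, x, y_star)
```

Because both updates are local and forward-only, a rule of this shape maps naturally onto on-chip learning, which is consistent with the modest power and area overheads the abstract reports.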
Related papers
- Improving physics-informed neural network extrapolation via transfer learning and adaptive activation functions [44.44497277876625]
Physics-Informed Neural Networks (PINNs) are deep learning models that incorporate the governing physical laws of a system into the learning process. We introduce a transfer learning (TL) method to improve the extrapolation capability of PINNs. We demonstrate that our method achieves an average 40% reduction in relative L2 error and an average 50% reduction in mean absolute error.
arXiv Detail & Related papers (2025-07-16T22:19:53Z) - Embedded FPGA Acceleration of Brain-Like Neural Networks: Online Learning to Scalable Inference [0.0]
We present the first embedded FPGA accelerator for BCPNN on a Zynq UltraScale+ system using High-Level Synthesis. Our accelerator achieves up to a 17.5x latency reduction and 94% energy savings over ARM baselines, without sacrificing accuracy. This work enables practical neuromorphic computing on edge devices, bridging the gap between brain-like learning and real-world deployment.
arXiv Detail & Related papers (2025-06-23T11:35:20Z) - Enabling Efficient Processing of Spiking Neural Networks with On-Chip Learning on Commodity Neuromorphic Processors for Edge AI Systems [5.343921650701002]
Spiking neural network (SNN) algorithms on neuromorphic processors offer ultra-low power/energy AI computation.
We propose a design methodology to enable efficient SNN processing on commodity neuromorphic processors.
arXiv Detail & Related papers (2025-04-01T16:52:03Z) - Event-Driven Implementation of a Physical Reservoir Computing Framework for superficial EMG-based Gesture Recognition [2.222098162797332]
This paper explores a novel neuromorphic implementation approach for gesture recognition by extracting spiking information from surface electromyography (sEMG) data in an event-driven manner.
The network implements a simple-structured, hardware-friendly physical reservoir computing framework called the Rotating Neuron Reservoir (RNR) within the domain of spiking neural networks (SNNs).
The proposed system was validated on an open-access large-scale sEMG database and achieved average classification accuracies of 74.6% and 80.3%.
arXiv Detail & Related papers (2025-03-10T17:18:14Z) - Analog Spiking Neuron in CMOS 28 nm Towards Large-Scale Neuromorphic Processors [0.8426358786287627]
In this work, we present a low-power Leaky Integrate-and-Fire neuron design fabricated in TSMC's 28 nm CMOS technology.
The fabricated neuron consumes 1.61 fJ/spike and occupies an active area of 34 $\mu$m$^2$, achieving a maximum spiking frequency of 300 kHz at a 250-mV power supply.
arXiv Detail & Related papers (2024-08-14T17:51:20Z) - Topology Optimization of Random Memristors for Input-Aware Dynamic SNN [44.38472635536787]
We introduce pruning optimization for the input-aware dynamic memristive spiking neural network (PRIME).
Signal representation-wise, PRIME employs leaky integrate-and-fire neurons to emulate the brain's inherent spiking mechanism.
For reconfigurability, inspired by the brain's dynamic adjustment of computational depth, PRIME employs an input-aware dynamic early stop policy.
arXiv Detail & Related papers (2024-07-26T09:35:02Z) - SpikingJelly: An open-source machine learning infrastructure platform
for spike-based intelligence [51.6943465041708]
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency.
We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips.
arXiv Detail & Related papers (2023-10-25T13:15:17Z) - Evaluating Spiking Neural Network On Neuromorphic Platform For Human
Activity Recognition [2.710807780228189]
Energy efficiency and low latency are crucial requirements for wearable AI-empowered human activity recognition systems.
A spike-based workout recognition system can achieve accuracy comparable to a traditional neural network on the popular milliwatt RISC-V-based multi-core processor GAP8.
arXiv Detail & Related papers (2023-08-01T18:59:06Z) - ETLP: Event-based Three-factor Local Plasticity for online learning with
neuromorphic hardware [105.54048699217668]
We show competitive accuracy with a clear advantage in computational complexity for Event-Based Three-factor Local Plasticity (ETLP).
We also show that, when using local plasticity, threshold adaptation in spiking neurons and a recurrent topology are necessary to learn temporal patterns with a rich temporal structure.
arXiv Detail & Related papers (2023-01-19T19:45:42Z) - PC-SNN: Supervised Learning with Local Hebbian Synaptic Plasticity based
on Predictive Coding in Spiking Neural Networks [1.6172800007896282]
We propose a novel learning algorithm inspired by predictive coding theory.
We show that it can perform supervised learning fully autonomously and as successfully as backprop.
This method achieves a favorable performance compared to the state-of-the-art multi-layer SNNs.
arXiv Detail & Related papers (2022-11-24T09:56:02Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - BioGrad: Biologically Plausible Gradient-Based Learning for Spiking
Neural Networks [0.0]
Spiking neural networks (SNNs) are delivering energy-efficient, massively parallel, and low-latency solutions to AI problems.
To harness these computational benefits, SNNs need to be trained by learning algorithms that adhere to brain-inspired neuromorphic principles.
We propose a biologically plausible gradient-based learning algorithm for SNNs that is functionally equivalent to backprop.
arXiv Detail & Related papers (2021-10-27T00:07:25Z) - In-Hardware Learning of Multilayer Spiking Neural Networks on a
Neuromorphic Processor [6.816315761266531]
This work presents a spike-based backpropagation algorithm with biologically plausible local update rules and adapts it to fit the constraints of neuromorphic hardware.
The algorithm is implemented on the Intel Loihi chip, enabling low-power in-hardware supervised online learning of multilayer SNNs for mobile applications.
arXiv Detail & Related papers (2021-05-08T09:22:21Z) - Optimizing Memory Placement using Evolutionary Graph Reinforcement
Learning [56.83172249278467]
We introduce Evolutionary Graph Reinforcement Learning (EGRL), a method designed for large search spaces.
We train and validate our approach directly on the Intel NNP-I chip for inference.
We additionally achieve 28-78% speed-up compared to the native NNP-I compiler on all three workloads.
arXiv Detail & Related papers (2020-07-14T18:50:12Z) - Rectified Linear Postsynaptic Potential Function for Backpropagation in
Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.