Adaptive Robotic Arm Control with a Spiking Recurrent Neural Network on a Digital Accelerator
- URL: http://arxiv.org/abs/2405.12849v2
- Date: Sun, 2 Jun 2024 18:57:00 GMT
- Title: Adaptive Robotic Arm Control with a Spiking Recurrent Neural Network on a Digital Accelerator
- Authors: Alejandro Linares-Barranco, Luciano Prono, Robert Lengenstein, Giacomo Indiveri, Charlotte Frenkel
- Abstract summary: We present an overview of the system and a Python framework to use it on a Pynq ZU platform.
We show how the simulated accuracy is preserved with a peak performance of 3.8M events processed per second.
- Score: 41.60361484397962
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rise of artificial intelligence, neural network simulations of biological neuron models are being explored to reduce the footprint of learning and inference in resource-constrained task scenarios. A mainstream type of such networks is the spiking neural network (SNN), based on simplified Integrate-and-Fire models, for which several hardware accelerators have emerged. Among them, the ReckOn chip was introduced as a recurrent SNN allowing both online training and execution of tasks based on arbitrary sensory modalities, demonstrated for vision, audition, and navigation. Since ReckOn is a fully digital and open-source chip, we adapted it for implementation on a Xilinx Multiprocessor System-on-Chip (MPSoC), facilitating its deployment in embedded systems and increasing the setup flexibility. We present an overview of the system and a Python framework to use it on a Pynq ZU platform. We validate the architecture and implementation in the new scenario of robotic arm control, and show that the simulated accuracy is preserved, with a peak performance of 3.8M events processed per second.
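To make the deployment flow concrete, the sketch below shows how an accelerator of this kind could be driven from Python on a Pynq ZU board. It is a minimal, hypothetical example built only on the standard PYNQ overlay and DMA API; the bitstream name (`reckon.bit`), the IP instance names, the event encoding, and the result register offset are assumptions for illustration, not the actual interface of the released framework.

```python
import numpy as np
from pynq import Overlay, allocate

# Load the FPGA bitstream wrapping the ReckOn-style SNN core
# (file name and IP names below are placeholders, not the real design).
overlay = Overlay("reckon.bit")
dma = overlay.axi_dma_0          # AXI DMA assumed to feed the event input FIFO
ctrl = overlay.reckon_ctrl_0     # hypothetical control/status register block

# Encode a spike train as 32-bit words: [timestamp | neuron address]
# (the real event format of the accelerator may differ).
events = [(10, 3), (12, 7), (15, 3)]
words = np.array([(t << 16) | addr for t, addr in events], dtype=np.uint32)

# DMA buffers must live in contiguous memory reachable by the programmable logic.
in_buf = allocate(shape=(len(words),), dtype=np.uint32)
in_buf[:] = words

# Stream the events into the accelerator and wait for completion.
dma.sendchannel.transfer(in_buf)
dma.sendchannel.wait()

# Read back the inferred class / motor command from a status register
# (0x10 is an assumed offset).
result = ctrl.read(0x10)
print("accelerator output:", result)

in_buf.freebuffer()
```

The pattern itself (load an overlay, stream events via DMA, poll a result register) is the usual PYNQ workflow; only the specific names and the register map would change with the actual released design.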
Related papers
- Spyx: A Library for Just-In-Time Compiled Optimization of Spiking Neural Networks [0.08965418284317034]
Spiking Neural Networks (SNNs) promise enhanced energy efficiency through a reduced, low-power hardware footprint.
This paper introduces Spyx, a new and lightweight SNN simulation and optimization library designed in JAX (a minimal LIF simulation sketch in this spirit appears after this list).
arXiv Detail & Related papers (2024-02-29T09:46:44Z)
- SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence [51.6943465041708]
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency.
We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips.
arXiv Detail & Related papers (2023-10-25T13:15:17Z)
- DYNAP-SE2: a scalable multi-core dynamic neuromorphic asynchronous spiking neural network processor [2.9175555050594975]
We present a brain-inspired platform for prototyping real-time event-based Spiking Neural Networks (SNNs).
The proposed system supports the direct emulation of dynamic and realistic neural processing phenomena such as short-term plasticity, NMDA gating, AMPA diffusion, homeostasis, spike frequency adaptation, conductance-based dendritic compartments, and spike transmission delays.
The flexibility to emulate different biologically plausible neural networks, and the chip's ability to monitor both population and single-neuron signals in real time, make it possible to develop and validate complex models of neural processing for both basic research and edge-computing applications.
arXiv Detail & Related papers (2023-10-01T03:48:16Z)
- Evolving Connectivity for Recurrent Spiking Neural Networks [8.80300633999542]
Recurrent spiking neural networks (RSNNs) hold great potential for advancing artificial general intelligence.
We propose the evolving connectivity (EC) framework, an inference-only method for training RSNNs.
arXiv Detail & Related papers (2023-05-28T07:08:25Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Fluid Batching: Exit-Aware Preemptive Serving of Early-Exit Neural Networks on Edge NPUs [74.83613252825754]
"smart ecosystems" are being formed where sensing happens concurrently rather than standalone.
This is shifting the on-device inference paradigm towards deploying neural processing units (NPUs) at the edge.
We propose a novel early-exit scheduling that allows preemption at run time to account for the dynamicity introduced by the arrival and exiting processes.
arXiv Detail & Related papers (2022-09-27T15:04:01Z)
- PulseDL-II: A System-on-Chip Neural Network Accelerator for Timing and Energy Extraction of Nuclear Detector Signals [3.307097167756987]
We introduce PulseDL-II, a system-on-chip (SoC) specially designed for extracting event features (time, energy, etc.) from detector pulses with deep learning.
The proposed system achieved 60 ps time resolution and 0.40% energy resolution with online neural network inference at a signal-to-noise ratio (SNR) of 47.4 dB.
arXiv Detail & Related papers (2022-09-02T08:52:21Z)
- Real-time Neural-MPC: Deep Learning Model Predictive Control for Quadrotors and Agile Robotic Platforms [59.03426963238452]
We present Real-time Neural MPC, a framework to efficiently integrate large, complex neural network architectures as dynamics models within a model-predictive control pipeline.
We show the feasibility of our framework on real-world problems by reducing the positional tracking error by up to 82% when compared to state-of-the-art MPC approaches without neural network dynamics.
arXiv Detail & Related papers (2022-03-15T09:38:15Z)
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using in total around 40% of the available hardware resources.
It reduces the classification time by three orders of magnitude, with a small 4.5% impact on accuracy, compared to its full-precision software counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- CondenseNeXt: An Ultra-Efficient Deep Neural Network for Embedded Systems [0.0]
A Convolutional Neural Network (CNN) is a class of Deep Neural Network (DNN) widely used in the analysis of visual images captured by an image sensor.
In this paper, we propose a new variant of deep convolutional neural network architecture to improve the performance of existing CNN architectures for real-time inference on embedded systems.
arXiv Detail & Related papers (2021-12-01T18:20:52Z)
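As referenced in the Spyx entry above, the following is a minimal sketch of the kind of computation these SNN simulation libraries accelerate: a recurrent leaky integrate-and-fire (LIF) layer unrolled over time with `jax.lax.scan`. It uses plain JAX only; the layer sizes, leak factor, and threshold are illustrative values and the code does not reproduce Spyx's actual API.

```python
import jax
import jax.numpy as jnp

def lif_step(state, x, w_in, w_rec, alpha=0.9, threshold=1.0):
    """One discrete-time step of a recurrent LIF layer.

    state: (membrane potential v, previous output spikes s)
    x:     input spike vector for this time step
    """
    v, s = state
    # Leaky integration of feed-forward and recurrent input current.
    v = alpha * v + x @ w_in + s @ w_rec
    # Fire when the membrane potential crosses the threshold...
    s = (v >= threshold).astype(jnp.float32)
    # ...and reset the potential of the neurons that fired.
    v = v - s * threshold
    return (v, s), s

def run_lif(spikes_in, w_in, w_rec):
    """Unroll the LIF layer over a (T, n_in) input spike train."""
    n_hidden = w_rec.shape[0]
    init = (jnp.zeros(n_hidden), jnp.zeros(n_hidden))
    step = lambda state, x: lif_step(state, x, w_in, w_rec)
    _, spikes_out = jax.lax.scan(step, init, spikes_in)
    return spikes_out  # (T, n_hidden) output spike raster

# Toy dimensions: 20 time steps, 8 input channels, 16 hidden neurons.
key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
spikes_in = (jax.random.uniform(k1, (20, 8)) < 0.2).astype(jnp.float32)
w_in = 0.5 * jax.random.normal(k2, (8, 16))
w_rec = 0.1 * jax.random.normal(k3, (16, 16))

print(run_lif(spikes_in, w_in, w_rec).sum(axis=0))  # spike counts per neuron
```

Because the whole rollout is expressed as a `scan`, it can be JIT-compiled and differentiated end to end, which is the property such JAX-based SNN libraries build on.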