hxtorch: PyTorch for BrainScaleS-2 -- Perceptrons on Analog Neuromorphic Hardware
- URL: http://arxiv.org/abs/2006.13138v3
- Date: Wed, 1 Jul 2020 08:17:31 GMT
- Title: hxtorch: PyTorch for BrainScaleS-2 -- Perceptrons on Analog Neuromorphic Hardware
- Authors: Philipp Spilger, Eric Müller, Arne Emmel, Aron Leibfried, Christian Mauch, Christian Pehle, Johannes Weis, Oliver Breitwieser, Sebastian Billaudelle, Sebastian Schmitt, Timo C. Wunderlich, Yannik Stradmann, Johannes Schemmel
- Abstract summary: We present software facilitating the usage of the BrainScaleS-2 analog neuromorphic hardware system as an inference accelerator.
We provide accelerator support for vector-matrix multiplications and convolutions and corresponding software-based autograd functionality.
As an application of the introduced framework, we present a model that classifies activities of daily living with smartphone sensor data.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present software facilitating the usage of the BrainScaleS-2 analog
neuromorphic hardware system as an inference accelerator for artificial neural
networks. The accelerator hardware is transparently integrated into the PyTorch
machine learning framework using its extension interface. In particular, we
provide accelerator support for vector-matrix multiplications and convolutions;
corresponding software-based autograd functionality is provided for
hardware-in-the-loop training. Automatic partitioning of neural networks onto
one or multiple accelerator chips is supported. We analyze implementation
runtime overhead during training as well as inference, provide measurements for
existing setups and evaluate the results in terms of the accelerator hardware
design limitations. As an application of the introduced framework, we present a
model that classifies activities of daily living with smartphone sensor data.
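The hardware-in-the-loop scheme described in the abstract maps naturally onto PyTorch's extension interface. The snippet below is a minimal, hypothetical sketch, not the actual hxtorch implementation: the analog accelerator is mocked by a noisy, saturating matrix multiply, and a custom torch.autograd.Function runs the forward pass on the (mocked) hardware while the backward pass uses the ideal linear model in software.

    import torch

    def mock_bss2_matmul(x, w):
        # Stand-in for the analog accelerator: the real BrainScaleS-2
        # multiply-accumulate is noisy and saturates; modeled crudely here.
        y = x @ w
        y = y + 0.05 * y.abs().mean() * torch.randn_like(y)  # analog noise
        return y.clamp(-128.0, 127.0)                        # saturation

    class HardwareMatmul(torch.autograd.Function):
        """Forward on (mocked) hardware, backward in software."""

        @staticmethod
        def forward(ctx, x, w):
            ctx.save_for_backward(x, w)
            return mock_bss2_matmul(x, w)

        @staticmethod
        def backward(ctx, grad_out):
            x, w = ctx.saved_tensors
            # Gradients of the ideal linear operation y = x @ w.
            return grad_out @ w.t(), x.t() @ grad_out

    # Hardware-in-the-loop training step for a single linear layer.
    torch.manual_seed(0)
    x = torch.randn(32, 64)
    target = torch.randn(32, 10)
    w = torch.randn(64, 10, requires_grad=True)
    opt = torch.optim.SGD([w], lr=1e-3)

    for step in range(100):
        opt.zero_grad()
        y = HardwareMatmul.apply(x, w)      # forward on "hardware"
        loss = torch.nn.functional.mse_loss(y, target)
        loss.backward()                     # backward in software
        opt.step()

hxtorch itself exposes hardware-backed counterparts to vector-matrix multiplication and convolution (per the abstract); the custom-Function pattern above is the standard PyTorch mechanism such an integration builds on.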
Related papers
- Neuromorphic Wireless Split Computing with Multi-Level Spikes [69.73249913506042]
In neuromorphic computing, spiking neural networks (SNNs) perform inference tasks, offering significant efficiency gains for workloads involving sequential data.
Recent advances in hardware and software have demonstrated that embedding a few bits of payload in each spike exchanged between the spiking neurons can further enhance inference accuracy.
This paper investigates a wireless neuromorphic split computing architecture employing multi-level SNNs.
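As a hedged illustration of the multi-level spike idea (the function name and quantization scheme below are assumptions for illustration, not taken from the paper): instead of binary events, each emitted spike carries a k-bit payload obtained by quantizing the suprathreshold amplitude.

    import torch

    def multilevel_spikes(v, threshold=1.0, bits=2):
        # Binary spiking: fires when the value v crosses threshold.
        fired = (v >= threshold).float()
        # Payload: quantize the suprathreshold amplitude to 2**bits levels,
        # so each spike carries `bits` bits instead of one.
        levels = 2 ** bits - 1
        payload = torch.clamp((v - threshold) / threshold, 0.0, 1.0)
        payload = torch.round(payload * levels) / levels
        return fired * (1.0 + payload)  # graded spike in [1, 2]

    v = torch.tensor([0.3, 1.0, 1.4, 2.5])
    print(multilevel_spikes(v))  # tensor([0.0000, 1.0000, 1.3333, 2.0000])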
arXiv Detail & Related papers (2024-11-07T14:08:35Z)
- Random resistive memory-based deep extreme point learning machine for unified visual processing [67.51600474104171]
We propose a novel hardware-software co-design: the random resistive memory-based deep extreme point learning machine (DEPLM).
Our co-design achieves substantial energy-efficiency improvements and training-cost reductions compared to conventional systems.
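Extreme-learning-machine-style models train only a readout on top of fixed random features; in the paper the random projection is realized physically by random resistive memory. Below is a minimal digital sketch of that principle (my reading of the underlying scheme, not the authors' code):

    import torch

    torch.manual_seed(0)
    n, d, h, classes = 512, 20, 256, 3
    X = torch.randn(n, d)
    y = torch.randint(0, classes, (n,))
    Y = torch.nn.functional.one_hot(y, classes).float()

    # Fixed random projection: in DEPLM this corresponds to the random
    # conductances of the resistive memory array; it is never trained.
    W_rand = torch.randn(d, h) / d ** 0.5
    H = torch.relu(X @ W_rand)  # random hidden features

    # Only the readout is learned, here in closed form via least squares.
    W_out = torch.linalg.lstsq(H, Y).solution
    acc = ((H @ W_out).argmax(1) == y).float().mean()
    print(f"train accuracy: {acc:.2f}")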
arXiv Detail & Related papers (2023-12-14T09:46:16Z)
- hxtorch.snn: Machine-learning-inspired Spiking Neural Network Modeling on BrainScaleS-2 [0.0]
hxtorch.snn is a machine learning-based modeling framework for the BrainScaleS-2 neuromorphic system.
hxtorch.snn enables the hardware-in-the-loop training of spiking neural networks within PyTorch.
We demonstrate the capabilities of hxtorch.snn on a classification task using the Yin-Yang dataset.
arXiv Detail & Related papers (2022-12-23T08:56:44Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
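snnTorch is an open PyTorch-based SNN library; the sketch below shows its basic leaky integrate-and-fire usage on CPU/GPU (assuming snntorch is installed; per the summary, the IPU-optimized release is a tuned build of the same package, and IPU-specific setup is omitted here):

    import torch
    import snntorch as snn

    lif = snn.Leaky(beta=0.9)          # leaky integrate-and-fire neuron
    mem = lif.init_leaky()             # initial membrane potential

    inputs = torch.rand(100, 1) * 0.5  # 100 time steps of input current
    spikes = []
    for cur in inputs:
        spk, mem = lif(cur, mem)       # integrate, fire, decay
        spikes.append(spk)
    print(f"{int(torch.stack(spikes).sum())} spikes over 100 steps")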
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- MAPLE-X: Latency Prediction with Explicit Microprocessor Prior Knowledge [87.41163540910854]
Deep neural network (DNN) latency characterization is a time-consuming process.
We propose MAPLE-X which extends MAPLE by incorporating explicit prior knowledge of hardware devices and DNN architecture latency.
arXiv Detail & Related papers (2022-05-25T11:08:20Z)
- A Scalable Approach to Modeling on Accelerated Neuromorphic Hardware [0.0]
This work presents the software aspects of the BrainScaleS-2 system, a hybrid accelerated neuromorphic hardware architecture based on physical modeling.
We introduce key aspects of the BrainScaleS-2 Operating System: experiment workflow, API layering, software design, and platform operation.
The focus lies on novel system and software features such as multi-compartmental neurons, fast re-configuration for hardware-in-the-loop training, applications for the embedded processors, the non-spiking operation mode, interactive platform access, and sustainable hardware/software co-development.
arXiv Detail & Related papers (2022-03-21T16:30:18Z)
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using around 40% of the available hardware resources in total.
It reduces classification time by three orders of magnitude, with a small 4.5% impact on accuracy, compared to its software full-precision counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- DFSynthesizer: Dataflow-based Synthesis of Spiking Neural Networks to Neuromorphic Hardware [4.273223677453178]
Spiking Neural Networks (SNN) are an emerging computation model, which uses event-driven activation and bio-inspired learning algorithms.
DFSynthesizer is an end-to-end framework for synthesizing SNN-based machine learning programs to neuromorphic hardware.
arXiv Detail & Related papers (2021-08-04T12:49:37Z)
- Surrogate gradients for analog neuromorphic computing [2.6475944316982942]
We show that learning self-corrects for device mismatch, resulting in competitive spiking network performance on vision and speech benchmarks.
Our work sets several new benchmarks for low-energy spiking network processing on analog neuromorphic hardware.
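Surrogate gradients replace the ill-defined derivative of the spike threshold with a smooth stand-in during the backward pass; the device mismatch mentioned above is then absorbed by training against the measured forward pass. A minimal PyTorch sketch of the surrogate itself (the SuperSpike-style fast-sigmoid surrogate is a common choice; the steepness constant 10.0 is an assumption, not a value from the paper):

    import torch

    class SpikeFn(torch.autograd.Function):
        @staticmethod
        def forward(ctx, v):
            ctx.save_for_backward(v)
            return (v >= 0.0).float()  # Heaviside step: spike if v >= threshold

        @staticmethod
        def backward(ctx, grad_out):
            (v,) = ctx.saved_tensors
            # SuperSpike-style surrogate: derivative of a fast sigmoid.
            surrogate = 1.0 / (1.0 + 10.0 * v.abs()) ** 2
            return grad_out * surrogate

    v = torch.randn(8, requires_grad=True)
    spk = SpikeFn.apply(v)
    spk.sum().backward()
    print(v.grad)  # nonzero everywhere, despite the step-function forward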
arXiv Detail & Related papers (2020-06-12T14:45:12Z)
- One-step regression and classification with crosspoint resistive memory arrays [62.997667081978825]
High-speed, low-energy computing machines are in demand to enable real-time artificial intelligence at the edge.
One-step learning is demonstrated in simulations of Boston housing-price prediction and the training of a two-layer neural network for MNIST digit recognition.
Results are all obtained in one computational step, thanks to the physical, parallel, and analog computing within the crosspoint array.
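In the crosspoint array, the circuit's feedback configuration physically settles to the least-squares solution in one step; a digital stand-in for that one-step solve is the closed-form pseudoinverse (a sketch under that interpretation, with synthetic data):

    import torch

    torch.manual_seed(0)
    X = torch.randn(100, 5)                  # input features
    w_true = torch.randn(5)
    y = X @ w_true + 0.01 * torch.randn(100)

    # "One-step" learning: the crosspoint circuit settles to the
    # least-squares weights in a single physical step; the digital
    # equivalent is the closed-form pseudoinverse solution.
    w = torch.linalg.pinv(X) @ y
    print(torch.allclose(w, w_true, atol=0.05))  # True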
arXiv Detail & Related papers (2020-05-05T08:00:07Z)
- DNN+NeuroSim V2.0: An End-to-End Benchmarking Framework for Compute-in-Memory Accelerators for On-chip Training [4.555081317066413]
NeuroSim is an integrated framework to benchmark compute-in-memory (CIM) accelerators for deep neural networks.
A Python wrapper is developed to interface NeuroSim with the popular machine learning framework PyTorch.
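The wrapper pattern described above, exposing a simulator backend as PyTorch modules, can be sketched as follows. This is a hypothetical illustration: cim_linear stands in for a simulator call and is not the real NeuroSim API.

    import torch

    def cim_linear(x, w, wbits=4):
        # Hypothetical stand-in for a compute-in-memory simulator call:
        # quantize weights to the cell precision, then do the MAC.
        scale = w.abs().max() / (2 ** (wbits - 1) - 1)
        w_q = torch.round(w / scale) * scale
        return x @ w_q.t()

    class CIMLinear(torch.nn.Linear):
        """nn.Linear whose forward is routed through the CIM simulator."""

        def forward(self, x):
            return cim_linear(x, self.weight) + self.bias

    layer = CIMLinear(64, 10)
    print(layer(torch.randn(2, 64)).shape)  # torch.Size([2, 10])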
arXiv Detail & Related papers (2020-03-13T20:20:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.