RANC: Reconfigurable Architecture for Neuromorphic Computing
- URL: http://arxiv.org/abs/2011.00624v1
- Date: Sun, 1 Nov 2020 20:29:52 GMT
- Title: RANC: Reconfigurable Architecture for Neuromorphic Computing
- Authors: Joshua Mack, Ruben Purdy, Kris Rockowitz, Michael Inouye, Edward
Richter, Spencer Valancius, Nirmal Kumbhare, Md Sahil Hassan, Kaitlin Fair,
John Mixter, Ali Akoglu
- Abstract summary: We present RANC: a Reconfigurable Architecture for Neuromorphic Computing.
RANC enables rapid experimentation with neuromorphic architectures in both software via C++ simulation and hardware via FPGA emulation.
We show the utility of the RANC ecosystem by recreating the behavior of IBM's TrueNorth.
We demonstrate a neuromorphic architecture that scales to emulating 259K distinct neurons and 73.3M distinct synapses.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neuromorphic architectures have been introduced as platforms for energy
efficient spiking neural network execution. The massive parallelism offered by
these architectures has also triggered interest from non-machine learning
application domains. In order to lift the barriers to entry for hardware
designers and application developers we present RANC: a Reconfigurable
Architecture for Neuromorphic Computing, an open-source highly flexible
ecosystem that enables rapid experimentation with neuromorphic architectures in
both software via C++ simulation and hardware via FPGA emulation. We present
the utility of the RANC ecosystem by showing its ability to recreate the
behavior of IBM's TrueNorth, validated through direct comparison with IBM's
Compass simulation environment and published literature. RANC allows optimizing
architectures based on application insights as well as prototyping future
neuromorphic architectures that can support new classes of applications
entirely. We demonstrate the highly parameterized and configurable nature of
RANC by studying the impact of architectural changes on improving application
mapping efficiency with quantitative analysis based on the Alveo U250 FPGA. We
present post-routing resource usage and throughput analysis across
implementations of Synthetic Aperture Radar classification and Vector Matrix
Multiplication applications, and demonstrate a neuromorphic architecture that
scales to emulating 259K distinct neurons and 73.3M distinct synapses.
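To make the emulation target concrete: architectures in the TrueNorth family organize neurons into cores, where each core routes input spikes through a crossbar of synaptic weights into integrate-and-fire neurons. The following Python sketch steps one such core over discrete ticks; it is a minimal illustration of the computational model, not RANC's actual C++ implementation, and all function and parameter names are hypothetical.

```python
import numpy as np

def simulate_core(crossbar, leak, threshold, input_spikes, ticks):
    """Simulate one crossbar-based neuromorphic core for a number of ticks.

    crossbar:     (num_axons, num_neurons) synaptic weight matrix
    leak:         constant added to every membrane potential each tick
    threshold:    firing threshold; a neuron fires and resets on crossing it
    input_spikes: (ticks, num_axons) binary spike trains on the input axons
    Returns a (ticks, num_neurons) array of binary output spike trains.
    """
    num_axons, num_neurons = crossbar.shape
    potential = np.zeros(num_neurons)
    out = np.zeros((ticks, num_neurons), dtype=int)
    for t in range(ticks):
        # Integrate: each spiking axon adds its row of synaptic weights.
        potential += input_spikes[t] @ crossbar
        # Apply the leak, then fire-and-reset where the threshold is crossed.
        potential += leak
        fired = potential >= threshold
        out[t] = fired
        potential[fired] = 0.0
    return out
```

This also hints at how an application such as the abstract's Vector Matrix Multiplication can be mapped onto such hardware: rate-code each input value x in [0, 1] as a Bernoulli spike train with probability x per tick, and over many ticks each output neuron's firing rate approximates its column of x @ crossbar divided by the threshold.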
Related papers
- A Realistic Simulation Framework for Analog/Digital Neuromorphic Architectures [73.65190161312555]
ARCANA is a spiking neural network simulator designed to account for the properties of mixed-signal neuromorphic circuits.
We show how the results obtained provide a reliable estimate of the behavior of the spiking neural network trained in software.
arXiv Detail & Related papers (2024-09-23T11:16:46Z)
- GPU-RANC: A CUDA Accelerated Simulation Framework for Neuromorphic Architectures [1.3401966602181168]
We introduce the GPU-based implementation of Reconfigurable Architecture for Neuromorphic Computing (RANC)
We demonstrate up to a 780x speedup over the serial version of the RANC simulator on an MNIST inference application with 512 neuromorphic cores.
arXiv Detail & Related papers (2024-04-24T21:08:21Z)
- Mechanistic Design and Scaling of Hybrid Architectures [114.3129802943915]
We identify and test new hybrid architectures constructed from a variety of computational primitives.
We experimentally validate the resulting architectures via an extensive compute-optimal and a new state-optimal scaling law analysis.
We find MAD synthetics to correlate with compute-optimal perplexity, enabling accurate evaluation of new architectures.
arXiv Detail & Related papers (2024-03-26T16:33:12Z)
- AutoML for neuromorphic computing and application-driven co-design: asynchronous, massively parallel optimization of spiking architectures [3.8937756915387505]
We have extended AutoML inspired approaches to the exploration and optimization of neuromorphic architectures.
We are able to efficiently explore the configuration space of neuromorphic architectures and identify the subset of conditions leading to the highest performance.
arXiv Detail & Related papers (2023-02-26T02:26:45Z)
- BaLeNAS: Differentiable Architecture Search via the Bayesian Learning Rule [95.56873042777316]
Differentiable Architecture Search (DARTS) has received massive attention in recent years, mainly because it significantly reduces the computational cost.
This paper formulates the neural architecture search as a distribution learning problem through relaxing the architecture weights into Gaussian distributions.
We demonstrate how the differentiable NAS benefits from Bayesian principles, enhancing exploration and improving stability.
arXiv Detail & Related papers (2021-11-25T18:13:42Z)
- Multi-Exit Vision Transformer for Dynamic Inference [88.17413955380262]
We propose seven different architectures for early exit branches that can be used for dynamic inference in Vision Transformer backbones.
We show that each one of our proposed architectures could prove useful in the trade-off between accuracy and speed.
arXiv Detail & Related papers (2021-06-29T09:01:13Z)
- NeuroXplorer 1.0: An Extensible Framework for Architectural Exploration with Spiking Neural Networks [3.9121275263540087]
We present NeuroXplorer, a framework that is based on a generalized template for modeling a neuromorphic architecture.
NeuroXplorer can perform both low-level cycle-accurate architectural simulations and high-level analysis with data-flow abstractions.
We demonstrate the architectural exploration capabilities of NeuroXplorer through case studies with many state-of-the-art machine learning models.
arXiv Detail & Related papers (2021-05-04T23:31:11Z) - A Semi-Supervised Assessor of Neural Architectures [157.76189339451565]
We employ an auto-encoder to discover meaningful representations of neural architectures.
A graph convolutional neural network is introduced to predict the performance of architectures.
arXiv Detail & Related papers (2020-05-14T09:02:33Z) - Stage-Wise Neural Architecture Search [65.03109178056937]
Modern convolutional networks such as ResNet and NASNet have achieved state-of-the-art results in many computer vision applications.
These networks consist of stages, which are sets of layers that operate on representations in the same resolution.
It has been demonstrated that increasing the number of layers in each stage improves the prediction ability of the network.
However, the resulting architecture becomes computationally expensive in terms of floating point operations, memory requirements and inference time.
arXiv Detail & Related papers (2020-04-23T14:16:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.