NeuroXplorer 1.0: An Extensible Framework for Architectural Exploration
with Spiking Neural Networks
- URL: http://arxiv.org/abs/2105.01795v1
- Date: Tue, 4 May 2021 23:31:11 GMT
- Authors: Adarsha Balaji and Shihao Song and Twisha Titirsha and Anup Das and
Jeffrey Krichmar and Nikil Dutt and James Shackleford and Nagarajan Kandasamy
and Francky Catthoor
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, both industry and academia have proposed many different
neuromorphic architectures to execute applications that are designed with
Spiking Neural Networks (SNNs). Consequently, there is a growing need for an
extensible simulation framework that can perform architectural explorations
with SNNs, including both platform-based design of today's hardware, and
hardware-software co-design and design-technology co-optimization of the
future. We present NeuroXplorer, a fast and extensible framework that is based
on a generalized template for modeling a neuromorphic architecture that can be
infused with the specific details of a given hardware and/or technology.
NeuroXplorer can perform both low-level cycle-accurate architectural
simulations and high-level analysis with data-flow abstractions. NeuroXplorer's
optimization engine can incorporate hardware-oriented metrics such as energy,
throughput, and latency, as well as SNN-oriented metrics such as inter-spike
interval distortion and spike disorder, which directly impact SNN performance.
We demonstrate the architectural exploration capabilities of NeuroXplorer
through case studies with many state-of-the-art machine learning models.
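The abstract names inter-spike interval (ISI) distortion as one of the SNN-oriented metrics in NeuroXplorer's optimization engine. The paper's exact formulation is not given here; the sketch below illustrates one common way such a metric can be computed, as the mean absolute difference between corresponding ISIs of an ideal spike train and a hardware-perturbed one. The function names and the example spike trains are illustrative assumptions, not NeuroXplorer's API.

```python
# Hedged sketch of an inter-spike interval (ISI) distortion metric.
# Assumes both spike trains are sorted lists of spike times (e.g. in ms)
# carrying the same number of spikes; this is one plausible formulation,
# not NeuroXplorer's actual implementation.

def isi(spike_times):
    """Inter-spike intervals of a sorted list of spike times."""
    return [b - a for a, b in zip(spike_times, spike_times[1:])]

def isi_distortion(ideal, observed):
    """Mean absolute difference between corresponding ISIs of two trains."""
    d_ideal, d_obs = isi(ideal), isi(observed)
    if len(d_ideal) != len(d_obs):
        raise ValueError("spike trains must have equal spike counts")
    return sum(abs(x - y) for x, y in zip(d_ideal, d_obs)) / len(d_ideal)

ideal = [0.0, 1.0, 2.0, 3.0]      # regular 1 ms spacing
observed = [0.0, 1.2, 2.1, 3.4]   # the same spikes, jittered by hardware latency
print(isi_distortion(ideal, observed))
```

A hardware-aware mapping that lowers this distortion (and an analogous spike-order metric) would preserve the timing structure that SNN accuracy depends on, which is why such metrics can sit alongside energy, throughput, and latency in the optimization loop.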
Related papers
- A Realistic Simulation Framework for Analog/Digital Neuromorphic Architectures (arXiv, 2024-09-23)
  ARCANA is a spiking neural network simulator designed to account for the properties of mixed-signal neuromorphic circuits. Its results provide a reliable estimate of the behavior of a spiking neural network trained in software.
- GPU-RANC: A CUDA Accelerated Simulation Framework for Neuromorphic Architectures (arXiv, 2024-04-24)
  A GPU-based implementation of the Reconfigurable Architecture for Neuromorphic Computing (RANC), demonstrating up to a 780x speedup over the serial RANC simulator on a 512-neuromorphic-core MNIST inference application.
- Mechanistic Design and Scaling of Hybrid Architectures (arXiv, 2024-03-26)
  Identifies and tests new hybrid architectures constructed from a variety of computational primitives, validated via an extensive compute-optimal and a new state-optimal scaling-law analysis. MAD synthetics are found to correlate with compute-optimal perplexity, enabling accurate evaluation of new architectures.
- Neural Architecture Codesign for Fast Bragg Peak Analysis (arXiv, 2023-12-10)
  An automated pipeline that streamlines neural architecture codesign for fast, real-time Bragg peak analysis in microscopy. The method employs neural architecture search and AutoML, incorporating hardware costs, to discover more hardware-efficient neural architectures.
- SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence (arXiv, 2023-10-25)
  Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency. SpikingJelly is a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips.
- Frameworks for SNNs: a Review of Data Science-oriented Software and an Expansion of SpykeTorch (arXiv, 2023-02-15)
  Reviews 9 frameworks for the development of Spiking Neural Networks (SNNs) that are specifically oriented towards data science applications.
- SPAIC: A Spike-based Artificial Intelligence Computing Framework (arXiv, 2022-07-26)
  A Python-based spiking neural network (SNN) simulation and training framework that aims to support brain-inspired model and algorithm research, integrating features from both deep learning and neuroscience. Examples include neural circuits, deep SNN learning, and neuromorphic applications.
- CogNGen: Constructing the Kernel of a Hyperdimensional Predictive Processing Cognitive Architecture (arXiv, 2022-03-31)
  A new cognitive architecture that combines two neurobiologically plausible computational models, aiming for an architecture with the power of modern machine learning techniques.
- Sparse Flows: Pruning Continuous-depth Models (arXiv, 2021-06-24)
  Shows that pruning improves generalization for neural ODEs in generative modeling, and that pruning finds minimal and efficient neural ODE representations with up to 98% fewer parameters than the original network, without loss of accuracy.
- STONNE: A Detailed Architectural Simulator for Flexible Neural Network Accelerators (arXiv, 2020-06-10)
  A cycle-accurate, highly modular and highly extensible simulation framework, shown to closely approach the performance results of the publicly available BSV-coded MAERI implementation.
- A Semi-Supervised Assessor of Neural Architectures (arXiv, 2020-05-14)
  Employs an auto-encoder to discover meaningful representations of neural architectures, with a graph convolutional neural network introduced to predict architecture performance.