Neuromorphic Intermediate Representation: A Unified Instruction Set for
Interoperable Brain-Inspired Computing
- URL: http://arxiv.org/abs/2311.14641v1
- Date: Fri, 24 Nov 2023 18:15:59 GMT
- Title: Neuromorphic Intermediate Representation: A Unified Instruction Set for
Interoperable Brain-Inspired Computing
- Authors: Jens E. Pedersen, Steven Abreu, Matthias Jobst, Gregor Lenz, Vittorio
Fra, Felix C. Bauer, Dylan R. Muir, Peng Zhou, Bernhard Vogginger, Kade
Heckel, Gianvito Urgese, Sadasivan Shankar, Terrence C. Stewart, Jason K.
Eshraghian, Sadique Sheik
- Abstract summary: We establish a common reference frame for computations in neuromorphic systems, dubbed the Neuromorphic Intermediate Representation (NIR).
NIR defines a set of computational primitives as idealized continuous-time hybrid systems that can be composed into graphs and mapped to and from various neuromorphic technology stacks.
We reproduce three NIR graphs across 7 neuromorphic simulators and 4 hardware platforms, demonstrating support for an unprecedented number of neuromorphic systems.
- Score: 4.150486998330532
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spiking neural networks and neuromorphic hardware platforms that emulate
neural dynamics are slowly gaining momentum and entering mainstream usage.
Despite a well-established mathematical foundation for neural dynamics, the
implementation details vary greatly across different platforms.
Correspondingly, there are a plethora of software and hardware implementations
with their own unique technology stacks. Consequently, neuromorphic systems
typically diverge from the expected computational model, which challenges the
reproducibility and reliability across platforms. Additionally, access to most
neuromorphic hardware is limited to a single software framework with a limited
set of training procedures. Here, we establish a common reference frame for
computations in neuromorphic systems, dubbed the Neuromorphic Intermediate
Representation (NIR). NIR defines a set of
computational primitives as idealized continuous-time hybrid systems that can
be composed into graphs and mapped to and from various neuromorphic technology
stacks. By abstracting away assumptions around discretization and hardware
constraints, NIR faithfully captures the fundamental computation, while
simultaneously exposing the exact differences between the evaluated
implementation and the idealized mathematical formalism. We reproduce three NIR
graphs across 7 neuromorphic simulators and 4 hardware platforms, demonstrating
support for an unprecedented number of neuromorphic systems. With NIR, we
decouple the evolution of neuromorphic hardware and software, ultimately
increasing the interoperability between platforms and improving accessibility
to neuromorphic technologies. We believe that NIR is an important step towards
the continued study of brain-inspired hardware and bottom-up approaches aimed
at an improved understanding of the computational underpinnings of nervous
systems.
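The abstract describes NIR primitives as idealized continuous-time hybrid systems: continuous dynamics punctuated by discrete spike events. As a minimal illustration of that idea (not the NIR library's actual API), the sketch below simulates a leaky integrate-and-fire neuron, a canonical such primitive, with a forward-Euler discretization; the parameter names and time step are illustrative assumptions.

```python
import numpy as np

def simulate_lif(input_current, dt=1e-4, tau=1e-2, r=1.0,
                 v_leak=0.0, v_threshold=1.0):
    """Forward-Euler simulation of an idealized LIF hybrid system:

        tau * dv/dt = (v_leak - v) + r * I(t)   (continuous dynamics)

    with a discrete event (spike) and hard reset to v_leak whenever
    v crosses v_threshold. All parameter values are illustrative.
    """
    v = v_leak
    spikes = []
    for i_t in input_current:
        # Continuous part, discretized with step dt.
        v += (dt / tau) * ((v_leak - v) + r * i_t)
        # Discrete part: threshold crossing emits a spike and resets.
        if v >= v_threshold:
            spikes.append(1)
            v = v_leak
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant suprathreshold input drives a regular spike train.
spikes = simulate_lif(np.full(1000, 2.0))
print(spikes.sum())
```

The point of NIR is that such a primitive is specified once, in continuous time, and each backend chooses its own discretization; the explicit `dt` above is exactly the kind of implementation detail NIR abstracts away.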
Related papers
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence [51.6943465041708]
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency.
We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips.
arXiv Detail & Related papers (2023-10-25T13:15:17Z)
- NeuroBench: A Framework for Benchmarking Neuromorphic Computing Algorithms and Systems [51.8066436083197]
NeuroBench is a benchmark framework for neuromorphic computing algorithms and systems.
NeuroBench is a collaboratively-designed effort from an open community of nearly 100 co-authors across over 50 institutions in industry and academia.
arXiv Detail & Related papers (2023-04-10T15:12:09Z)
- Integration of Neuromorphic AI in Event-Driven Distributed Digitized Systems: Concepts and Research Directions [0.2746383075956081]
We describe the current landscape of neuromorphic computing, focusing on characteristics that pose integration challenges.
We propose a microservice-based framework for neuromorphic systems integration, consisting of a neuromorphic-system proxy.
We also present concepts that could serve as a basis for the realization of this framework.
arXiv Detail & Related papers (2022-10-20T12:09:29Z)
- Neuromorphic Artificial Intelligence Systems [58.1806704582023]
Modern AI systems, based on von Neumann architecture and classical neural networks, have a number of fundamental limitations in comparison with the brain.
This article discusses such limitations and the ways they can be mitigated.
It presents an overview of currently available neuromorphic AI projects in which these limitations are overcome.
arXiv Detail & Related papers (2022-05-25T20:16:05Z)
- The BrainScaleS-2 accelerated neuromorphic system with hybrid plasticity [0.0]
We describe the second generation of the BrainScaleS neuromorphic architecture, emphasizing applications enabled by this architecture.
It combines a custom accelerator core supporting the accelerated physical emulation of bio-inspired spiking neural network primitives with a tightly coupled digital processor and a digital event-routing network.
arXiv Detail & Related papers (2022-01-26T17:13:46Z)
- POPPINS: A Population-Based Digital Spiking Neuromorphic Processor with Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in 180nm process technology with two hierarchy populations.
The proposed approach enables the developments of biomimetic neuromorphic system and various low-power, and low-latency inference processing applications.
arXiv Detail & Related papers (2022-01-19T09:26:34Z)
- Mapping and Validating a Point Neuron Model on Intel's Neuromorphic Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth-generation neuromorphic chip, Loihi.
Loihi is based on the novel idea of Spiking Neural Networks (SNNs) emulating the neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z)
- The Backpropagation Algorithm Implemented on Spiking Neuromorphic Hardware [4.3310896118860445]
We present a neuromorphic, spiking backpropagation algorithm based on pulse-gated dynamical information coordination and processing.
We demonstrate a proof-of-principle three-layer circuit that learns to classify digits from the MNIST dataset.
arXiv Detail & Related papers (2021-06-13T15:56:40Z)
- Bottom-up and top-down approaches for the design of neuromorphic processing systems: Tradeoffs and synergies between natural and artificial intelligence [3.874729481138221]
While Moore's law has driven exponential computing power expectations, its nearing end calls for new avenues for improving the overall system performance.
One of these avenues is the exploration of alternative brain-inspired computing architectures that aim at achieving the flexibility and computational efficiency of biological neural processing systems.
We provide a comprehensive overview of the field, highlighting the different levels of granularity at which this paradigm shift is realized.
arXiv Detail & Related papers (2021-06-02T16:51:45Z)
- Structural plasticity on an accelerated analog neuromorphic hardware system [0.46180371154032884]
We present a strategy to achieve structural plasticity by constantly rewiring the pre- and postsynaptic partners.
We implemented this algorithm on the analog neuromorphic system BrainScaleS-2.
We evaluated our implementation in a simple supervised learning scenario, showing its ability to optimize the network topology.
arXiv Detail & Related papers (2019-12-27T10:15:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.