PeleNet: A Reservoir Computing Framework for Loihi
- URL: http://arxiv.org/abs/2011.12338v1
- Date: Tue, 24 Nov 2020 19:33:08 GMT
- Title: PeleNet: A Reservoir Computing Framework for Loihi
- Authors: Carlo Michaelis
- Abstract summary: PeleNet aims to simplify reservoir computing for the neuromorphic hardware Loihi.
It provides an automatic and efficient distribution of networks over several cores and chips.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: High-level frameworks for spiking neural networks are a key factor for fast
prototyping and efficient development of complex algorithms. Such frameworks
have emerged in recent years for traditional computers, but programming
neuromorphic hardware is still a challenge: it often requires low-level
programming and detailed knowledge of the neuromorphic chip's hardware. The
PeleNet framework aims to simplify reservoir computing on the neuromorphic
hardware Loihi. It is built on top of Intel's NxSDK and is written in Python. The
framework manages weight matrices, parameters, and probes. In particular, it
provides an automatic and efficient distribution of networks over several cores
and chips. With this, the user is not confronted with technical details and can
concentrate on experiments.
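The reservoir computing paradigm that PeleNet targets can be illustrated with a minimal echo state network in plain NumPy. This is a generic sketch of the approach, not PeleNet's actual API; all function names here are illustrative:

```python
import numpy as np

def make_reservoir(n_in, n_res, spectral_radius=0.9, seed=0):
    """Random input and recurrent weights for an echo state network."""
    rng = np.random.default_rng(seed)
    w_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    w_res = rng.uniform(-0.5, 0.5, (n_res, n_res))
    # Rescale the recurrent matrix so its largest eigenvalue magnitude
    # equals spectral_radius, keeping the dynamics stable but rich.
    w_res *= spectral_radius / max(abs(np.linalg.eigvals(w_res)))
    return w_in, w_res

def run_reservoir(w_in, w_res, inputs):
    """Drive the reservoir with an input sequence and collect its states."""
    x = np.zeros(w_res.shape[0])
    states = []
    for u in inputs:
        x = np.tanh(w_in @ u + w_res @ x)
        states.append(x)
    return np.array(states)

def train_readout(states, targets, ridge=1e-6):
    """Train only the linear readout via ridge regression; the reservoir
    weights stay fixed -- the core idea of reservoir computing."""
    n = states.shape[1]
    return np.linalg.solve(states.T @ states + ridge * np.eye(n),
                           states.T @ targets)
```

On Loihi the reservoir is a spiking network that must be partitioned over cores and chips; PeleNet's contribution is automating that mapping, which this dense, non-spiking sketch glosses over.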
Related papers
- NNTile: a machine learning framework capable of training extremely large GPT language models on a single node [83.9328245724548]
NNTile is based on the StarPU library, which implements task-based parallelism and schedules all provided tasks onto all available processing units.
It means that a particular operation, necessary to train a large neural network, can be performed on any of the CPU cores or GPU devices.
arXiv Detail & Related papers (2025-04-17T16:22:32Z) - Uncomputation in the Qrisp high-level Quantum Programming Framework [1.299941371793082]
We describe the interface for automated generation of uncomputation circuits in our Qrisp framework.
Our algorithm for synthesizing uncomputation circuits in Qrisp is based on an improved version of "Unqomp".
arXiv Detail & Related papers (2023-07-21T08:21:03Z) - The Basis of Design Tools for Quantum Computing: Arrays, Decision Diagrams, Tensor Networks, and ZX-Calculus [55.58528469973086]
Quantum computers promise to efficiently solve important problems classical computers never will.
A fully automated quantum software stack needs to be developed.
This work provides a look "under the hood" of today's tools and showcases how these means are utilized in them, e.g., for simulation, compilation, and verification of quantum circuits.
arXiv Detail & Related papers (2023-01-10T19:00:00Z) - SOL: Reducing the Maintenance Overhead for Integrating Hardware Support into AI Frameworks [0.7614628596146599]
AI frameworks such as Theano, Caffe, Chainer, CNTK, MXNet, PyTorch, and DL4J provide a high-level scripting API.
Less mainstream CPU, GPU or accelerator vendors need to put in a high effort to get their hardware supported by these frameworks.
NEC Laboratories Europe started developing the SOL AI Optimization project several years ago.
arXiv Detail & Related papers (2022-05-19T08:40:46Z) - NetKet 3: Machine Learning Toolbox for Many-Body Quantum Systems [1.0486135378491268]
NetKet is a machine learning toolbox for many-body quantum physics.
This new version is built on top of JAX, a differentiable programming and accelerated linear algebra framework.
The most significant new feature is the possibility to define arbitrary neural network ansätze in pure Python code.
arXiv Detail & Related papers (2021-12-20T13:41:46Z) - Quantized Neural Networks via {-1, +1} Encoding Decomposition and Acceleration [83.84684675841167]
We propose a novel encoding scheme using {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks.
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
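The decomposition idea can be illustrated with a toy example: weights quantized to the grid {-3, -1, 1, 3} split exactly into two branches with entries in {-1, +1}. This is an illustrative sketch of the principle, not the paper's exact scheme; the function name is hypothetical:

```python
import numpy as np

def decompose_pm1(w):
    """Split weights quantized to {-3, -1, 1, 3} into two binary branches
    b0, b1 with entries in {-1, +1} such that w = b0 + 2 * b1."""
    b1 = np.sign(w)   # w is never 0 on this grid, so sign(w) is in {-1, +1}
    b0 = w - 2 * b1   # the remainder also lands in {-1, +1}
    return b0, b1

# Example: a small quantized weight matrix and its binary decomposition.
w = np.array([[3, -1], [1, -3]])
b0, b1 = decompose_pm1(w)
# The two branches reconstruct w exactly: b0 + 2 * b1 == w.
```

At inference time, each binary branch can use fast bitwise kernels, and their scaled outputs are summed, which is where the acceleration comes from.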
arXiv Detail & Related papers (2021-06-18T03:11:15Z) - Reservoir Stack Machines [77.12475691708838]
Memory-augmented neural networks equip a recurrent neural network with an explicit memory to support tasks that require information storage.
We introduce the reservoir stack machine, a model which can provably recognize all deterministic context-free languages.
Our results show that the reservoir stack machine achieves zero error, even on test sequences longer than the training data.
arXiv Detail & Related papers (2021-05-04T16:50:40Z) - Neurocoder: Learning General-Purpose Computation Using Stored Neural Programs [64.56890245622822]
Neurocoder is an entirely new class of general-purpose conditional computational machines.
It "codes" itself in a data-responsive way by composing relevant programs from a set of shareable, modular programs.
We show new capacity to learn modular programs, handle severe pattern shifts and remember old programs as new ones are learnt.
arXiv Detail & Related papers (2020-09-24T01:39:16Z) - P-CRITICAL: A Reservoir Autoregulation Plasticity Rule for Neuromorphic
Hardware [4.416484585765027]
Backpropagation algorithms on recurrent artificial neural networks require an unfolding of accumulated states over time.
We propose a new local plasticity rule named P-CRITICAL designed for automatic reservoir tuning.
We observe an improved performance on tasks coming from various modalities without the need to tune parameters.
arXiv Detail & Related papers (2020-09-11T18:13:03Z) - Spiking Neural Networks Hardware Implementations and Challenges: a
Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z) - Exposing Hardware Building Blocks to Machine Learning Frameworks [4.56877715768796]
We focus on how to design topologies that complement such a view of neurons as unique functions.
We develop a library that supports training a neural network with custom sparsity and quantization.
arXiv Detail & Related papers (2020-04-10T14:26:00Z) - Neural Network Compression Framework for fast model inference [59.65531492759006]
We present a new framework for neural network compression with fine-tuning, which we call the Neural Network Compression Framework (NNCF).
It leverages recent advances of various network compression methods and implements some of them, such as sparsity, quantization, and binarization.
The framework can be used with the training samples supplied with it, or as a standalone package that can be seamlessly integrated into existing training code.
arXiv Detail & Related papers (2020-02-20T11:24:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.