A hardware-software co-design approach to minimize the use of memory
resources in multi-core neuromorphic processors
- URL: http://arxiv.org/abs/2203.00655v1
- Date: Tue, 1 Mar 2022 17:59:55 GMT
- Title: A hardware-software co-design approach to minimize the use of memory
resources in multi-core neuromorphic processors
- Authors: Vanessa R. C. Leite, Zhe Su, Adrian M. Whatley, Giacomo Indiveri
- Abstract summary: We propose a hardware-software co-design approach for minimizing the use of memory resources in multi-core neuromorphic processors.
We use this approach to design new routing schemes optimized for small-world networks and to provide guidelines for designing novel application-specific neuromorphic chips.
- Score: 5.391889175209394
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Both in electronics and biology, physical implementations of neural networks
have severe energy and memory constraints. We propose a hardware-software
co-design approach for minimizing the use of memory resources in multi-core
neuromorphic processors, by taking inspiration from biological neural networks.
We use this approach to design new routing schemes optimized for small-world
networks and to provide guidelines for designing novel application-specific
multi-core neuromorphic chips. Starting from the proposed hierarchical routing
scheme, we present a hardware-aware placement algorithm that optimizes the
allocation of resources for arbitrary network models. We validate the algorithm
with a canonical small-world network and present preliminary results for other
networks derived from it.
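As a point of reference for the validation setup: a "canonical small-world network" is typically generated with the Watts-Strogatz model, which interpolates between a ring lattice and a random graph. A minimal sketch with networkx, assuming illustrative parameter values not taken from the paper:

```python
import networkx as nx

# Watts-Strogatz small-world graph: each of n nodes starts on a ring connected
# to its k nearest neighbours; every edge is rewired with probability p.
n_neurons = 256   # illustrative sizes, not the paper's
k_ring = 8
p_rewire = 0.1
G = nx.watts_strogatz_graph(n_neurons, k_ring, p_rewire)

# Small-world signature: high clustering together with short path lengths.
print("clustering:", nx.average_clustering(G))
print("avg path length:", nx.average_shortest_path_length(G))
```

Such graphs are mostly locally connected, which is the property hierarchical routing schemes can exploit to keep per-core routing memory small.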
Related papers
- Core Placement Optimization of Many-core Brain-Inspired Near-Storage Systems for Spiking Neural Network Training [21.75341703605822]
We propose a many-core deployment optimization method for SNN training based on an off-policy deterministic actor-critic algorithm.
We update the parameters of the policy network through proximal policy optimization to achieve deployment optimization of SNN models in the many-core near-memory computing architecture.
Our method overcomes problems such as uneven computation and storage loads across cores and the formation of local communication hotspots.
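The summary names the two objectives (balanced per-core load, no communication hotspots) without giving the reward. A hedged sketch of one plausible cost function over placements, purely illustrative and not the paper's reward:

```python
import numpy as np

def placement_cost(core_of, load, traffic, alpha=1.0, beta=1.0):
    """core_of[i]   -> core index assigned to neuron group i
       load[i]      -> compute/storage load of group i
       traffic[i,j] -> spike traffic between groups i and j"""
    n_cores = core_of.max() + 1
    # Per-core load: penalize imbalance via the variance across cores.
    core_load = np.bincount(core_of, weights=load, minlength=n_cores)
    imbalance = core_load.var()
    # Inter-core traffic: sum traffic whose endpoints sit on different cores.
    cross = core_of[:, None] != core_of[None, :]
    hotspot = (traffic * cross).sum() / 2.0
    return alpha * imbalance + beta * hotspot
```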
arXiv Detail & Related papers (2024-11-29T01:46:30Z) - Energy-Aware FPGA Implementation of Spiking Neural Network with LIF Neurons [0.5243460995467893]
Spiking Neural Networks (SNNs) stand out as a cutting-edge solution for TinyML.
This paper presents a novel SNN architecture based on the first-order Leaky Integrate-and-Fire (LIF) neuron model.
A hardware-friendly LIF design is also proposed and implemented on a Xilinx Artix-7 FPGA.
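For context, a first-order LIF neuron reduces to a one-line state update: leak, integrate, fire on threshold, reset. A minimal discrete-time sketch with illustrative parameters (not the paper's hardware design):

```python
import numpy as np

def lif_step(v, i_in, beta=0.9, v_th=1.0):
    """One discrete update of a first-order LIF neuron."""
    v = beta * v + i_in          # leaky integration of the input current
    spike = v >= v_th            # fire when the membrane crosses threshold
    v = np.where(spike, 0.0, v)  # hard reset after a spike
    return v, spike

v = np.zeros(4)                  # membrane potentials of four neurons
for t in range(10):
    v, s = lif_step(v, i_in=np.random.rand(4) * 0.3)
```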
arXiv Detail & Related papers (2024-11-03T16:42:10Z) - Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
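To make the encoding concrete, here is a hedged sketch of the general idea of turning an MLP into a graph whose nodes are neurons and whose edge features are weights; the construction details are assumptions, not the paper's exact representation:

```python
import networkx as nx
import numpy as np

layer_sizes = [3, 4, 2]               # an arbitrary toy MLP
offset = np.cumsum([0] + layer_sizes) # global node ids per layer

G = nx.DiGraph()
for l in range(len(layer_sizes) - 1):
    W = np.random.randn(layer_sizes[l + 1], layer_sizes[l])
    for i in range(layer_sizes[l]):
        for j in range(layer_sizes[l + 1]):
            # Edge feature = the connecting weight; node ids encode position.
            G.add_edge(offset[l] + i, offset[l + 1] + j, weight=W[j, i])
```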
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
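The reduction the solver builds on can be illustrated in miniature: discretizing a linear ODE with forward Euler turns each step into a linear equality constraint, so the trajectory is the solution of a feasibility linear program. A toy sketch (illustrative only, not NeuRLP):

```python
import numpy as np
from scipy.optimize import linprog

a, h, N = -1.0, 0.01, 100            # dy/dt = a*y, step size, number of steps
n = N + 1                            # unknowns y_0 .. y_N

A_eq = np.zeros((N + 1, n))
b_eq = np.zeros(N + 1)
A_eq[0, 0], b_eq[0] = 1.0, 1.0       # initial condition y_0 = 1
for t in range(N):                   # Euler step: y_{t+1} - (1 + h*a)*y_t = 0
    A_eq[t + 1, t + 1] = 1.0
    A_eq[t + 1, t] = -(1.0 + h * a)

res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=[(None, None)] * n)
print(res.x[-1], np.exp(a * h * N))  # Euler LP solution vs. exact e^{a t}
```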
arXiv Detail & Related papers (2024-02-20T15:23:24Z) - Recent Advances in Scalable Energy-Efficient and Trustworthy Spiking
Neural networks: from Algorithms to Technology [11.479629320025673]
Spiking neural networks (SNNs) have become an attractive alternative to deep neural networks for a broad range of signal processing applications.
We describe advances in algorithmic and optimization innovations to efficiently train and scale low-latency, energy-efficient SNNs.
We discuss the potential path forward for research in building deployable SNN systems.
arXiv Detail & Related papers (2023-12-02T19:47:00Z) - Cortical-inspired placement and routing: minimizing the memory resources
in multi-core neuromorphic processors [5.391889175209394]
We propose a network design approach inspired by biological neural networks.
We use this approach to design a new routing scheme optimized for small-world networks.
We present a hardware-aware placement algorithm that optimizes the allocation of resources for small-world network models.
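A hedged illustration of the placement goal: assign neurons to cores so that most synapses stay core-local, since local connections need no routing entries. Below, a standard Kernighan-Lin bisection stands in for the paper's algorithm:

```python
import networkx as nx

G = nx.watts_strogatz_graph(64, 6, 0.1)   # toy small-world network
core_a, core_b = nx.algorithms.community.kernighan_lin_bisection(G)

# Count synapses whose endpoints land on different cores (routing traffic).
cross = sum(1 for u, v in G.edges if (u in core_a) != (v in core_a))
print(f"{cross} of {G.number_of_edges()} edges cross cores")
```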
arXiv Detail & Related papers (2022-08-29T13:28:02Z) - Instant Neural Graphics Primitives with a Multiresolution Hash Encoding [67.33850633281803]
We present a versatile new input encoding that permits the use of a smaller network without sacrificing quality.
A small neural network is augmented by a multiresolution hash table of trainable feature vectors whose values are optimized through stochastic gradient descent.
We achieve a combined speedup of several orders of magnitude, enabling training of high-quality neural graphics primitives in a matter of seconds.
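The per-level lookup is driven by a spatial hash of integer grid coordinates; the XOR-of-primes hash and the primes below follow the Instant-NGP paper, while the surrounding scaffolding is illustrative:

```python
import numpy as np

PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def hash_grid_index(coords, table_size):
    """coords: integer grid coordinates, shape (..., 3)."""
    h = np.zeros(coords.shape[:-1], dtype=np.uint64)
    for d in range(coords.shape[-1]):
        # XOR of coordinate-times-prime, wrapping modulo 2**64.
        h ^= coords[..., d].astype(np.uint64) * PRIMES[d]
    return h % np.uint64(table_size)

T = 2 ** 14                                               # entries per level
table = np.random.randn(T, 2).astype(np.float32) * 1e-4   # trainable features
idx = hash_grid_index(np.array([[3, 7, 1]]), T)
features = table[idx]                                     # looked-up vector
```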
arXiv Detail & Related papers (2022-01-16T07:22:47Z) - Communication-Efficient Separable Neural Network for Distributed
Inference on Edge Devices [2.28438857884398]
We propose a novel method of exploiting model parallelism to separate a neural network for distributed inference.
Under proper specifications of devices and configurations of models, our experiments show that the inference of large neural networks on edge clusters can be distributed and accelerated.
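The basic model-parallel split behind such distributed inference can be shown in a few lines: partition a layer's weight matrix across devices, compute partial outputs independently, and aggregate. The even split below is illustrative, not the paper's separation method:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 128))          # full layer weights (out x in)
x = rng.standard_normal(128)                 # input activation

parts = np.array_split(W, 4, axis=0)         # one row-slice per "device"
partial_outputs = [Wi @ x for Wi in parts]   # computed independently
y = np.concatenate(partial_outputs)          # aggregation step

assert np.allclose(y, W @ x)                 # matches the undivided layer
```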
arXiv Detail & Related papers (2021-11-03T19:30:28Z) - Quantized Neural Networks via {-1, +1} Encoding Decomposition and
Acceleration [83.84684675841167]
We propose a novel encoding scheme using {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks.
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
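The underlying arithmetic is standard: an M-bit unsigned quantized value is an affine combination of its M bit planes, and re-encoding each bit b_i as c_i = 2*b_i - 1 in {-1, +1} yields the binary branches. A hedged sketch (branch details of the paper's scheme are not reproduced):

```python
import numpy as np

def to_pm1_planes(x, M):
    """x: integer array in [0, 2**M - 1] -> list of M {-1,+1} planes."""
    return [2 * ((x >> i) & 1) - 1 for i in range(M)]

def from_pm1_planes(planes):
    """Invert c_i = 2*b_i - 1: x = (sum_i 2^i c_i + 2^M - 1) / 2."""
    M = len(planes)
    acc = sum((2 ** i) * p for i, p in enumerate(planes))
    return (acc + (2 ** M - 1)) // 2

x = np.array([0, 3, 7, 12, 15])
planes = to_pm1_planes(x, M=4)
assert np.array_equal(from_pm1_planes(planes), x)
```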
arXiv Detail & Related papers (2021-06-18T03:11:15Z) - Real-time Multi-Task Diffractive Deep Neural Networks via
Hardware-Software Co-design [1.6066483376871004]
This work proposes a novel hardware-software co-design method that enables robust and noise-resilient Multi-task Learning in D$^2$NNs.
Our experimental results demonstrate significant improvements in versatility and hardware efficiency, and also demonstrate the robustness of the proposed multi-task D$^2$NN architecture.
arXiv Detail & Related papers (2020-12-16T12:29:54Z) - Stochastic Markov Gradient Descent and Training Low-Bit Neural Networks [77.34726150561087]
We introduce Stochastic Markov Gradient Descent (SMGD), a discrete optimization method applicable to training quantized neural networks.
We provide theoretical guarantees of algorithm performance as well as encouraging numerical results.
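The summary does not spell out the SMGD update, so the sketch below shows a common building block of low-bit training instead: stochastically rounding a gradient step back onto the quantization grid, which is unbiased in expectation. Illustrative only, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_round(x, step):
    """Round x to a multiple of `step`, up or down with probability
    proportional to proximity, so the rounding is unbiased in expectation."""
    scaled = x / step
    lo = np.floor(scaled)
    p_up = scaled - lo
    return (lo + (rng.random(x.shape) < p_up)) * step

w = stochastic_round(np.zeros(8), step=2 ** -4)    # weights on a 1/16 grid
grad = rng.standard_normal(8)
w = stochastic_round(w - 0.01 * grad, step=2 ** -4)  # quantized update
```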
arXiv Detail & Related papers (2020-08-25T15:48:15Z) - Spiking Neural Networks Hardware Implementations and Challenges: a
Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.