Cortical-inspired placement and routing: minimizing the memory resources
  in multi-core neuromorphic processors
        - URL: http://arxiv.org/abs/2208.13587v1
 - Date: Mon, 29 Aug 2022 13:28:02 GMT
 - Title: Cortical-inspired placement and routing: minimizing the memory resources
  in multi-core neuromorphic processors
 - Authors: Vanessa R. C. Leite, Zhe Su, Adrian M. Whatley, Giacomo Indiveri
 - Abstract summary: We propose a network design approach inspired by biological neural networks.
We use this approach to design a new routing scheme optimized for small-world networks.
We present a hardware-aware placement algorithm that optimizes the allocation of resources for small-world network models.
 - Score: 5.391889175209394
 - License: http://creativecommons.org/licenses/by-nc-sa/4.0/
 - Abstract:   Brain-inspired event-based neuromorphic processing systems have emerged as a
promising technology in particular for bio-medical circuits and systems.
However, both neuromorphic and biological implementations of neural networks
have critical energy and memory constraints. To minimize the use of memory
resources in multi-core neuromorphic processors, we propose a network design
approach inspired by biological neural networks. We use this approach to design
a new routing scheme optimized for small-world networks and, at the same time,
to present a hardware-aware placement algorithm that optimizes the allocation
of resources for small-world network models. We validate the algorithm with a
canonical small-world network and present preliminary results for other
networks derived from it.
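As a rough illustration of the problem the abstract describes, the sketch below builds a canonical small-world network with the Watts-Strogatz generator and maps it onto cores with a naive index-contiguous placement. The network size, core size, and placement heuristic are assumptions chosen for the example, not the authors' algorithm.

```python
# Illustrative sketch (assumed parameters, not the paper's algorithm):
# place a Watts-Strogatz small-world network onto a multi-core array so
# that most synapses stay inside a core, which is what keeps the shared
# routing memory small.
import networkx as nx

N_NEURONS = 256     # assumed network size
CORE_SIZE = 64      # assumed neurons per core
K, P = 8, 0.1       # ring degree and rewiring probability (small-world regime)

# Canonical small-world graph: mostly local ring connections plus a few
# long-range shortcuts created by rewiring each edge with probability P.
g = nx.watts_strogatz_graph(N_NEURONS, K, P, seed=1)

# Naive locality-aware placement: consecutive neuron indices share a core,
# so the ring-local edges of the Watts-Strogatz graph stay on-core.
placement = {n: n // CORE_SIZE for n in g.nodes}

# Only synapses that cross a core boundary need entries in the inter-core
# routing tables; on-core synapses are handled by local memory.
cross_core = sum(1 for u, v in g.edges if placement[u] != placement[v])
print(f"{g.number_of_edges()} edges, {cross_core} cross-core "
      f"({100 * cross_core / g.number_of_edges():.1f}%)")
```

Because the rewiring probability is low, most connections link nearby indices, so an index-contiguous placement keeps the large majority of synapses on-core; only the few long-range shortcuts consume shared inter-core routing memory.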
 
       
      
        Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv  Detail & Related papers  (2024-03-18T18:01:01Z)
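To make the representation above concrete, the following is a minimal sketch of the general idea, with assumed toy layer sizes and node/edge features rather than the paper's exact construction: a small MLP becomes a graph whose nodes are neurons carrying biases and whose edges carry the individual weights.

```python
# Minimal sketch (assumed construction): nodes are neurons with their bias
# as a feature, edges carry the individual weight parameters, so a graph
# neural network can process the architecture directly.
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [3, 4, 2]                       # assumed toy MLP

# Assign consecutive node ids to each layer.
ids, layers = 0, []
for size in layer_sizes:
    layers.append(list(range(ids, ids + size)))
    ids += size

g = nx.DiGraph()
for depth, layer in enumerate(layers):
    for n in layer:
        g.add_node(n, layer=depth, bias=float(rng.normal()) if depth else 0.0)
for src, dst in zip(layers[:-1], layers[1:]):
    for u in src:
        for v in dst:
            g.add_edge(u, v, weight=float(rng.normal()))

print(g.number_of_nodes(), "neuron nodes,", g.number_of_edges(), "weight edges")
```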
- Design and development of opto-neural processors for simulation of neural networks trained in image detection for potential implementation in hybrid robotics [0.0]
Living neural networks offer advantages of lower power consumption, faster processing, and biological realism.
This work proposes a simulated living neural network trained indirectly by backpropagating STDP-based algorithms using precision activation by optogenetics.
arXiv  Detail & Related papers  (2024-01-17T04:42:49Z)
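For readers unfamiliar with the STDP ingredient named in the entry above, the textbook pair-based rule is sketched below; the amplitudes and time constants are assumed values, and the paper's optogenetics-driven training procedure is more involved than this.

```python
# Textbook pair-based STDP rule (assumed amplitudes and time constants).
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants in milliseconds

def stdp_dw(t_pre: float, t_post: float) -> float:
    """Weight change for one pre/post spike pair (spike times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:
        # Pre fires before post: potentiate, decaying with the time gap.
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    # Post fires before pre: depress.
    return -A_MINUS * np.exp(dt / TAU_MINUS)

print(stdp_dw(10.0, 15.0))   # pre leads post by 5 ms -> positive change
print(stdp_dw(15.0, 10.0))   # post leads pre by 5 ms -> negative change
```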
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv  Detail & Related papers  (2023-03-30T02:40:28Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv  Detail & Related papers  (2022-10-06T13:04:45Z)
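The temporal and neuronal sparsity mentioned in the entry above can be seen in a minimal leaky integrate-and-fire simulation; all parameters are assumed toy values and this is not the paper's regression framework.

```python
# Minimal leaky integrate-and-fire simulation (assumed toy parameters):
# the neuron is driven at every time step but emits only a few discrete
# spikes, which is the sparsity that event-driven hardware exploits.
import numpy as np

DT, T_STEPS = 1.0, 200                 # time step (ms) and number of steps
TAU, V_TH, V_RESET = 20.0, 1.0, 0.0    # membrane time constant, threshold, reset

rng = np.random.default_rng(0)
current = rng.uniform(0.0, 0.12, size=T_STEPS)   # noisy input drive

v, spikes = 0.0, []
for t in range(T_STEPS):
    v += DT * (-v / TAU + current[t])  # leaky integration of the input
    if v >= V_TH:                      # threshold crossing -> emit a spike
        spikes.append(t)
        v = V_RESET
print(f"{len(spikes)} spikes in {T_STEPS} steps at t =", spikes)
```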
- A hardware-software co-design approach to minimize the use of memory resources in multi-core neuromorphic processors [5.391889175209394]
We propose a hardware-software co-design approach for minimizing the use of memory resources in multi-core neuromorphic processors.
We use this approach to design new routing schemes optimized for small-world networks and to provide guidelines for designing novel application-specific neuromorphic chips.
arXiv  Detail & Related papers  (2022-03-01T17:59:55Z)
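As a back-of-the-envelope illustration of why the routing scheme matters for memory, the sketch below compares one table entry per cross-core synapse with a shared entry per communicating core pair; both schemes and all numbers are assumptions for the example, not the architecture of the paper above.

```python
# Rough memory estimate (assumed schemes and numbers): compare routing-table
# entries when every cross-core synapse has its own entry versus when all
# synapses between a pair of cores share one multicast entry plus a local
# fan-out entry per distinct target neuron in the destination core.

def point_to_point_entries(cross_core_synapses: int) -> int:
    # One routing entry per individual cross-core synapse.
    return cross_core_synapses

def clustered_entries(core_pairs: int, distinct_targets: int) -> int:
    # One shared entry per communicating (source core, destination core)
    # pair, plus one fan-out entry per distinct target neuron.
    return core_pairs + distinct_targets

# Assumed example: 10,000 cross-core synapses over 40 communicating core
# pairs that reach 2,000 distinct target neurons.
print(point_to_point_entries(10_000))   # 10000 entries
print(clustered_entries(40, 2_000))     # 2040 entries
```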
- POPPINS : A Population-Based Digital Spiking Neuromorphic Processor with Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in 180nm process technology with two hierarchy populations.
The proposed approach enables the development of biomimetic neuromorphic systems and various low-power, low-latency inference processing applications.
arXiv  Detail & Related papers  (2022-01-19T09:26:34Z)
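A generic fixed-point quadratic integrate-and-fire update conveys the flavour of such integer neuron datapaths; the constants and the shift-based scaling below are assumptions, not the POPPINS design.

```python
# Generic fixed-point quadratic integrate-and-fire update (assumed constants,
# not the POPPINS datapath): all arithmetic stays in integers, which keeps
# the neuron cheap to implement in digital logic.
V_REST, V_CRIT, V_PEAK, V_RESET = 0, 200, 1000, 0   # assumed integer levels
SHIFT = 10                                          # scale down by 2**SHIFT

def qif_step(v: int, i_in: int) -> tuple[int, bool]:
    """One integer QIF update; returns (new membrane value, spiked?)."""
    # Quadratic drive: decays below V_CRIT, runs away above it.
    dv = ((v - V_REST) * (v - V_CRIT)) >> SHIFT
    v = v + dv + i_in
    if v >= V_PEAK:
        return V_RESET, True
    return v, False

v, n_spikes = 0, 0
for _ in range(100):
    v, fired = qif_step(v, 30)   # constant assumed input current
    n_spikes += fired
print("spikes:", n_spikes)
```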
- Max and Coincidence Neurons in Neural Networks [0.07614628596146598]
We optimize networks containing models of the max and coincidence neurons using neural architecture search.
We analyze the structure, operations, and neurons of optimized networks to develop a signal-processing ResNet.
The developed network achieves an average of 2% improvement in accuracy and a 25% improvement in network size across a variety of datasets.
arXiv  Detail & Related papers  (2021-10-04T07:13:50Z)
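One common reading of the two neuron models in the entry above is sketched below; these definitions are assumptions for illustration and the paper's exact models may differ.

```python
# Assumed illustrative definitions of the two neuron models.
import numpy as np

def max_neuron(x: np.ndarray, w: np.ndarray) -> float:
    """Forward the strongest individually weighted input."""
    return float(np.max(w * x))

def coincidence_neuron(x: np.ndarray, k: int = 2) -> float:
    """Fire (output 1) only when at least k inputs are active together."""
    return 1.0 if np.count_nonzero(x > 0) >= k else 0.0

x = np.array([0.0, 0.7, 0.9, 0.0])
w = np.array([1.0, 0.5, 1.0, 1.0])
print(max_neuron(x, w))        # 0.9
print(coincidence_neuron(x))   # 1.0 -> two inputs are active
```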
- An SMT-Based Approach for Verifying Binarized Neural Networks [1.4394939014120451]
We propose an SMT-based technique for verifying Binarized Neural Networks.
One novelty of our technique is that it allows the verification of neural networks that include both binarized and non-binarized components.
We implement our technique as an extension to the Marabou framework, and use it to evaluate the approach on popular binarized neural network architectures.
arXiv  Detail & Related papers  (2020-11-05T16:21:26Z)
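To give a feel for SMT-based verification of a binarized neuron, here is a minimal sketch using the z3 solver's Python API; the paper instead extends the Marabou framework, and the toy weights and query below are assumptions.

```python
# Minimal SMT encoding of one binarized neuron with the z3 Python API
# (assumed toy weights; the paper works within the Marabou framework).
from z3 import Bools, If, Solver, Sum, sat

weights = [1, -1, 1, 1]               # fixed binarized weights in {-1, +1}
x = Bools("x0 x1 x2 x3")              # True encodes +1, False encodes -1

# Pre-activation is the weighted sum of the +/-1 inputs.
pre_activation = Sum([If(xi, wi, -wi) for xi, wi in zip(x, weights)])
neuron_fires = pre_activation >= 0    # sign activation of the BNN neuron

s = Solver()
s.add(neuron_fires)                   # query: is there an input that fires it?
if s.check() == sat:
    print(s.model())                  # a concrete satisfying input assignment
```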
- Stochastic Markov Gradient Descent and Training Low-Bit Neural Networks [77.34726150561087]
We introduce Stochastic Markov Gradient Descent (SMGD), a discrete optimization method applicable to training quantized neural networks.
We provide theoretical guarantees of algorithm performance as well as encouraging numerical results.
arXiv  Detail & Related papers  (2020-08-25T15:48:15Z)
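Low-bit training keeps weights and updates on a coarse grid; stochastic rounding, sketched below, is one standard ingredient of such schemes and is shown only for intuition, not as the SMGD update itself.

```python
# Stochastic rounding onto a fixed quantization grid (assumed step size):
# values are rounded up or down at random so the result is unbiased in
# expectation, which helps tiny gradient steps survive quantization.
import numpy as np

rng = np.random.default_rng(0)

def stochastic_round(x: np.ndarray, step: float = 2 ** -4) -> np.ndarray:
    scaled = x / step
    lower = np.floor(scaled)
    p_up = scaled - lower                    # distance to the upper neighbour
    round_up = rng.random(x.shape) < p_up
    return (lower + round_up) * step

w = rng.normal(scale=0.1, size=5)
print(w)
print(stochastic_round(w))
```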
- Spiking Neural Networks Hardware Implementations and Challenges: a Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv  Detail & Related papers  (2020-05-04T13:24:00Z)
- Structural plasticity on an accelerated analog neuromorphic hardware system [0.46180371154032884]
We present a strategy to achieve structural plasticity by constantly rewiring the pre- and postsynaptic partners.
We implemented this algorithm on the analog neuromorphic system BrainScaleS-2.
We evaluated our implementation in a simple supervised learning scenario, showing its ability to optimize the network topology.
arXiv  Detail & Related papers  (2019-12-27T10:15:58Z) 
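A minimal sketch of the rewiring step described in the last entry: each neuron's weakest synapse is periodically pruned and replaced by a connection to a new random presynaptic partner. The fan-in, initial weight, and pruning criterion are assumed values, not the BrainScaleS-2 implementation.

```python
# Structural plasticity by rewiring (assumed sizes and criteria): every
# postsynaptic neuron keeps a fixed fan-in; its weakest synapse is pruned
# and replaced by a new synapse from a random presynaptic partner.
import numpy as np

rng = np.random.default_rng(0)
N_PRE, N_POST, FAN_IN = 32, 8, 4

# pre[j, k] = index of the k-th presynaptic partner of postsynaptic neuron j
pre = rng.choice(N_PRE, size=(N_POST, FAN_IN), replace=True)
w = rng.uniform(0.0, 1.0, size=(N_POST, FAN_IN))   # synaptic weights

def rewire_step(pre: np.ndarray, w: np.ndarray) -> None:
    """Prune each neuron's weakest synapse and grow a new random one."""
    weakest = np.argmin(w, axis=1)                 # one synapse per neuron
    for j, k in enumerate(weakest):
        pre[j, k] = rng.integers(N_PRE)            # new presynaptic partner
        w[j, k] = 0.1                              # assumed initial weight

rewire_step(pre, w)
print(pre[0], w[0])
```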
This list is automatically generated from the titles and abstracts of the papers on this site.
       
     
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.