Bio-realistic Neural Network Implementation on Loihi 2 with Izhikevich Neurons
- URL: http://arxiv.org/abs/2307.11844v2
- Date: Fri, 28 Jul 2023 21:23:08 GMT
- Title: Bio-realistic Neural Network Implementation on Loihi 2 with Izhikevich Neurons
- Authors: Recep Buğra Uludağ and Serhat Çağdaş and Yavuz Selim İşler and Neslihan Serap Şengör and Ismail Akturk
- Abstract summary: We present a bio-realistic basal ganglia neural network and its integration into Intel's Loihi neuromorphic processor to perform a simple Go/No-Go task.
We used the Izhikevich neuron model, implemented as microcode, instead of the Leaky-Integrate-and-Fire neuron model that has built-in support on Loihi.
- Score: 0.10499611180329801
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we present a bio-realistic basal ganglia neural network and
its integration into Intel's Loihi neuromorphic processor to perform a simple
Go/No-Go task. To incorporate a more bio-realistic and diverse set of neuron
dynamics, we used the Izhikevich neuron model, implemented as microcode, instead
of the Leaky-Integrate-and-Fire (LIF) neuron model that has built-in support on
Loihi. This work aims to demonstrate the feasibility of implementing
computationally efficient custom neuron models on Loihi for building spiking
neural networks (SNNs) that feature these custom neurons to realize
bio-realistic neural networks.
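The custom neuron referenced above follows Izhikevich's two-variable model, which couples a membrane potential v with a recovery variable u and handles spiking through a reset rule. Below is a minimal floating-point Python sketch of those dynamics for a single neuron; it is not the authors' Loihi 2 microcode (which targets the chip's fixed-point, per-neuron instruction set), and the regular-spiking parameters a, b, c, d are the standard defaults from Izhikevich's original formulation, not values reported in this paper.

```python
def izhikevich_step(v, u, I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
    """One forward-Euler step of the Izhikevich model (regular-spiking defaults).

    dv/dt = 0.04*v^2 + 5*v + 140 - u + I
    du/dt = a*(b*v - u)
    When v reaches 30 mV: emit a spike, then reset v <- c and u <- u + d.
    """
    v = v + dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u = u + dt * a * (b * v - u)
    spiked = v >= 30.0
    if spiked:
        v, u = c, u + d
    return v, u, spiked


# Drive one neuron with a step current and count its spikes.
v, u = -65.0, 0.2 * -65.0          # resting state: u = b*v
spike_count = 0
for t in range(1000):              # 1000 steps of dt = 1 ms
    I = 10.0 if t >= 100 else 0.0  # constant input switched on at t = 100 ms
    v, u, spiked = izhikevich_step(v, u, I)
    spike_count += spiked
print("spikes emitted:", spike_count)
```

On Loihi 2, a comparable update would be expressed as fixed-point per-neuron microcode rather than floating-point Python, which is the feasibility point the abstract makes.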
Related papers
- Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically-informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration and complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z)
- Astrocyte-Enabled Advancements in Spiking Neural Networks for Large Language Modeling [7.863029550014263]
Astrocyte-Modulated Spiking Neural Network (AstroSNN) exhibits exceptional performance in tasks involving memory retention and natural language generation.
AstroSNN shows low latency, high throughput, and reduced memory usage in practical applications.
arXiv Detail & Related papers (2023-12-12T06:56:31Z)
- Learning to Act through Evolution of Neural Diversity in Random Neural Networks [9.387749254963595]
In most artificial neural networks (ANNs), neural computation is abstracted to an activation function that is usually shared between all neurons.
We propose the optimization of neuro-centric parameters to attain a set of diverse neurons that can perform complex computations.
arXiv Detail & Related papers (2023-05-25T11:33:04Z)
- Multi-compartment Neuron and Population Encoding improved Spiking Neural Network for Deep Distributional Reinforcement Learning [3.036382664997076]
Spiking neural networks (SNNs) exhibit significantly lower energy consumption and are more suitable for incorporating multi-scale biological characteristics.
In this paper, we propose a brain-inspired SNN-based deep distributional reinforcement learning algorithm that combines a bio-inspired multi-compartment neuron (MCN) model with a population coding method.
arXiv Detail & Related papers (2023-01-18T02:45:38Z)
- Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z)
- SIT: A Bionic and Non-Linear Neuron for Spiking Neural Network [12.237928453571636]
Spiking Neural Networks (SNNs) have piqued researchers' interest because of their capacity to process temporal information and low power consumption.
Current state-of-the-art methods are limited in biological plausibility and performance because their neurons are generally built on the simple Leaky-Integrate-and-Fire (LIF) model.
Due to the high level of dynamic complexity, modern neuron models have seldom been implemented in SNN practice.
arXiv Detail & Related papers (2022-03-30T07:50:44Z)
- POPPINS: A Population-Based Digital Spiking Neuromorphic Processor with Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in 180 nm process technology with two hierarchical populations.
The proposed approach enables the development of biomimetic neuromorphic systems and various low-power, low-latency inference processing applications.
arXiv Detail & Related papers (2022-01-19T09:26:34Z)
- Mapping and Validating a Point Neuron Model on Intel's Neuromorphic Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth-generation neuromorphic chip, Loihi.
Loihi is based on the novel idea of Spiking Neural Networks (SNNs) emulating the neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z)
- Recurrent Neural Network Learning of Performance and Intrinsic Population Dynamics from Sparse Neural Data [77.92736596690297]
We introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics.
We test the proposed method by training an RNN to simultaneously reproduce internal dynamics and output signals of a physiologically-inspired neural model.
Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons.
arXiv Detail & Related papers (2020-05-05T14:16:54Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)