Astrocyte-Integrated Dynamic Function Exchange in Spiking Neural Networks
- URL: http://arxiv.org/abs/2309.08232v1
- Date: Fri, 15 Sep 2023 08:02:29 GMT
- Title: Astrocyte-Integrated Dynamic Function Exchange in Spiking Neural Networks
- Authors: Murat Isik, Kayode Inadagbo
- Abstract summary: This paper presents an innovative methodology for improving the robustness and computational efficiency of Spiking Neural Networks (SNNs).
The proposed approach integrates astrocytes, a type of glial cell prevalent in the human brain, into SNNs, creating astrocyte-augmented networks.
Notably, our astrocyte-augmented SNN displays near-zero latency and theoretically infinite throughput, implying exceptional computational efficiency.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents an innovative methodology for improving the robustness
and computational efficiency of Spiking Neural Networks (SNNs), a critical
component in neuromorphic computing. The proposed approach integrates
astrocytes, a type of glial cell prevalent in the human brain, into SNNs,
creating astrocyte-augmented networks. To achieve this, we designed and
implemented an astrocyte model in two distinct platforms: CPU/GPU and FPGA. Our
FPGA implementation notably utilizes Dynamic Function Exchange (DFX)
technology, enabling real-time hardware reconfiguration and adaptive model
creation based on current operating conditions. The novel approach of
leveraging astrocytes significantly improves the fault tolerance of SNNs,
thereby enhancing their robustness. Notably, our astrocyte-augmented SNN
displays near-zero latency and theoretically infinite throughput, implying
exceptional computational efficiency. Through comprehensive comparative
analysis with prior works, it is established that our model surpasses others in
terms of neuron and synapse count while maintaining an efficient power
consumption profile. These results underscore the potential of our methodology
in shaping the future of neuromorphic computing, by providing robust and
energy-efficient systems.
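The abstract does not reproduce the astrocyte equations, but the core idea can be sketched in a few lines. The following Python toy (all names, constants, and the weight-update rule are illustrative assumptions, not the authors' model) pairs a leaky integrate-and-fire neuron with an astrocyte-like activity trace that nudges synaptic weights toward a target firing rate, one way astrocyte modulation can compensate for faulty or silenced inputs:

```python
import numpy as np

class AstrocyteLIF:
    """Leaky integrate-and-fire neuron with a toy astrocyte that
    modulates synaptic weights toward a target firing rate.
    Hypothetical sketch; not the paper's implementation."""

    def __init__(self, n_inputs, tau_m=20.0, v_th=1.0,
                 tau_ca=200.0, target_rate=0.05, gain=0.01):
        self.w = np.full(n_inputs, 0.1)  # synaptic weights
        self.v = 0.0                     # membrane potential
        self.ca = 0.0                    # astrocyte "calcium" trace
        self.tau_m, self.v_th = tau_m, v_th
        self.tau_ca, self.target_rate, self.gain = tau_ca, target_rate, gain

    def step(self, spikes_in, dt=1.0):
        # leaky integration of weighted presynaptic spikes
        self.v += dt * (-self.v / self.tau_m) + float(self.w @ spikes_in)
        fired = self.v >= self.v_th
        if fired:
            self.v = 0.0  # hard reset after a spike
        # astrocyte: slow low-pass filter of the neuron's own activity
        self.ca += dt * (float(fired) - self.ca) / self.tau_ca
        # gliotransmission: scale weights up when activity falls below
        # target (compensating for lost inputs), down when above it
        self.w *= 1.0 + self.gain * (self.target_rate - self.ca)
        return bool(fired)
```

Under this rule, silencing inputs (a simulated fault) drives the calcium trace below the target rate, so the astrocyte steadily upregulates the surviving weights, which is the fault-tolerance mechanism the abstract alludes to.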
Related papers
- Scalable Mechanistic Neural Networks [52.28945097811129]
We propose an enhanced neural network framework designed for scientific machine learning applications involving long temporal sequences.
By reformulating the original Mechanistic Neural Network (MNN) we reduce the computational time and space complexities from cubic and quadratic with respect to the sequence length, respectively, to linear.
Extensive experiments demonstrate that S-MNN matches the original MNN in precision while substantially reducing computational resources.
arXiv Detail & Related papers (2024-10-08T14:27:28Z)
- Exploiting Heterogeneity in Timescales for Sparse Recurrent Spiking Neural Networks for Energy-Efficient Edge Computing [16.60622265961373]
Spiking Neural Networks (SNNs) represent the forefront of neuromorphic computing.
This paper weaves together three groundbreaking studies that revolutionize SNN performance.
arXiv Detail & Related papers (2024-07-08T23:33:12Z)
- NeuroLGP-SM: Scalable Surrogate-Assisted Neuroevolution for Deep Neural Networks [0.0]
Evolutionary algorithms play a crucial role in the architectural configuration and training of Artificial Deep Neural Networks (DNNs).
In this work, we use phenotypic distance vectors, outputted from DNNs, alongside Kriging Partial Least Squares (KPLS) to make them suitable for search.
Our proposed approach, named Neuro-Linear Genetic Programming surrogate model (NeuroLGP-SM), efficiently and accurately estimates DNN fitness without the need for complete evaluations.
arXiv Detail & Related papers (2024-04-12T19:15:38Z)
- Neuroevolving Electronic Dynamical Networks [0.0]
Neuroevolution is a method of applying an evolutionary algorithm to refine the performance of artificial neural networks through natural selection.
Fitness evaluation of continuous time recurrent neural networks (CTRNNs) can be time-consuming and computationally expensive.
Field programmable gate arrays (FPGAs) have emerged as an increasingly popular solution, due to their high performance and low power consumption.
arXiv Detail & Related papers (2024-04-06T10:54:35Z)
- Understanding the Functional Roles of Modelling Components in Spiking Neural Networks [9.448298335007465]
Spiking neural networks (SNNs) are promising in achieving high computational efficiency with biological fidelity.
We investigate the functional roles of key modelling components, leakage, reset, and recurrence, in leaky integrate-and-fire (LIF) based SNNs.
Specifically, we find that the leakage plays a crucial role in balancing memory retention and robustness, the reset mechanism is essential for uninterrupted temporal processing and computational efficiency, and the recurrence enriches the capability to model complex dynamics at a cost of robustness degradation.
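The roles of these three components can be made concrete with a small simulation in which each one is individually switchable. The sketch below is an illustrative reconstruction under assumed parameters, not the paper's code:

```python
import numpy as np

def lif_layer(inputs, w_in, w_rec, tau=10.0, v_th=1.0,
              leak=True, reset=True, recurrence=True):
    """Simulate a layer of LIF neurons with each modelling component
    (leak, reset, recurrence) toggleable, to probe its functional role."""
    n = w_in.shape[0]
    v = np.zeros(n)       # membrane potentials
    s = np.zeros(n)       # spikes from the previous step
    record = []
    for x in inputs:      # inputs: iterable of input vectors
        dv = w_in @ x
        if recurrence:
            dv += w_rec @ s          # feedback from the layer's own spikes
        if leak:
            v *= (1.0 - 1.0 / tau)   # exponential decay toward rest
        v += dv
        s = (v >= v_th).astype(float)
        if reset:
            v = np.where(s > 0, 0.0, v)  # zero the potential after a spike
        record.append(s.copy())
    return np.array(record)
```

Toggling `leak` off lets weak evidence accumulate indefinitely (more firing, less robustness to noise), while toggling `reset` off lets a neuron fire on consecutive steps from residual potential, mirroring the memory-retention and temporal-processing trade-offs the summary describes.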
arXiv Detail & Related papers (2024-03-25T12:13:20Z)
- Fully Spiking Denoising Diffusion Implicit Models [61.32076130121347]
Spiking neural networks (SNNs) have garnered considerable attention owing to their ability to run on neuromorphic devices with super-high speeds.
We propose a novel approach fully spiking denoising diffusion implicit model (FSDDIM) to construct a diffusion model within SNNs.
We demonstrate that the proposed method outperforms the state-of-the-art fully spiking generative model.
arXiv Detail & Related papers (2023-12-04T09:07:09Z)
- Free-Space Optical Spiking Neural Network [0.0]
We introduce the Free-space Optical deep Spiking Convolutional Neural Network (OSCNN).
This novel approach draws inspiration from computational models of the human eye.
Our results demonstrate promising performance with minimal latency and power consumption compared to their electronic ONN counterparts.
arXiv Detail & Related papers (2023-11-08T09:41:14Z)
- SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence [51.6943465041708]
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency.
We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips.
arXiv Detail & Related papers (2023-10-25T13:15:17Z)
- Energy-Efficient On-Board Radio Resource Management for Satellite Communications via Neuromorphic Computing [59.40731173370976]
We investigate the application of energy-efficient brain-inspired machine learning models for on-board radio resource management.
For relevant workloads, spiking neural networks (SNNs) implemented on Loihi 2 yield higher accuracy, while reducing power consumption by more than 100× as compared to the CNN-based reference platform.
arXiv Detail & Related papers (2023-08-22T03:13:57Z)
- Enhanced physics-constrained deep neural networks for modeling vanadium redox flow battery [62.997667081978825]
We propose an enhanced version of the physics-constrained deep neural network (PCDNN) approach to provide high-accuracy voltage predictions.
The ePCDNN can accurately capture the voltage response throughout the charge--discharge cycle, including the tail region of the voltage discharge curve.
arXiv Detail & Related papers (2022-03-03T19:56:24Z)
- Recurrent Neural Network Learning of Performance and Intrinsic Population Dynamics from Sparse Neural Data [77.92736596690297]
We introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics.
We test the proposed method by training an RNN to simultaneously reproduce internal dynamics and output signals of a physiologically-inspired neural model.
Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons.
arXiv Detail & Related papers (2020-05-05T14:16:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.