Astromorphic Self-Repair of Neuromorphic Hardware Systems
- URL: http://arxiv.org/abs/2209.07428v1
- Date: Thu, 15 Sep 2022 16:23:45 GMT
- Title: Astromorphic Self-Repair of Neuromorphic Hardware Systems
- Authors: Zhuangyu Han, Nafiul Islam, Abhronil Sengupta
- Abstract summary: This paper explores the self-repair role of glial cells, in particular, astrocytes.
Hardware-software co-design analysis reveals that bio-morphic astrocytic regulation has the potential to self-repair hardware-realistic faults.
- Score: 0.8958368012475248
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While neuromorphic computing architectures based on Spiking Neural Networks
(SNNs) are increasingly gaining interest as a pathway toward bio-plausible
machine learning, attention is still focused on computational units like the
neuron and synapse. Shifting from this neuro-synaptic perspective, this paper
attempts to explore the self-repair role of glial cells, in particular,
astrocytes. The work investigates stronger correlations with astrocyte
computational neuroscience models to develop macro-models with a higher degree
of bio-fidelity that accurately capture the dynamic behavior of the
self-repair process. Hardware-software co-design analysis reveals that
bio-morphic astrocytic regulation has the potential to self-repair
hardware-realistic faults in neuromorphic hardware systems with significantly
better accuracy and repair convergence for unsupervised learning tasks on the MNIST
and F-MNIST datasets.
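As a rough illustration of the self-repair principle (not the paper's actual astrocyte macro-model), the following hypothetical sketch shows the core idea: when a fraction of synapses becomes stuck at zero due to hardware faults, astrocyte-like feedback rescales the surviving synapses so that the neuron's total input drive returns to its pre-fault level. All function names and the one-shot rescaling are illustrative assumptions; the paper models astrocytic modulation of STDP dynamics rather than a direct renormalization.

```python
import numpy as np

def apply_faults(weights, fault_fraction, rng):
    """Simulate stuck-at-zero hardware faults on a random fraction of synapses."""
    mask = rng.random(weights.shape) >= fault_fraction  # True = healthy synapse
    return weights * mask, mask

def astrocyte_repair(faulty_weights, mask, target_sum):
    """Rescale the surviving synapses so the neuron's total input
    drive returns to its pre-fault level (illustrative mechanism)."""
    healthy_sum = faulty_weights[mask].sum()
    repaired = faulty_weights.copy()
    if healthy_sum > 0:
        repaired[mask] *= target_sum / healthy_sum
    return repaired

rng = np.random.default_rng(0)
w = rng.uniform(0.0, 1.0, size=100)         # trained synaptic weights
target = w.sum()                            # pre-fault total synaptic strength
w_faulty, mask = apply_faults(w, 0.6, rng)  # 60% of synapses stuck at zero
w_repaired = astrocyte_repair(w_faulty, mask, target)
print(f"restored total: {w_repaired.sum():.3f} vs target {target:.3f}")
```

The sketch restores the aggregate drive exactly; in the paper's setting, repair quality is instead measured by accuracy and convergence of the retrained SNN under faults.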
Related papers
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- Delving Deeper Into Astromorphic Transformers [1.9775291915550175]
This paper seeks to delve deeper into various key aspects of neuron-synapse-astrocyte interactions to mimic self-attention mechanisms in Transformers.
The cross-layer perspective explored in this work involves bio-plausible modeling of Hebbian and pre-synaptic plasticities in neuron-astrocyte networks.
Our analysis on sentiment and image classification tasks on the IMDB and CIFAR10 datasets underscores the importance of constructing Astromorphic Transformers from both accuracy and learning speed improvement perspectives.
arXiv Detail & Related papers (2023-12-18T04:35:07Z)
- Astrocyte-Enabled Advancements in Spiking Neural Networks for Large Language Modeling [7.863029550014263]
Astrocyte-Modulated Spiking Neural Network (AstroSNN) exhibits exceptional performance in tasks involving memory retention and natural language generation.
AstroSNN shows low latency, high throughput, and reduced memory usage in practical applications.
arXiv Detail & Related papers (2023-12-12T06:56:31Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the input-output relationship of a detailed cortical neuron model with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Increasing Liquid State Machine Performance with Edge-of-Chaos Dynamics Organized by Astrocyte-modulated Plasticity [0.0]
The liquid state machine (LSM) tunes internal weights without backpropagation of gradients.
Recent findings suggest that astrocytes, a long-neglected non-neuronal brain cell, modulate synaptic plasticity and brain dynamics.
We propose the neuron-astrocyte liquid state machine (NALSM) that addresses under-performance through self-organized near-critical dynamics.
arXiv Detail & Related papers (2021-10-26T23:04:40Z)
- Evolving spiking neuron cellular automata and networks to emulate in vitro neuronal activity [0.0]
We produce spiking neural systems that emulate the patterns of behavior of biological neurons in vitro.
Our models were able to produce network-wide synchrony.
The genomes of the top-performing models indicate the excitability and density of connections in the model play an important role in determining the complexity of the produced activity.
arXiv Detail & Related papers (2021-10-15T17:55:04Z)
- Mapping and Validating a Point Neuron Model on Intel's Neuromorphic Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth-generation neuromorphic chip, Loihi.
Loihi is based on the novel idea of Spiking Neural Networks (SNNs), which emulate neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z)
- Neuro-symbolic Neurodegenerative Disease Modeling as Probabilistic Programmed Deep Kernels [93.58854458951431]
We present a probabilistic programmed deep kernel learning approach to personalized, predictive modeling of neurodegenerative diseases.
Our analysis considers a spectrum of neural and symbolic machine learning approaches.
We run evaluations on the problem of Alzheimer's disease prediction, yielding results that surpass deep-learning baselines.
arXiv Detail & Related papers (2020-09-16T15:16:03Z)
- On the Self-Repair Role of Astrocytes in STDP Enabled Unsupervised SNNs [1.0009912692042526]
This work goes beyond the focus of current neuromorphic computing architectures on computational models for neuron and synapse.
We explore the role of glial cells in the fault-tolerant capacity of Spiking Neural Networks trained in an unsupervised fashion using Spike-Timing Dependent Plasticity (STDP).
We characterize the degree of self-repair that can be enabled in such networks with varying degrees of faults ranging from 50% to 90% and evaluate our proposal on the MNIST and Fashion-MNIST datasets.
arXiv Detail & Related papers (2020-09-08T01:14:53Z)
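Several of the papers above rely on STDP for unsupervised training. As a point of reference, a minimal sketch of the standard pair-based STDP rule follows; the learning rates and time constants are illustrative placeholders, not values from any of the listed papers.

```python
import numpy as np

# Pair-based STDP: potentiate when the presynaptic spike precedes the
# postsynaptic spike (dt >= 0), depress otherwise.
A_PLUS, A_MINUS = 0.01, 0.012      # illustrative learning rates
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # trace time constants (ms)

def stdp_dw(dt_ms):
    """Weight change for a spike pair separated by dt_ms = t_post - t_pre."""
    if dt_ms >= 0:
        return A_PLUS * np.exp(-dt_ms / TAU_PLUS)   # potentiation
    return -A_MINUS * np.exp(dt_ms / TAU_MINUS)     # depression

# Causal pairing strengthens the synapse; anti-causal pairing weakens it.
print(stdp_dw(5.0), stdp_dw(-5.0))
```

The asymmetric exponential window is what lets an unsupervised SNN pick up input correlations, and it is the plasticity mechanism that the astrocytic regulation in the self-repair papers modulates.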