Bio-Inspired Mamba: Temporal Locality and Bioplausible Learning in Selective State Space Models
- URL: http://arxiv.org/abs/2409.11263v1
- Date: Tue, 17 Sep 2024 15:11:39 GMT
- Title: Bio-Inspired Mamba: Temporal Locality and Bioplausible Learning in Selective State Space Models
- Authors: Jiahao Qin,
- Abstract summary: Bio-Inspired Mamba is a novel online learning framework for selective state space models that integrates biological learning principles with the Mamba architecture.
BIM combines Real-Time Recurrent Learning (RTRL) with Spike-Timing-Dependent Plasticity (STDP)-like local learning rules, addressing the challenges of temporal locality and biological plausibility in training spiking neural networks.
We evaluate BIM on language modeling, speech recognition, and biomedical signal analysis tasks, demonstrating competitive performance against traditional methods while adhering to biological learning principles.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces Bio-Inspired Mamba (BIM), a novel online learning framework for selective state space models that integrates biological learning principles with the Mamba architecture. BIM combines Real-Time Recurrent Learning (RTRL) with Spike-Timing-Dependent Plasticity (STDP)-like local learning rules, addressing the challenges of temporal locality and biological plausibility in training spiking neural networks. Our approach leverages the inherent connection between backpropagation through time and STDP, offering a computationally efficient alternative that maintains the ability to capture long-range dependencies. We evaluate BIM on language modeling, speech recognition, and biomedical signal analysis tasks, demonstrating competitive performance against traditional methods while adhering to biological learning principles. Results show improved energy efficiency and potential for neuromorphic hardware implementation. BIM not only advances the field of biologically plausible machine learning but also provides insights into the mechanisms of temporal information processing in biological neural networks.
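The abstract describes pairing an RTRL-style online eligibility trace with an STDP-like local rule on top of a selective (input-dependent) state-space recurrence. The sketch below is a minimal, self-contained illustration of that combination for a toy diagonal selective SSM; it is not the authors' implementation, and all names, the softplus discretisation, the three-factor gating, and the learning rates are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): a diagonal selective SSM
# trained online with an RTRL-style eligibility trace plus an STDP-like
# pre/post-trace term, gated by a global error signal (three-factor style).
import numpy as np

rng = np.random.default_rng(0)
d_in, d_state = 8, 16          # toy input and hidden-state sizes

# Toy selective-SSM parameters: the step size Delta depends on the input.
W_delta = rng.normal(scale=0.1, size=(d_state, d_in))   # controls Delta(x)
A_log   = rng.normal(scale=0.1, size=(d_state,))        # log of continuous-time decay rate
W_B     = rng.normal(scale=0.1, size=(d_state, d_in))   # input projection (learned online here)
W_C     = rng.normal(scale=0.1, size=(d_state,))        # readout

h       = np.zeros(d_state)            # hidden state
elig_B  = np.zeros((d_state, d_in))    # RTRL eligibility dh/dW_B (exact for a diagonal recurrence)
pre_tr  = np.zeros(d_in)               # low-pass "pre-synaptic" activity trace (STDP-like)
post_tr = np.zeros(d_state)            # low-pass "post-synaptic" activity trace

lr, tau_pre, tau_post = 1e-3, 0.9, 0.9

def step(x, target):
    """One temporally-local update: forward the recurrence, then apply a
    local learning rule with no backpropagation through time."""
    global h, elig_B, pre_tr, post_tr, W_B
    delta = np.log1p(np.exp(W_delta @ x))      # input-dependent (selective) step size
    a_bar = np.exp(-np.exp(A_log) * delta)     # discretised diagonal transition
    h     = a_bar * h + delta * (W_B @ x)      # selective SSM recurrence
    y     = W_C @ h                            # scalar readout

    # RTRL for a diagonal recurrence: the eligibility of W_B decays with a_bar
    # and accumulates the instantaneous sensitivity delta * x.
    elig_B = a_bar[:, None] * elig_B + delta[:, None] * x[None, :]

    # STDP-like exponential traces of pre- and post-synaptic activity.
    pre_tr  = tau_pre  * pre_tr  + x
    post_tr = tau_post * post_tr + h

    # Three-factor style update: a global error gates the RTRL eligibility
    # plus a small Hebbian pre*post term.
    err   = y - target
    local = elig_B * W_C[:, None] + 0.1 * np.outer(post_tr, pre_tr)
    W_B  -= lr * err * local
    return y

# Toy usage: regress a leaky running sum of the first input channel.
total = 0.0
for t in range(200):
    x = rng.normal(size=d_in)
    total = 0.95 * total + x[0]
    step(x, total)
```

The key property the sketch illustrates is temporal locality: every quantity used in the weight update is available at the current time step, so no unrolling over past states is required.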
Related papers
- Memory Networks: Towards Fully Biologically Plausible Learning [2.7013801448234367]
Current artificial neural networks rely on techniques like backpropagation and weight sharing, which do not align with the brain's natural information processing methods.
We propose the Memory Network, a model inspired by biological principles that avoids backpropagation and convolutions, and operates in a single pass.
arXiv Detail & Related papers (2024-09-18T06:01:35Z) - Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z) - Spatio-temporal Structure of Excitation and Inhibition Emerges in Spiking Neural Networks with and without Biologically Plausible Constraints [0.06752396542927405]
We present a Spiking Neural Network (SNN) model that incorporates learnable synaptic delays.
We implement a dynamic pruning strategy that combines DEEP R for connection removal and RigL for connection reintroduction.
We observed that spatio-temporal patterns of excitation and inhibition emerged in the more biologically plausible model as well.
arXiv Detail & Related papers (2024-07-07T11:55:48Z) - Brain-Inspired Machine Intelligence: A Survey of
Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z) - Spiking Neural Networks and Bio-Inspired Supervised Deep Learning: A
Survey [9.284385189718236]
This survey reviews Bio-Inspired Deep Learning approaches aimed at advancing the computational capabilities and biological plausibility of current models.
Recent bio-inspired training methods pose themselves as alternatives to backprop, both for traditional and spiking networks.
arXiv Detail & Related papers (2023-07-30T13:57:25Z) - TS-MoCo: Time-Series Momentum Contrast for Self-Supervised Physiological
Representation Learning [8.129782272731397]
We propose a novel encoding framework that relies on self-supervised learning with momentum contrast to learn representations from various physiological domains without needing labels.
We show that our self-supervised learning approach can indeed learn discriminative features which can be exploited in downstream classification tasks.
arXiv Detail & Related papers (2023-06-10T21:17:42Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - ETLP: Event-based Three-factor Local Plasticity for online learning with
neuromorphic hardware [105.54048699217668]
We show competitive accuracy with a clear advantage in computational complexity for Event-Based Three-factor Local Plasticity (ETLP).
We also show that when using local plasticity, threshold adaptation in spiking neurons and a recurrent topology are necessary to learn temporal patterns with a rich temporal structure.
arXiv Detail & Related papers (2023-01-19T19:45:42Z) - BioLeaF: A Bio-plausible Learning Framework for Training of Spiking
Neural Networks [4.698975219970009]
We propose a new bio-plausible learning framework consisting of two components: a new architecture, and its supporting learning rules.
Under our microcircuit architecture, we employ the Spike-Timing-Dependent-Plasticity (STDP) rule operating in local compartments to update synaptic weights.
Our experiments show that the proposed framework demonstrates learning accuracy comparable to BP-based rules.
arXiv Detail & Related papers (2021-11-14T10:32:22Z) - Deep Bayesian Active Learning for Accelerating Stochastic Simulation [74.58219903138301]
Interactive Neural Process (INP) is a deep Bayesian active learning framework for stochastic simulations.
For active learning, we propose a novel acquisition function, Latent Information Gain (LIG), calculated in the latent space of NP-based models.
The results demonstrate that STNP outperforms the baselines in the learning setting and that LIG achieves state-of-the-art performance for active learning.
arXiv Detail & Related papers (2021-06-05T01:31:51Z) - Recurrent Neural Network Learning of Performance and Intrinsic
Population Dynamics from Sparse Neural Data [77.92736596690297]
We introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics.
We test the proposed method by training an RNN to simultaneously reproduce internal dynamics and output signals of a physiologically-inspired neural model.
Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons.
arXiv Detail & Related papers (2020-05-05T14:16:54Z)