Sequence Learning and Consolidation on Loihi using On-chip Plasticity
- URL: http://arxiv.org/abs/2205.00643v1
- Date: Mon, 2 May 2022 04:18:50 GMT
- Title: Sequence Learning and Consolidation on Loihi using On-chip Plasticity
- Authors: Jack Lindsey, James B Aimone
- Abstract summary: We develop a model of predictive learning on neuromorphic hardware using the on-chip plasticity capabilities of the Loihi chip.
Our model serves as a proof-of-concept that online predictive learning models can be deployed on neuromorphic hardware with on-chip plasticity.
- Score: 6.9597705368779925
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work we develop a model of predictive learning on neuromorphic
hardware. Our model uses the on-chip plasticity capabilities of the Loihi chip
to remember observed sequences of events and use this memory to generate
predictions of future events in real time. Given the locality constraints of
on-chip plasticity rules, generating predictions without interfering with the
ongoing learning process is nontrivial. We address this challenge with a memory
consolidation approach inspired by hippocampal replay. Sequence memory is
stored in an initial memory module using spike-timing dependent plasticity.
Later, during an offline period, memories are consolidated into a distinct
prediction module. This second module is then able to represent predicted
future events without interfering with the activity, and plasticity, in the
first module, enabling online comparison between predictions and ground-truth
observations. Our model serves as a proof-of-concept that online predictive
learning models can be deployed on neuromorphic hardware with on-chip
plasticity.
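The two-module scheme described in the abstract can be made concrete with a small simulation. Below is a minimal NumPy sketch, not the authors' Loihi implementation: the one-neuron-per-event coding, the trace-based pre-before-post STDP rule, the consolidation-as-weight-copy step, and all constants are illustrative assumptions, used only to show how learning and prediction can be kept in separate modules.

```python
import numpy as np

# Illustrative constants (not taken from the paper).
N = 32          # neurons per module; one neuron codes one event symbol
A_PLUS = 0.05   # potentiation for pre-before-post spike pairings
DECAY = 0.9     # per-step decay of the presynaptic STDP trace


class MemoryModule:
    """Learns observed sequences with a trace-based, pre-before-post STDP rule."""

    def __init__(self, n):
        self.w = np.zeros((n, n))   # w[i, j]: synapse from neuron i to neuron j
        self.trace = np.zeros(n)    # decaying record of recent presynaptic spikes

    def observe(self, event):
        post = np.zeros(self.w.shape[0])
        post[event] = 1.0
        # Potentiate synapses from recently active neurons onto the spiking neuron.
        self.w += A_PLUS * np.outer(self.trace, post)
        # Update traces after the pairing so the current spike can drive later pairings.
        self.trace = DECAY * self.trace + post


class PredictionModule:
    """Holds consolidated weights and predicts without touching the memory module."""

    def __init__(self, n):
        self.w = np.zeros((n, n))

    def predict_next(self, event):
        return int(np.argmax(self.w[event]))


def consolidate(memory, prediction):
    # Offline, replay-like transfer: hand the learned transition structure to the
    # prediction module so online prediction does not interfere with ongoing
    # plasticity in the memory module.
    prediction.w = memory.w.copy()


if __name__ == "__main__":
    sequence = [3, 7, 1, 4]          # repeating sequence of event symbols
    mem, pred = MemoryModule(N), PredictionModule(N)
    for _ in range(5):               # "online" phase: observe the sequence
        for ev in sequence:
            mem.observe(ev)
    consolidate(mem, pred)           # "offline" phase
    for prev, truth in zip(sequence, sequence[1:] + sequence[:1]):
        print(f"{prev} -> predicted {pred.predict_next(prev)}, observed {truth}")
```

In this sketch, consolidation is a plain weight copy; the paper instead describes an offline, replay-inspired consolidation phase that operates under Loihi's local on-chip plasticity constraints.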
Related papers
- Predictive Attractor Models [9.947717243638289]
We propose Predictive Attractor Models (PAM), a novel sequence memory architecture with desirable generative properties.
PAM avoids catastrophic forgetting by uniquely representing past context through lateral inhibition in cortical minicolumns.
We show that PAM is trained with local computations through Hebbian plasticity rules in a biologically plausible framework.
arXiv Detail & Related papers (2024-10-03T12:25:01Z)
- Causal Estimation of Memorisation Profiles [58.20086589761273]
Understanding memorisation in language models has practical and societal implications.
Memorisation is the causal effect of training with an instance on the model's ability to predict that instance.
This paper proposes a new, principled, and efficient method to estimate memorisation based on the difference-in-differences design from econometrics.
arXiv Detail & Related papers (2024-06-06T17:59:09Z)
- Humanoid Locomotion as Next Token Prediction [84.21335675130021]
Our model is a causal transformer trained via autoregressive prediction of sensorimotor trajectories.
We show that our model enables a full-sized humanoid to walk in San Francisco zero-shot.
Our model can transfer to the real world even when trained on only 27 hours of walking data, and can generalize to commands not seen during training, such as walking backward.
arXiv Detail & Related papers (2024-02-29T18:57:37Z)
- Memory-and-Anticipation Transformer for Online Action Understanding [52.24561192781971]
We propose a novel memory-anticipation-based paradigm to model an entire temporal structure, including the past, present, and future.
We present Memory-and-Anticipation Transformer (MAT), a memory-anticipation-based approach, to address the online action detection and anticipation tasks.
arXiv Detail & Related papers (2023-08-15T17:34:54Z)
- Sequential Memory with Temporal Predictive Coding [6.228559238589584]
We propose a predictive coding (PC) based model for sequential memory, called temporal predictive coding (tPC).
We show that our tPC models can memorize and retrieve sequential inputs accurately with a biologically plausible neural implementation.
arXiv Detail & Related papers (2023-05-19T20:03:31Z)
- Hebbian and Gradient-based Plasticity Enables Robust Memory and Rapid Learning in RNNs [13.250455334302288]
Evidence suggests that synaptic plasticity plays a critical role in memory formation and fast learning.
We equip Recurrent Neural Networks with plasticity rules to enable them to adapt their parameters according to ongoing experiences.
Our models show promising results on sequential and associative memory tasks, illustrating their ability to robustly form and retain memories.
arXiv Detail & Related papers (2023-02-07T03:42:42Z)
- A Memory Transformer Network for Incremental Learning [64.0410375349852]
We study class-incremental learning, a training setup in which new classes of data are observed over time for the model to learn from.
Despite the straightforward problem formulation, the naive application of classification models to class-incremental learning results in the "catastrophic forgetting" of previously seen classes.
One of the most successful existing methods is the use of a memory of exemplars, which overcomes catastrophic forgetting by saving a subset of past data in a memory bank and using it to prevent forgetting when training on future tasks.
arXiv Detail & Related papers (2022-10-10T08:27:28Z)
- Automatic Recall Machines: Internal Replay, Continual Learning and the Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead, the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z)
- GAN Memory with No Forgetting [71.59992224279651]
We propose a GAN memory for lifelong learning, which is capable of remembering a stream of datasets via generative processes.
Our GAN memory is based on recognizing that one can modulate the "style" of a GAN model to form perceptually-distant targeted generation.
arXiv Detail & Related papers (2020-06-13T03:19:54Z)
- MANTRA: Memory Augmented Networks for Multiple Trajectory Prediction [26.151761714896118]
We address the problem of multimodal trajectory prediction exploiting a Memory Augmented Neural Network.
Our method learns past and future trajectory embeddings using recurrent neural networks and exploits an associative external memory to store and retrieve such embeddings.
Trajectory prediction is then performed by decoding in-memory future encodings conditioned with the observed past.
arXiv Detail & Related papers (2020-06-05T09:49:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.