Context selectivity with dynamic availability enables lifelong continual learning
- URL: http://arxiv.org/abs/2306.01690v2
- Date: Thu, 25 Jan 2024 12:41:44 GMT
- Title: Context selectivity with dynamic availability enables lifelong continual learning
- Authors: Martin Barry, Wulfram Gerstner, Guillaume Bellec
- Abstract summary: The brain is able to learn complex skills, stop the practice for years, learn other skills in between, and still retrieve the original knowledge when necessary.
We suggest a bio-plausible meta-plasticity rule building on classical work in lifelong learning.
We show that neuron selectivity and neuron-wide consolidation form a simple and viable meta-plasticity hypothesis to enable CL in the brain.
- Score: 8.828725115733986
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: "You never forget how to ride a bike", -- but how is that possible? The brain
is able to learn complex skills, stop the practice for years, learn other
skills in between, and still retrieve the original knowledge when necessary.
The mechanisms of this capability, referred to as lifelong learning (or
continual learning, CL), are unknown. We suggest a bio-plausible
meta-plasticity rule building on classical work in CL which we summarize in two
principles: (i) neurons are context selective, and (ii) a local availability
variable partially freezes the plasticity if the neuron was relevant for
previous tasks. In a new neuro-centric formalization of these principles, we
suggest that neuron selectivity and neuron-wide consolidation form a simple
and viable meta-plasticity hypothesis to enable CL in the brain. In
simulation, this simple model balances forgetting and consolidation, leading to better
transfer learning than contemporary CL algorithms on image recognition and
natural language processing CL benchmarks.
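For readers who want to see the two principles in code, here is a minimal numpy sketch. The gating scheme, the availability dynamics, and all names (`context_gate`, `consumption`) are illustrative assumptions for this sketch, not the paper's actual rule:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_inputs = 16, 8
W = rng.normal(0.0, 0.1, size=(n_neurons, n_inputs))

# Principle (i): neurons are context selective -- here, each task
# activates a fixed pseudo-random subset of neurons (illustrative choice).
def context_gate(task_id, n_neurons, frac=0.5):
    task_rng = np.random.default_rng(task_id)
    return (task_rng.random(n_neurons) < frac).astype(float)

# Principle (ii): a local availability variable in [0, 1] scales plasticity
# and shrinks when the neuron is used, partially freezing its weights.
availability = np.ones(n_neurons)

def plastic_update(W, grad, gate, availability, lr=0.1, consumption=0.05):
    effective_lr = lr * gate * availability   # per-neuron learning rate
    W = W - effective_lr[:, None] * grad      # gated gradient step
    availability = availability * (1.0 - consumption * gate)  # consume availability
    return W, availability

for task_id in range(3):
    gate = context_gate(task_id, n_neurons)
    for _ in range(20):
        grad = rng.normal(size=W.shape)       # stand-in for a real task gradient
        W, availability = plastic_update(W, grad, gate, availability)
    print(f"after task {task_id}: mean availability = {availability.mean():.3f}")
```

Under this toy dynamics, neurons recruited for many tasks end up with low availability and effectively frozen weights, which is how consolidation would trade off against new learning.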
Related papers
- Semi-parametric Memory Consolidation: Towards Brain-like Deep Continual Learning [59.35015431695172]
We propose a novel biomimetic continual learning framework that integrates semi-parametric memory and the wake-sleep consolidation mechanism.
For the first time, our method enables deep neural networks to retain high performance on novel tasks while maintaining prior knowledge in challenging real-world continual learning scenarios.
arXiv Detail & Related papers (2025-04-20T19:53:13Z) - Hybrid Learners Do Not Forget: A Brain-Inspired Neuro-Symbolic Approach to Continual Learning [20.206972068340843]
Continual learning is crucial for creating AI agents that can learn and improve themselves autonomously.
Inspired by the two distinct systems in the human brain, we propose a Neuro-Symbolic Brain-Inspired Continual Learning framework.
arXiv Detail & Related papers (2025-03-16T20:09:19Z) - Neural Learning Rules from Associative Networks Theory [0.0]
Associative networks theory provides tools for interpreting the update rules of artificial neural networks.
However, deriving neural learning rules from a solid theory remains a fundamental challenge.
arXiv Detail & Related papers (2025-03-11T11:44:04Z) - Brain-like Functional Organization within Large Language Models [58.93629121400745]
The human brain has long inspired the pursuit of artificial intelligence (AI).
Recent neuroimaging studies provide compelling evidence of alignment between the computational representation of artificial neural networks (ANNs) and the neural responses of the human brain to stimuli.
In this study, we bridge this gap by directly coupling sub-groups of artificial neurons with functional brain networks (FBNs).
This framework links the AN sub-groups to FBNs, enabling the delineation of brain-like functional organization within large language models (LLMs).
arXiv Detail & Related papers (2024-10-25T13:15:17Z) - Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically-informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration and complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z) - CHANI: Correlation-based Hawkes Aggregation of Neurons with bio-Inspiration [7.26259898628108]
The present work aims to prove mathematically that a neural network inspired by biology can learn a classification task using only local transformations.
We propose a spiking neural network named CHANI, whose neurons' activity is modeled by Hawkes processes.
arXiv Detail & Related papers (2024-05-29T07:17:58Z) - Curriculum effects and compositionality emerge with in-context learning in neural networks [15.744573869783972]
- Curriculum effects and compositionality emerge with in-context learning in neural networks [15.744573869783972]
We show that networks capable of "in-context learning" (ICL) can reproduce human-like learning and compositional behavior on rule-governed tasks.
Our work shows how emergent ICL can equip neural networks with fundamentally different learning properties than those traditionally attributed to them.
arXiv Detail & Related papers (2024-02-13T18:55:27Z) - Brain-Inspired Machine Intelligence: A Survey of
Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z) - Flexible Phase Dynamics for Bio-Plausible Contrastive Learning [11.179324762680297]
Contrastive Learning (CL) algorithms are traditionally implemented with rigid, temporally non-local, and periodic learning dynamics.
This study builds on work exploring how CL might be implemented by biological or neuromorphic systems.
We show that this form of learning can be made temporally local, and can still function even if many of the dynamical requirements of standard training procedures are relaxed.
arXiv Detail & Related papers (2023-02-24T03:18:45Z) - Bio-Inspired, Task-Free Continual Learning through Activity
Regularization [3.5502600490147196]
Continual learning approaches usually require discrete task boundaries.
We take inspiration from neuroscience, where sparse, non-overlapping neuronal representations have been suggested to prevent forgetting.
In addition to sparsity, we introduce lateral recurrent connections within each layer to further protect previously learned representations.
Our method achieves similar performance to well-known CL methods, such as Elastic Weight Consolidation and Synaptic Intelligence, without requiring information about task boundaries.
arXiv Detail & Related papers (2022-12-08T15:14:20Z) - The least-control principle for learning at equilibrium [65.2998274413952]
- The least-control principle for learning at equilibrium [65.2998274413952]
We present a new principle for learning that applies to equilibrium recurrent neural networks, deep equilibrium models, and meta-learning.
Our results shed light on how the brain might learn and offer new ways of approaching a broad class of machine learning problems.
arXiv Detail & Related papers (2022-07-04T11:27:08Z) - Continuous Learning and Adaptation with Membrane Potential and
Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
arXiv Detail & Related papers (2021-04-22T04:01:32Z) - Adaptive Rational Activations to Boost Deep Reinforcement Learning [68.10769262901003]
- Adaptive Rational Activations to Boost Deep Reinforcement Learning [68.10769262901003]
We motivate why rational functions are suitable as adaptable activation functions and why their inclusion in neural networks is crucial.
We demonstrate that equipping popular algorithms with (recurrent-)rational activations leads to consistent improvements on Atari games.
arXiv Detail & Related papers (2021-02-18T14:53:12Z)