Bayesian Metaplasticity from Synaptic Uncertainty
- URL: http://arxiv.org/abs/2312.10153v1
- Date: Fri, 15 Dec 2023 19:06:10 GMT
- Title: Bayesian Metaplasticity from Synaptic Uncertainty
- Authors: Djohan Bonnet, Tifenn Hirtzlin, Tarcisius Januel, Thomas Dalgaty,
Damien Querlioz, Elisa Vianello
- Abstract summary: We introduce MEtaplasticity from Synaptic Uncertainty (MESU), inspired by metaplasticity and Bayesian inference principles.
MESU harnesses synaptic uncertainty to retain information over time, with its update rule closely approximating the diagonal Newton's method for synaptic updates.
We demonstrate MESU's remarkable capability to maintain learning performance across 100 tasks without the need for explicit task boundaries.
- Score: 0.9786690381850356
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Catastrophic forgetting remains a challenge for neural networks, especially
in lifelong learning scenarios. In this study, we introduce MEtaplasticity from
Synaptic Uncertainty (MESU), inspired by metaplasticity and Bayesian inference
principles. MESU harnesses synaptic uncertainty to retain information over
time, with its update rule closely approximating the diagonal Newton's method
for synaptic updates. Through continual learning experiments on permuted MNIST
tasks, we demonstrate MESU's remarkable capability to maintain learning
performance across 100 tasks without the need for explicit task boundaries.
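To make the abstract's mechanism concrete, below is a minimal Python sketch of a variance-modulated synaptic update in the spirit of MESU, assuming each weight keeps a Gaussian posterior whose variance scales the gradient step, so that 1/sigma^2 plays the role of the diagonal curvature in a Newton-like update. The function name, the specific variance dynamics, and the constants (sigma_prior, horizon) are illustrative assumptions, not the update rule published in the paper.

```python
import numpy as np

# Illustrative sketch only (not the authors' exact rule): each synapse keeps a
# Gaussian posterior N(mu, sigma^2). Scaling the gradient step by sigma^2
# approximates a diagonal Newton step, since 1/sigma^2 tracks the local loss
# curvature; well-consolidated (low-variance) synapses therefore move little,
# which is the metaplasticity effect the abstract describes.

def variance_modulated_update(mu, sigma, grad, lr=1.0, sigma_prior=0.1, horizon=1e4):
    """One update for vectors of per-synapse posterior means and std deviations.

    mu, sigma   : posterior mean and standard deviation per synapse
    grad        : gradient of the loss w.r.t. the sampled weights
    sigma_prior : prior standard deviation (hypothetical constant)
    horizon     : assumed memory horizon that slowly relaxes sigma to the prior
    """
    # Mean update: gradient step scaled by per-synapse variance (Newton-like).
    mu_new = mu - lr * sigma ** 2 * grad

    # Variance update (hypothetical form): contract where the curvature proxy
    # grad^2 is large, and relax slowly toward the prior so that plasticity
    # can recover once a synapse stops being stressed by new data.
    sigma_new = sigma - lr * (
        0.5 * sigma ** 3 * grad ** 2        # consolidation: shrink uncertainty
        - (sigma_prior - sigma) / horizon   # slow relaxation toward the prior
    )
    return mu_new, np.clip(sigma_new, 1e-6, sigma_prior)


# Toy usage: fit a quadratic loss L(w) = 0.5 * ||w - target||^2 with sampled weights.
rng = np.random.default_rng(0)
mu, sigma = np.zeros(4), np.full(4, 0.1)
target = rng.normal(size=4)
for _ in range(200):
    w = mu + sigma * rng.standard_normal(4)  # reparameterized weight sample
    grad = w - target                        # dL/dw for the toy loss
    mu, sigma = variance_modulated_update(mu, sigma, grad)
```

A full implementation would sample weights inside a network's forward pass and use the paper's exact variance dynamics; the sketch only shows why variance-scaled steps behave like a diagonal Newton method and protect consolidated synapses.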
Related papers
- Semi-parametric Memory Consolidation: Towards Brain-like Deep Continual Learning [59.35015431695172]
We propose a novel biomimetic continual learning framework that integrates semi-parametric memory and the wake-sleep consolidation mechanism.
For the first time, our method enables deep neural networks to retain high performance on novel tasks while maintaining prior knowledge in challenging real-world continual learning scenarios.
arXiv Detail & Related papers (2025-04-20T19:53:13Z)
- Bayesian continual learning and forgetting in neural networks [0.8795040582681392]
We introduce Metaplasticity from Synaptic Uncertainty (MESU).
MESU is a Bayesian framework that updates network parameters according to their uncertainty.
Our results unify ideas from metaplasticity, Bayesian inference, and Hessian-based regularization.
arXiv Detail & Related papers (2025-04-18T09:11:34Z)
- Temporal-Difference Variational Continual Learning [89.32940051152782]
A crucial capability of Machine Learning models in real-world applications is the ability to continuously learn new tasks.
In Continual Learning settings, models often struggle to balance learning new tasks with retaining previous knowledge.
We propose new learning objectives that integrate the regularization effects of multiple previous posterior estimations.
arXiv Detail & Related papers (2024-10-10T10:58:41Z)
- TACOS: Task Agnostic Continual Learning in Spiking Neural Networks [1.703671463296347]
Catastrophic interference, the loss of previously learned information when learning new information, remains a major challenge in machine learning.
We show that neuro-inspired mechanisms such as synaptic consolidation and metaplasticity can mitigate catastrophic interference in a spiking neural network.
Our model, TACOS, combines neuromodulation with complex synaptic dynamics to enable new learning while protecting previous information.
arXiv Detail & Related papers (2024-08-16T15:42:16Z)
- Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097]
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce BAdam, a novel prior-based method that better constrains parameter growth, reducing catastrophic forgetting.
Results show that BAdam achieves state-of-the-art performance for prior-based methods on challenging single-headed class-incremental experiments.
arXiv Detail & Related papers (2023-09-15T17:10:51Z)
- IF2Net: Innately Forgetting-Free Networks for Continual Learning [49.57495829364827]
Continual learning can incrementally absorb new concepts without interfering with previously learned knowledge.
Motivated by the characteristics of neural networks, we investigated how to design an Innately Forgetting-Free Network (IF2Net).
IF2Net allows a single network to inherently learn unlimited mapping rules without being told task identities at test time.
arXiv Detail & Related papers (2023-06-18T05:26:49Z)
- Bayesian Continual Learning via Spiking Neural Networks [38.518936229794214]
We take steps towards the design of neuromorphic systems that are capable of adaptation to changing learning tasks.
We derive online learning rules for spiking neural networks (SNNs) within a Bayesian continual learning framework.
We instantiate the proposed approach for both real-valued and binary synaptic weights.
arXiv Detail & Related papers (2022-08-29T17:11:14Z)
- The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
arXiv Detail & Related papers (2022-05-20T10:10:32Z)
- Sparsity and Heterogeneous Dropout for Continual Learning in the Null Space of Neural Activations [36.24028295650668]
Continual/lifelong learning from a non-stationary input data stream is a cornerstone of intelligence.
Deep neural networks are prone to forgetting their previously learned information upon learning new ones.
Overcoming catastrophic forgetting in deep neural networks has become an active field of research in recent years.
arXiv Detail & Related papers (2022-03-12T21:12:41Z)
- Learning Bayesian Sparse Networks with Full Experience Replay for Continual Learning [54.7584721943286]
Continual Learning (CL) methods aim to enable machine learning models to learn new tasks without catastrophic forgetting of those that have been previously mastered.
Existing CL approaches often keep a buffer of previously-seen samples, perform knowledge distillation, or use regularization techniques towards this goal.
We propose to only activate and select sparse neurons for learning current and past tasks at any stage.
arXiv Detail & Related papers (2022-02-21T13:25:03Z)
- SpikePropamine: Differentiable Plasticity in Spiking Neural Networks [0.0]
We introduce a framework for learning the dynamics of synaptic plasticity and neuromodulated synaptic plasticity in Spiking Neural Networks (SNNs).
We show that SNNs augmented with differentiable plasticity are sufficient for solving a set of challenging temporal learning tasks.
These networks are also shown to be capable of producing locomotion on a high-dimensional robotic learning task.
arXiv Detail & Related papers (2021-06-04T19:29:07Z)
- Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks learn some advantages from "natural" neural networks.
ANV acts as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible costs.
arXiv Detail & Related papers (2020-11-12T06:06:33Z)
- Enabling Continual Learning with Differentiable Hebbian Plasticity [18.12749708143404]
Continual learning is the problem of sequentially learning new tasks or knowledge while protecting previously acquired knowledge.
Catastrophic forgetting poses a grand challenge for neural networks performing such a learning process.
We propose a Differentiable Hebbian Consolidation model built on Differentiable Hebbian Plasticity.
arXiv Detail & Related papers (2020-06-30T06:42:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.