How Weight Resampling and Optimizers Shape the Dynamics of Continual Learning and Forgetting in Neural Networks
- URL: http://arxiv.org/abs/2507.01559v1
- Date: Wed, 02 Jul 2025 10:18:35 GMT
- Title: How Weight Resampling and Optimizers Shape the Dynamics of Continual Learning and Forgetting in Neural Networks
- Authors: Lapo Frati, Neil Traft, Jeff Clune, Nick Cheney
- Abstract summary: Recent work in continual learning has highlighted the beneficial effect of resampling weights in the last layer of a neural network (zapping). We investigate in detail the patterns of learning and forgetting that take place inside a convolutional neural network when trained in challenging settings.
- Score: 2.270857464465579
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recent work in continual learning has highlighted the beneficial effect of resampling weights in the last layer of a neural network ("zapping"). Although empirical results demonstrate the effectiveness of this approach, the underlying mechanisms that drive these improvements remain unclear. In this work, we investigate in detail the patterns of learning and forgetting that take place inside a convolutional neural network when trained in challenging settings such as continual learning and few-shot transfer learning, with handwritten characters and natural images. Our experiments show that models that have undergone zapping during training recover more quickly from the shock of transferring to a new domain. Furthermore, to better observe the effect of continual learning in a multi-task setting, we measure how each individual task is affected. This shows that not only zapping but also the choice of optimizer can deeply affect the dynamics of learning and forgetting, causing complex patterns of synergy/interference between tasks to emerge when the model learns sequentially at transfer time.
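The abstract describes zapping only at a high level: resampling the last layer's weights during training. As a minimal sketch of that idea, the PyTorch-style snippet below reinitializes a network's final linear layer between tasks; the layer-selection logic, the init scheme, and the task loop are illustrative assumptions, not the authors' implementation.

```python
import torch.nn as nn

def zap_last_layer(model: nn.Module) -> None:
    """Resample (reinitialize) the weights of the model's final linear layer."""
    last_linear = None
    for module in model.modules():
        if isinstance(module, nn.Linear):
            last_linear = module  # remember the last Linear layer seen
    if last_linear is None:
        raise ValueError("model has no nn.Linear layer to zap")
    # Draw fresh random weights (mirrors PyTorch's default Linear init).
    nn.init.kaiming_uniform_(last_linear.weight, a=5 ** 0.5)
    if last_linear.bias is not None:
        nn.init.zeros_(last_linear.bias)

# A toy CNN; zapping is applied before each new task in a continual loop.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Flatten(),
    nn.Linear(16 * 28 * 28, 10),
)
for task_id in range(5):   # hypothetical sequence of tasks
    zap_last_layer(model)  # resample the output head before the next task
    # ... train on task `task_id` with the optimizer of choice here ...
```

Since the abstract also reports that optimizer choice deeply affects learning/forgetting dynamics, the elided training step above is where, e.g., SGD and Adam would produce different per-task interference patterns.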
Related papers
- Long-Tailed Data Classification by Increasing and Decreasing Neurons During Training [4.32776344138537]
Real-world datasets often exhibit class imbalance, where certain classes have far fewer samples than others. We propose a method that periodically adds and removes neurons during training, thereby boosting representational power for minority classes. Our results underscore the effectiveness of dynamic, biologically inspired network designs in improving performance on class-imbalanced data.
arXiv Detail & Related papers (2025-07-14T05:29:16Z)
- The Importance of Being Lazy: Scaling Limits of Continual Learning [60.97756735877614]
We show that increasing model width is only beneficial when it reduces the amount of feature learning, yielding more laziness. We study the intricate relationship between feature learning, task non-stationarity, and forgetting, finding that high feature learning is only beneficial with highly similar tasks.
arXiv Detail & Related papers (2025-06-20T10:12:38Z)
- The Butterfly Effect: Neural Network Training Trajectories Are Highly Sensitive to Initial Conditions [51.68215326304272]
We show that even small perturbations reliably cause otherwise identical training trajectories to diverge, an effect that diminishes rapidly over training time. Our findings provide insights into neural network training stability, with practical implications for fine-tuning, model merging, and the diversity of model ensembles.
arXiv Detail & Related papers (2025-06-16T08:35:16Z)
- Look-Ahead Selective Plasticity for Continual Learning of Visual Tasks [9.82510084910641]
We propose a new mechanism that takes effect at task boundaries, i.e., when one task finishes and another starts.
We evaluate the proposed methods on benchmark computer vision datasets including CIFAR-10 and TinyImageNet.
arXiv Detail & Related papers (2023-11-02T22:00:23Z)
- IF2Net: Innately Forgetting-Free Networks for Continual Learning [49.57495829364827]
Continual learning aims to incrementally absorb new concepts without interfering with previously learned knowledge.
Motivated by the characteristics of neural networks, we investigated how to design an Innately Forgetting-Free Network (IF2Net).
IF2Net allows a single network to inherently learn unlimited mapping rules without being told task identities at test time.
arXiv Detail & Related papers (2023-06-18T05:26:49Z)
- Learning to Modulate Random Weights: Neuromodulation-inspired Neural Networks For Efficient Continual Learning [1.9580473532948401]
We introduce a novel neural network architecture inspired by neuromodulation in biological nervous systems.
We show that this approach has strong learning performance per task despite the very small number of learnable parameters.
arXiv Detail & Related papers (2022-04-08T21:12:13Z)
- Sparsity and Heterogeneous Dropout for Continual Learning in the Null Space of Neural Activations [36.24028295650668]
Continual/lifelong learning from a non-stationary input data stream is a cornerstone of intelligence.
Deep neural networks are prone to forgetting previously learned information upon learning new information.
Overcoming catastrophic forgetting in deep neural networks has become an active field of research in recent years.
arXiv Detail & Related papers (2022-03-12T21:12:41Z)
- Continual Learning Beyond a Single Model [28.130513524601145]
We show that employing ensemble models can be a simple yet effective method to improve continual learning performance.
We propose a computationally cheap algorithm with similar runtime to a single model yet enjoying the performance benefits of ensembles.
arXiv Detail & Related papers (2022-02-20T14:30:39Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters can be suitable resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
arXiv Detail & Related papers (2021-09-20T15:12:16Z)
- Adaptive Reinforcement Learning through Evolving Self-Modifying Neural Networks [0.0]
Current methods in Reinforcement Learning (RL) only adjust to new interactions after reflection over a specified time interval.
Recent work addressing this by endowing artificial neural networks with neuromodulated plasticity has been shown to improve performance on simple RL tasks trained using backpropagation.
Here we study the problem of meta-learning in a challenging quadruped domain, where each leg of the quadruped has a chance of becoming unusable.
Results demonstrate that agents evolved using self-modifying plastic networks are more capable of adapting to complex meta-learning tasks, even outperforming the same network updated using gradient descent.
arXiv Detail & Related papers (2020-05-22T02:24:44Z)
- The large learning rate phase of deep learning: the catapult mechanism [50.23041928811575]
We present a class of neural networks with solvable training dynamics.
We find good agreement between our model's predictions and training dynamics in realistic deep learning settings.
We believe our results shed light on characteristics of models trained at different learning rates.
arXiv Detail & Related papers (2020-03-04T17:52:48Z)