Do Neural Networks Lose Plasticity in a Gradually Changing World?
- URL: http://arxiv.org/abs/2602.09234v1
- Date: Mon, 09 Feb 2026 22:01:50 GMT
- Title: Do Neural Networks Lose Plasticity in a Gradually Changing World?
- Authors: Tianhui Liu, Lili Mou
- Abstract summary: Loss of plasticity refers to neural networks gradually losing the ability to learn new tasks. We investigate a gradually changing environment, which we simulate by input/output interpolation and task sampling. We show that the loss of plasticity is an artifact of abrupt task changes in the environment and can be largely mitigated if the world changes gradually.
- Score: 21.869234048975073
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Continual learning has become a trending topic in machine learning. Recent studies have discovered an interesting phenomenon called loss of plasticity, referring to neural networks gradually losing the ability to learn new tasks. However, existing plasticity research largely relies on contrived settings with abrupt task transitions, which often do not reflect real-world environments. In this paper, we propose to investigate a gradually changing environment, and we simulate this by input/output interpolation and task sampling. We perform theoretical and empirical analysis, showing that the loss of plasticity is an artifact of abrupt task changes in the environment and can be largely mitigated if the world changes gradually.
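The abstract describes simulating a gradually changing environment via input/output interpolation between tasks. The sketch below illustrates one plausible way to build such a stream, assuming a permuted-MNIST-style setup with linear interpolation between the inputs of consecutive tasks; the function names, schedule, and interpolation choice are illustrative assumptions, not the paper's exact protocol.

```python
# Minimal sketch of a gradually drifting task stream (assumed setup,
# not the authors' exact procedure).
import numpy as np

rng = np.random.default_rng(0)

def make_permutation(n_pixels: int) -> np.ndarray:
    """Sample a fixed input permutation that defines one task."""
    return rng.permutation(n_pixels)

def interpolate_inputs(x: np.ndarray, perm_a: np.ndarray, perm_b: np.ndarray,
                       alpha: float) -> np.ndarray:
    """Blend the inputs of two tasks: alpha=0 gives task A, alpha=1 gives task B."""
    return (1.0 - alpha) * x[:, perm_a] + alpha * x[:, perm_b]

def gradual_task_stream(x: np.ndarray, n_tasks: int, steps_per_transition: int):
    """Yield minibatch inputs whose underlying task changes gradually."""
    perms = [make_permutation(x.shape[1]) for _ in range(n_tasks)]
    for t in range(n_tasks - 1):
        for s in range(steps_per_transition):
            alpha = s / steps_per_transition  # slowly move from task t to t+1
            yield interpolate_inputs(x, perms[t], perms[t + 1], alpha)
```

Setting `steps_per_transition = 1` recovers the usual abrupt-switch setting, so the same harness can compare abrupt and gradual regimes.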
Related papers
- New Evidence of the Two-Phase Learning Dynamics of Neural Networks [59.55028392232715]
We introduce an interval-wise perspective that compares network states across a time window. We show that the response of the network to a perturbation exhibits a transition from chaotic to stable. We also find that after this transition point the model's functional trajectory is confined to a narrow cone-shaped subset.
arXiv Detail & Related papers (2025-05-20T04:03:52Z) - Plasticine: Accelerating Research in Plasticity-Motivated Deep Reinforcement Learning [122.67854581396578]
Plasticine is an open-source framework for benchmarking plasticity optimization in deep reinforcement learning. Plasticine provides single-file implementations of over 13 mitigation methods, 10 evaluation metrics, and learning scenarios.
arXiv Detail & Related papers (2025-04-24T12:32:13Z) - Plasticity Loss in Deep Reinforcement Learning: A Survey [15.525552360867367]
Plasticity is crucial for deep reinforcement learning (RL) agents.
Once plasticity is lost, an agent's performance will plateau because it cannot improve its policy to account for changes in the data distribution.
Loss of plasticity can be connected to many other issues plaguing deep RL, such as training instabilities, scaling failures, overestimation bias, and insufficient exploration.
arXiv Detail & Related papers (2024-11-07T16:13:54Z) - Neural Network Plasticity and Loss Sharpness [0.0]
Recent findings indicate that plasticity loss on new tasks is highly related to loss landscape sharpness in non-stationary RL frameworks.
We explore the usage of sharpness regularization techniques, which seek out smooth minima and have been touted for their generalization capabilities in vanilla prediction settings.
arXiv Detail & Related papers (2024-09-25T19:20:09Z) - Disentangling the Causes of Plasticity Loss in Neural Networks [55.23250269007988]
We show that loss of plasticity can be decomposed into multiple independent mechanisms.
We show that a combination of layer normalization and weight decay is highly effective at maintaining plasticity in a variety of synthetic nonstationary learning tasks (see the sketch after this list).
arXiv Detail & Related papers (2024-02-29T00:02:33Z) - Loss of Plasticity in Continual Deep Reinforcement Learning [14.475963928766134]
We demonstrate that deep RL agents lose their ability to learn good policies when they cycle through a sequence of Atari 2600 games.
We investigate this phenomenon closely at scale and analyze how the weights, gradients, and activations change over time.
Our analysis shows that the activation footprint of the network becomes sparser, contributing to the diminishing gradients.
arXiv Detail & Related papers (2023-03-13T22:37:15Z) - Understanding plasticity in neural networks [41.79540750236036]
Plasticity is the ability of a neural network to quickly change its predictions in response to new information.
Deep neural networks are known to lose plasticity over the course of training even in relatively simple learning problems.
arXiv Detail & Related papers (2023-03-02T18:47:51Z) - Critical Learning Periods for Multisensory Integration in Deep Networks [112.40005682521638]
We show that the ability of a neural network to integrate information from diverse sources hinges critically on being exposed to properly correlated signals during the early phases of training.
We show that critical periods arise from the complex and unstable early transient dynamics, which are decisive for the final performance of the trained system and its learned representations.
arXiv Detail & Related papers (2022-10-06T23:50:38Z) - Ecological Reinforcement Learning [76.9893572776141]
We study the kinds of environment properties that can make learning under such conditions easier.
Understanding how properties of the environment impact the performance of reinforcement learning agents can help us structure our tasks in ways that make learning tractable.
arXiv Detail & Related papers (2020-06-22T17:55:03Z)
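One of the related papers above reports that combining layer normalization with weight decay helps maintain plasticity. The following is a minimal sketch of that recipe, assuming a small PyTorch MLP; the architecture, layer sizes, and hyperparameters are illustrative assumptions, not the authors' configuration.

```python
# Hedged sketch of the "layer normalization + weight decay" recipe
# (architecture and hyperparameters are assumed, not taken from the paper).
import torch
import torch.nn as nn

class NormalizedMLP(nn.Module):
    def __init__(self, in_dim: int = 784, hidden: int = 256, out_dim: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.LayerNorm(hidden),   # layer normalization after each hidden layer
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.LayerNorm(hidden),
            nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = NormalizedMLP()
# Weight decay is applied through the optimizer; 1e-2 is an assumed value.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
```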