Elephant Neural Networks: Born to Be a Continual Learner
- URL: http://arxiv.org/abs/2310.01365v1
- Date: Mon, 2 Oct 2023 17:27:39 GMT
- Title: Elephant Neural Networks: Born to Be a Continual Learner
- Authors: Qingfeng Lan, A. Rupam Mahmood
- Abstract summary: Catastrophic forgetting has remained a significant challenge to continual learning for decades.
We study the role of activation functions in the training dynamics of neural networks and their impact on catastrophic forgetting.
We show that by simply replacing classical activation functions with elephant activation functions, we can significantly improve the resilience of neural networks to catastrophic forgetting.
- Score: 7.210328077827388
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Catastrophic forgetting has remained a significant challenge to
continual learning for decades. While recent works have proposed effective
methods to mitigate
this problem, they mainly focus on the algorithmic side. Meanwhile, we do not
fully understand what architectural properties of neural networks lead to
catastrophic forgetting. This study aims to fill this gap by studying the role
of activation functions in the training dynamics of neural networks and their
impact on catastrophic forgetting. Our study reveals that, besides sparse
representations, the gradient sparsity of activation functions also plays an
important role in reducing forgetting. Based on this insight, we propose a new
class of activation functions, elephant activation functions, that can generate
both sparse representations and sparse gradients. We show that by simply
replacing classical activation functions with elephant activation functions, we
can significantly improve the resilience of neural networks to catastrophic
forgetting. Our method has broad applicability and benefits for continual
learning in regression, class incremental learning, and reinforcement learning
tasks. Specifically, we achieve excellent performance on the Split MNIST
dataset in a single pass, without using a replay buffer, task boundary
information, or pre-training.
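The abstract does not spell out the functional form of the elephant activation, so the sketch below only illustrates the property it describes: a bump-shaped unit whose output and gradient both vanish away from its centre, so that updates driven by new inputs barely touch the weights serving old ones. The form 1/(1 + |x/a|^d), the hyperparameter names a (width) and d (sharpness), and the PyTorch framing are illustrative assumptions, not the paper's exact definition.
```python
# Hedged sketch of an "elephant-style" activation: a bump function whose
# output AND gradient are both near zero away from the bump, unlike ReLU or
# tanh. The exact form and hyperparameters used in the paper may differ; this
# only illustrates the sparse-output / sparse-gradient property the abstract
# describes.
import torch

torch.set_printoptions(precision=3, sci_mode=False)

def elephant_like(x: torch.Tensor, a: float = 1.0, d: float = 4.0) -> torch.Tensor:
    """Bump function: ~1 for |x| << a, ~0 for |x| >> a (assumed form)."""
    return 1.0 / (1.0 + (x.abs() / a) ** d)

x = torch.linspace(-4.0, 4.0, steps=9, requires_grad=True)
y = elephant_like(x)
y.sum().backward()

print("inputs   :", x.detach())
print("outputs  :", y.detach())  # sparse: near zero except around 0
print("gradients:", x.grad)      # also sparse: near zero away from 0
# A unit only produces and back-propagates signal for inputs near its centre,
# so learning on new data barely disturbs weights serving other parts of the
# input space -- the mechanism the paper links to reduced forgetting.
```
Inputs far from zero yield both near-zero outputs and near-zero gradients, which is the combination of sparse representations and sparse gradients the paper identifies as reducing catastrophic forgetting.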
Related papers
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Evaluating CNN with Oscillatory Activation Function [0.0]
The non-linearity introduced by the activation function is what gives CNNs the capability to learn high-dimensional, complex features from images.
This paper explores the performance of the AlexNet CNN architecture on the MNIST and CIFAR10 datasets using an oscillatory activation function (GCU) and other commonly used activation functions such as ReLU, PReLU, and Mish.
arXiv Detail & Related papers (2022-11-13T11:17:13Z)
- Energy-based Latent Aligner for Incremental Learning [83.0135278697976]
Deep learning models tend to forget their earlier knowledge while incrementally learning new tasks.
This behavior emerges because the parameter updates optimized for the new tasks may not align well with the updates suitable for older tasks.
We propose ELI: Energy-based Latent Aligner for Incremental Learning.
arXiv Detail & Related papers (2022-03-28T17:57:25Z)
- Sparsity and Heterogeneous Dropout for Continual Learning in the Null Space of Neural Activations [36.24028295650668]
Continual/lifelong learning from a non-stationary input data stream is a cornerstone of intelligence.
Deep neural networks are prone to forgetting previously learned information upon learning new information.
Overcoming catastrophic forgetting in deep neural networks has become an active field of research in recent years.
arXiv Detail & Related papers (2022-03-12T21:12:41Z)
- Learning Bayesian Sparse Networks with Full Experience Replay for Continual Learning [54.7584721943286]
Continual Learning (CL) methods aim to enable machine learning models to learn new tasks without catastrophic forgetting of those that have been previously mastered.
Existing CL approaches often keep a buffer of previously-seen samples, perform knowledge distillation, or use regularization techniques towards this goal.
We propose to only activate and select sparse neurons for learning current and past tasks at any stage.
arXiv Detail & Related papers (2022-02-21T13:25:03Z)
- Advantages of biologically-inspired adaptive neural activation in RNNs during learning [10.357949759642816]
We introduce a novel parametric family of nonlinear activation functions inspired by input-frequency response curves of biological neurons.
We find that activation adaptation provides distinct task-specific solutions and in some cases, improves both learning speed and performance.
arXiv Detail & Related papers (2020-06-22T13:49:52Z)
- Untangling tradeoffs between recurrence and self-attention in neural networks [81.30894993852813]
We present a formal analysis of how self-attention affects gradient propagation in recurrent networks.
We prove that it mitigates the problem of vanishing gradients when trying to capture long-term dependencies.
We propose a relevancy screening mechanism that allows for a scalable use of sparse self-attention with recurrence.
arXiv Detail & Related papers (2020-06-16T19:24:25Z)
- Understanding the Role of Training Regimes in Continual Learning [51.32945003239048]
Catastrophic forgetting affects the training of neural networks, limiting their ability to learn multiple tasks sequentially.
We study the effect of dropout, learning rate decay, and batch size, on forming training regimes that widen the tasks' local minima.
arXiv Detail & Related papers (2020-06-12T06:00:27Z)
- A survey on modern trainable activation functions [0.0]
We propose a taxonomy of trainable activation functions and highlight common and distinctive properties of recent and past models.
We show that many of the proposed approaches are equivalent to adding neuron layers which use fixed (non-trainable) activation functions.
arXiv Detail & Related papers (2020-05-02T12:38:43Z)
- Towards Efficient Processing and Learning with Spikes: New Approaches for Multi-Spike Learning [59.249322621035056]
We propose two new multi-spike learning rules which demonstrate better performance over other baselines on various tasks.
In the feature detection task, we re-examine the ability of unsupervised STDP and present its limitations.
Our proposed learning rules can reliably solve the task over a wide range of conditions without specific constraints being applied.
arXiv Detail & Related papers (2020-05-02T06:41:20Z)