Learning to Continually Learn Rapidly from Few and Noisy Data
- URL: http://arxiv.org/abs/2103.04066v1
- Date: Sat, 6 Mar 2021 08:29:47 GMT
- Title: Learning to Continually Learn Rapidly from Few and Noisy Data
- Authors: Nicholas I-Hsien Kuo, Mehrtash Harandi, Nicolas Fourrier, Christian
Walder, Gabriela Ferraro, and Hanna Suominen
- Abstract summary: Continual learning could be achieved via replay -- by concurrently training on externally stored old data while learning a new task.
By employing a meta-learner, which learns a learning rate per parameter per past task, we found that base learners produced strong results when less memory was available.
- Score: 19.09933805011466
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural networks suffer from catastrophic forgetting and are unable to
sequentially learn new tasks without guaranteed stationarity in the data
distribution. Continual learning can be achieved via replay -- by concurrently
training on externally stored old data while learning a new task. However,
replay becomes less effective when each past task is allocated less memory. To
overcome this difficulty, we supplemented replay mechanics with meta-learning
for rapid knowledge acquisition. By employing a meta-learner that learns a
learning rate per parameter per past task, we found that base learners produced
strong results when less memory was available. Additionally, our approach
inherited several meta-learning advantages for continual learning: it
demonstrated strong robustness when continually learning in the presence of
noise, and it brought base learners to higher accuracy in fewer updates.
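To make the mechanics above concrete, here is a minimal sketch (not the authors' released code) of the core idea, assuming PyTorch: a base learner takes ordinary gradient steps on the current task, while replay on each stored past task uses a learned step size, one learning rate per parameter per past task, adapted by differentiating through a single replay update. The toy model, buffer sizes, initial rate, and one-step meta-objective are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

torch.manual_seed(0)

# Toy stand-in for the base learner.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
names = [n for n, _ in model.named_parameters()]
base_opt = torch.optim.SGD(model.parameters(), lr=1e-2)

# One learned learning rate per parameter per past task, stored as log-rates
# so the effective rate stays positive. The init log(1e-2) ~= -4.6 is an
# assumption, not a value from the paper.
n_past_tasks = 3
meta_lrs = [[nn.Parameter(torch.full_like(p, -4.6)) for p in model.parameters()]
            for _ in range(n_past_tasks)]
meta_opt = torch.optim.Adam([lr for task in meta_lrs for lr in task], lr=1e-3)

# Fake replay memory (one small stored batch per past task) and a fake
# current-task batch; real code would store actual past examples.
memory = [(torch.randn(8, 16), torch.randint(0, 4, (8,))) for _ in range(n_past_tasks)]
x_cur, y_cur = torch.randn(8, 16), torch.randint(0, 4, (8,))

for step in range(100):
    # (1) Ordinary gradient step on the current task.
    base_opt.zero_grad()
    F.cross_entropy(model(x_cur), y_cur).backward()
    base_opt.step()

    # (2) Replay each past task with its own per-parameter step sizes, and
    # adapt those step sizes by differentiating through one replay update
    # (a simplified one-step meta-objective; the paper's exact recipe may differ).
    meta_opt.zero_grad()
    for t, (xm, ym) in enumerate(memory):
        loss = F.cross_entropy(model(xm), ym)
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        updated = {n: p - lr.exp() * g
                   for n, p, lr, g in zip(names, model.parameters(), meta_lrs[t], grads)}
        # Post-update loss; its gradient flows into meta_lrs[t] through `updated`.
        F.cross_entropy(functional_call(model, updated, (xm,)), ym).backward()
        # Apply the replay update itself to the live weights.
        with torch.no_grad():
            for p, lr, g in zip(model.parameters(), meta_lrs[t], grads):
                p.sub_(lr.exp() * g)
    meta_opt.step()
```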
Related papers
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for
Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods proposes to replay data from experienced tasks when learning new ones.
However, storing raw data is often impractical in practice due to memory constraints or data-privacy concerns.
As a replacement, data-free replay methods have been proposed that invert samples from the classification model.
arXiv Detail & Related papers (2024-01-12T12:51:12Z)
- Continual Learning via Manifold Expansion Replay [36.27348867557826]
Catastrophic forgetting is a major challenge to continual learning.
We propose a novel replay strategy called Manifold Expansion Replay (MaER).
We show that the proposed method significantly improves accuracy in the continual learning setup, outperforming the state of the art.
arXiv Detail & Related papers (2023-10-12T05:09:27Z)
- Detachedly Learn a Classifier for Class-Incremental Learning [11.865788374587734]
We present an analysis showing that the failure of vanilla experience replay (ER) stems from unnecessary re-learning of previous tasks and an inability to distinguish the current task from previous ones.
We propose a novel replay strategy, task-aware experience replay.
Experimental results show our method outperforms current state-of-the-art methods.
arXiv Detail & Related papers (2023-02-23T01:35:44Z)
- Relational Experience Replay: Continual Learning by Adaptively Tuning Task-wise Relationship [54.73817402934303]
We propose Relational Experience Replay (RER), a bi-level learning framework that adaptively tunes task-wise relationships to achieve a better stability-plasticity trade-off.
RER consistently improves the performance of all baselines and surpasses current state-of-the-art methods.
arXiv Detail & Related papers (2021-12-31T12:05:22Z)
- Online Continual Learning with Natural Distribution Shifts: An Empirical Study with Visual Data [101.6195176510611]
"Online" continual learning enables evaluating both information retention and online learning efficacy.
In online continual learning, each incoming small batch of data is first used for testing and then added to the training set, making the problem truly online (see the protocol sketch after this list).
We introduce a new benchmark for online continual visual learning that exhibits large scale and natural distribution shifts.
arXiv Detail & Related papers (2021-08-20T06:17:20Z)
- Continual Learning via Bit-Level Information Preserving [88.32450740325005]
We study the continual learning process through the lens of information theory.
We propose Bit-Level Information Preserving (BLIP), which preserves the information gain on model parameters.
BLIP achieves close to zero forgetting while only requiring constant memory overheads throughout continual learning.
arXiv Detail & Related papers (2021-05-10T15:09:01Z)
- Meta-Learning with Sparse Experience Replay for Lifelong Language Learning [26.296412053816233]
We propose a novel approach to lifelong learning of language tasks based on meta-learning with sparse experience replay.
We show that under the realistic setting of performing a single pass on a stream of tasks, our method obtains state-of-the-art results on lifelong text classification and relation extraction.
arXiv Detail & Related papers (2020-09-10T14:36:38Z)
- Bilevel Continual Learning [76.50127663309604]
We present a novel continual learning framework named "Bilevel Continual Learning" (BCL).
Our experiments on continual learning benchmarks demonstrate the efficacy of the proposed BCL compared to many state-of-the-art methods.
arXiv Detail & Related papers (2020-07-30T16:00:23Z)
- Continual Learning: Tackling Catastrophic Forgetting in Deep Neural Networks with Replay Processes [0.0]
Continual algorithms are designed to accumulate and improve knowledge across a curriculum of learning experiences without forgetting.
Generative Replay consists of regenerating past learning experiences with a generative model in order to remember them (a minimal sketch appears after this list).
We show that replay processes are very promising methods for continual learning.
arXiv Detail & Related papers (2020-07-01T13:44:33Z)
- iTAML: An Incremental Task-Agnostic Meta-learning Approach [123.10294801296926]
Humans can continuously learn new knowledge as their experience grows.
In deep neural networks, by contrast, previously learned knowledge can quickly fade when the networks are trained on a new task.
We introduce a novel meta-learning approach that seeks to maintain an equilibrium between all encountered tasks.
arXiv Detail & Related papers (2020-03-25T21:42:48Z)
- Using Hindsight to Anchor Past Knowledge in Continual Learning [36.271906785418864]
In continual learning, the learner faces a stream of data whose distribution changes over time.
Modern neural networks are known to suffer under this setting, as they quickly forget previously acquired knowledge.
In this work, we propose an approach we call anchoring, where the learner uses bilevel optimization to update its knowledge of the current task while keeping its predictions on past tasks intact.
arXiv Detail & Related papers (2020-02-19T13:21:19Z)
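For the online continual learning entry above, here is a minimal sketch of the test-then-train protocol it describes, assuming PyTorch; the toy model and synthetic stream stand in for the benchmark's real, naturally shifting data.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

def stream(n_batches=50, batch_size=8):
    """Stand-in for a non-stationary data stream."""
    for _ in range(n_batches):
        yield torch.randn(batch_size, 16), torch.randint(0, 4, (batch_size,))

seen, correct = 0, 0
for x, y in stream():
    # (1) Test first: the batch is unseen, so this measures online accuracy.
    with torch.no_grad():
        correct += (model(x).argmax(dim=1) == y).sum().item()
        seen += len(y)
    # (2) Then train on the same batch, i.e. it joins the training set.
    opt.zero_grad()
    F.cross_entropy(model(x), y).backward()
    opt.step()

print(f"online accuracy: {correct / seen:.3f}")
```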
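Similarly, the generative replay entry can be sketched in a few lines: a frozen generator re-creates past-task inputs, a frozen copy of the previous classifier pseudo-labels them, and the learner trains on a mix of generated and current data. The generator, labels, and sizes below are illustrative assumptions, not the paper's implementation.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

classifier = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
opt = torch.optim.SGD(classifier.parameters(), lr=1e-2)

def generator(n):
    # Real code would use a VAE/GAN decoder trained on past tasks;
    # random noise here is only a placeholder.
    return torch.randn(n, 16)

# Frozen copy of the model after the previous task, used for pseudo-labeling
# (here untrained, for illustration only).
old_classifier = copy.deepcopy(classifier).eval()

for x_new, y_new in [(torch.randn(8, 16), torch.randint(0, 4, (8,))) for _ in range(50)]:
    # Regenerate past experiences and pseudo-label them with the old model.
    with torch.no_grad():
        x_gen = generator(8)
        y_gen = old_classifier(x_gen).argmax(dim=1)
    # Train on current data and replayed (generated) data together.
    opt.zero_grad()
    loss = (F.cross_entropy(classifier(x_new), y_new)
            + F.cross_entropy(classifier(x_gen), y_gen))
    loss.backward()
    opt.step()
```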