Anti-Retroactive Interference for Lifelong Learning
- URL: http://arxiv.org/abs/2208.12967v1
- Date: Sat, 27 Aug 2022 09:27:36 GMT
- Title: Anti-Retroactive Interference for Lifelong Learning
- Authors: Runqi Wang, Yuxiang Bao, Baochang Zhang, Jianzhuang Liu, Wentao Zhu
and Guodong Guo
- Abstract summary: We design a paradigm for lifelong learning based on meta-learning and the associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
A theoretical analysis shows that the proposed learning paradigm makes the models of different tasks converge to the same optimum.
- Score: 65.50683752919089
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans can continuously learn new knowledge, but machine learning models
suffer from a drastic drop in performance on previous tasks after learning new
tasks. Cognitive science points out that competition between similar pieces of
knowledge is an important cause of forgetting. In this paper, we design a
paradigm for lifelong learning based on meta-learning and the associative
mechanism of the brain. It tackles the problem from two aspects: extracting
knowledge and memorizing knowledge. First, we disrupt the sample's background
distribution through a background attack, which forces the model to extract the
key features of each task. Second, according to the similarity between
incremental knowledge and base knowledge, we design an adaptive fusion of
incremental knowledge, which helps the model allocate capacity to knowledge of
different difficulty. We theoretically show that the proposed learning paradigm
makes the models of different tasks converge to the same optimum. The proposed
method is validated on the MNIST, CIFAR100, CUB200 and ImageNet100 datasets.
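The two components described in the abstract can be made concrete with a short sketch. The code below is an illustrative PyTorch reconstruction under stated assumptions, not the authors' released code: the background attack is approximated by adding noise only to background pixels given a foreground mask, and the adaptive fusion weights incremental features by their cosine dissimilarity to base features. The function names, the masking scheme, and the weighting rule are all assumptions.

```python
# Illustrative sketch (assumed names and logic, not the authors' code) of the
# two mechanisms in the abstract: a background attack that perturbs only
# background pixels so the model must rely on key foreground features, and an
# adaptive fusion that gives more weight to incremental knowledge the less it
# resembles the base knowledge.
import torch
import torch.nn.functional as F


def background_attack(images: torch.Tensor,
                      foreground_masks: torch.Tensor,
                      noise_std: float = 0.3) -> torch.Tensor:
    """Add Gaussian noise to background pixels (mask == 0), keeping the foreground intact."""
    noise = noise_std * torch.randn_like(images)
    return images + noise * (1.0 - foreground_masks)


def adaptive_fusion(base_feat: torch.Tensor,
                    incremental_feat: torch.Tensor) -> torch.Tensor:
    """Fuse incremental features into base features with a similarity-dependent weight.

    The more the incremental knowledge differs from the base knowledge, the
    larger the fusion weight it receives, so harder (more novel) knowledge is
    allocated more capacity.
    """
    sim = F.cosine_similarity(base_feat, incremental_feat, dim=-1)
    alpha = ((1.0 - sim) / 2.0).clamp(0.0, 1.0).unsqueeze(-1)  # weight in [0, 1]
    return (1.0 - alpha) * base_feat + alpha * incremental_feat


if __name__ == "__main__":
    images = torch.randn(4, 3, 32, 32)
    masks = (torch.rand(4, 1, 32, 32) > 0.5).float()  # stand-in foreground masks
    attacked = background_attack(images, masks)

    base = torch.randn(4, 128)
    incremental = torch.randn(4, 128)
    fused = adaptive_fusion(base, incremental)
    print(attacked.shape, fused.shape)
```

In the paper the fusion presumably operates on task knowledge learned across incremental stages; the feature-vector version here is only meant to show the similarity-dependent weighting.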
Related papers
- Multi-Stage Knowledge Integration of Vision-Language Models for Continual Learning [79.46570165281084]
We propose a Multi-Stage Knowledge Integration network (MulKI) to emulate the human learning process in distillation methods.
MulKI achieves this through four stages, including Eliciting Ideas, Adding New Ideas, Distinguishing Ideas, and Making Connections.
Our method demonstrates significant improvements in maintaining zero-shot capabilities while supporting continual learning across diverse downstream tasks.
arXiv Detail & Related papers (2024-11-11T07:36:19Z) - Gradual Learning: Optimizing Fine-Tuning with Partially Mastered Knowledge in Large Language Models [51.20499954955646]
Large language models (LLMs) acquire vast amounts of knowledge from extensive text corpora during the pretraining phase.
In later stages such as fine-tuning and inference, the model may encounter knowledge not covered in the initial training.
We propose a two-stage fine-tuning strategy to improve the model's overall test accuracy and knowledge retention.
arXiv Detail & Related papers (2024-10-08T08:35:16Z) - Recall-Oriented Continual Learning with Generative Adversarial
Meta-Model [5.710971447109951]
We propose a recall-oriented continual learning framework to address the stability-plasticity dilemma.
Inspired by the human brain's ability to separate the mechanisms responsible for stability and plasticity, our framework consists of a two-level architecture.
We show that our framework not only effectively learns new knowledge without any disruption but also achieves high stability of previous knowledge.
arXiv Detail & Related papers (2024-03-05T16:08:59Z) - The Effect of Masking Strategies on Knowledge Retention by Language
Models [9.130890741447422]
This paper aims to understand the effect of pre-training tasks on the amount of knowledge captured and forgotten by language models.
We test the model's knowledge retention by measuring its ability to answer factual questions.
Our findings demonstrate that, like the ability to perform a task, the knowledge acquired from being trained on that task is forgotten when a model is trained to perform another task.
arXiv Detail & Related papers (2023-06-12T15:35:23Z) - Online Continual Learning via the Knowledge Invariant and Spread-out
Properties [4.109784267309124]
A key challenge in continual learning is catastrophic forgetting.
We propose a new method, named Online Continual Learning via the Knowledge Invariant and Spread-out Properties (OCLKISP).
We empirically evaluate our proposed method on four popular benchmarks for continual learning: Split CIFAR 100, Split SVHN, Split CUB200 and Split Tiny-Image-Net.
arXiv Detail & Related papers (2023-02-02T04:03:38Z) - Adaptively Integrated Knowledge Distillation and Prediction Uncertainty
for Continual Learning [71.43841235954453]
Current deep learning models often suffer from catastrophic forgetting of old knowledge when continually learning new knowledge.
Existing strategies to alleviate this issue often fix the trade-off between keeping old knowledge (stability) and learning new knowledge (plasticity); a generic sketch of this trade-off appears after this list.
arXiv Detail & Related papers (2023-01-18T05:36:06Z) - Hierarchically Structured Task-Agnostic Continual Learning [0.0]
We take a task-agnostic view of continual learning and develop a hierarchical information-theoretic optimality principle.
We propose a neural network layer, called the Mixture-of-Variational-Experts layer, that alleviates forgetting by creating a set of information processing paths.
Our approach can operate in a task-agnostic way, i.e., it does not require task-specific knowledge, unlike many existing continual learning algorithms.
arXiv Detail & Related papers (2022-11-14T19:53:15Z) - Towards a Universal Continuous Knowledge Base [49.95342223987143]
We propose a method for building a continuous knowledge base that can store knowledge imported from multiple neural networks.
We import the knowledge from multiple models to the knowledge base, from which the fused knowledge is exported back to a single model.
Experiments on text classification show promising results.
arXiv Detail & Related papers (2020-12-25T12:27:44Z) - Automated Relational Meta-learning [95.02216511235191]
We propose an automated relational meta-learning (ARML) framework that automatically extracts the cross-task relations and constructs the meta-knowledge graph.
We conduct extensive experiments on 2D toy regression and few-shot image classification and the results demonstrate the superiority of ARML over state-of-the-art baselines.
arXiv Detail & Related papers (2020-01-03T07:02:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.