Domain Incremental Lifelong Learning in an Open World
- URL: http://arxiv.org/abs/2305.06555v1
- Date: Thu, 11 May 2023 04:19:08 GMT
- Title: Domain Incremental Lifelong Learning in an Open World
- Authors: Yi Dai, Hao Lang, Yinhe Zheng, Bowen Yu, Fei Huang, Yongbin Li
- Abstract summary: We propose Diana: a dynamic architecture-based lifelong learning model.
Four types of hierarchically organized prompts are used in Diana to capture knowledge from different granularities.
- Score: 45.704746275089555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lifelong learning (LL) is an important ability for NLP models to learn new
tasks continuously. Architecture-based approaches are reported to be effective
implementations for LL models. However, it is non-trivial to extend previous
approaches to domain incremental LL scenarios since they either require access
to task identities in the testing phase or cannot handle samples from unseen
tasks. In this paper, we propose Diana: a dynamic architecture-based
lifelong learning model that tries to learn a sequence
of tasks with a prompt-enhanced language model. Four types of hierarchically
organized prompts are used in Diana to capture knowledge from different
granularities. Specifically, we dedicate task-level prompts to capture
task-specific knowledge to retain high LL performances and maintain
instance-level prompts to learn knowledge shared across input samples to
improve the model's generalization performance. Moreover, we dedicate separate
prompts to explicitly model unseen tasks and introduce a set of prompt key
vectors to facilitate knowledge sharing between tasks. Extensive experiments
demonstrate that Diana outperforms state-of-the-art LL models, especially in
handling unseen tasks. We release the code and data at
https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/diana.
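The abstract describes the mechanism only at a high level: hierarchically organized prompts selected through learned key vectors, with task-level prompts holding task-specific knowledge, instance-level prompts holding knowledge shared across inputs, and a dedicated prompt for unseen tasks. Below is a minimal PyTorch sketch of one way such a pool could be organized; the class name, the threshold-based unseen-task fallback, and all sizes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


class HierarchicalPromptPool(torch.nn.Module):
    """Illustrative prompt pool with task-level and instance-level prompts,
    selected through learned key vectors. A dedicated prompt is reserved for
    inputs that match no known task. Names, sizes, and the threshold-based
    unseen-task fallback are assumptions, not the paper's implementation."""

    def __init__(self, num_tasks, pool_size, prompt_len, dim, top_k=2, unseen_threshold=0.3):
        super().__init__()
        # task-level prompts; the extra row (index num_tasks) models unseen tasks
        self.task_prompts = torch.nn.Parameter(torch.randn(num_tasks + 1, prompt_len, dim))
        self.task_keys = torch.nn.Parameter(torch.randn(num_tasks, dim))
        # instance-level prompts shared across tasks
        self.inst_prompts = torch.nn.Parameter(torch.randn(pool_size, prompt_len, dim))
        self.inst_keys = torch.nn.Parameter(torch.randn(pool_size, dim))
        self.top_k = top_k
        self.unseen_threshold = unseen_threshold

    def forward(self, query):
        """query: (batch, dim) embedding of the input, e.g. from a frozen encoder.
        Returns prompt vectors to prepend to the language model's input."""
        q = F.normalize(query, dim=-1)

        # task level: pick the closest task key, or fall back to the
        # unseen-task prompt when no key is similar enough to the query
        task_sim = q @ F.normalize(self.task_keys, dim=-1).t()           # (batch, num_tasks)
        best_sim, best_task = task_sim.max(dim=-1)
        unseen_idx = self.task_prompts.size(0) - 1
        best_task = torch.where(best_sim < self.unseen_threshold,
                                torch.full_like(best_task, unseen_idx), best_task)
        task_prompt = self.task_prompts[best_task]                       # (batch, prompt_len, dim)

        # instance level: top-k prompts from the shared pool, so similar
        # inputs reuse prompts and knowledge is shared across tasks
        inst_sim = q @ F.normalize(self.inst_keys, dim=-1).t()           # (batch, pool_size)
        idx = inst_sim.topk(self.top_k, dim=-1).indices                  # (batch, top_k)
        inst_prompt = self.inst_prompts[idx].flatten(1, 2)               # (batch, top_k*prompt_len, dim)

        return torch.cat([task_prompt, inst_prompt], dim=1)
```

During training, the key vectors would additionally be pulled toward the queries that select them so that selection can work without task identities at test time; the sketch after the L2P entry below shows that loss term.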
Related papers
- PECTP: Parameter-Efficient Cross-Task Prompts for Incremental Vision Transformer [76.39111896665585]
Incremental Learning (IL) aims to learn deep models on sequential tasks continually.
Recent large pre-trained models (PTMs) have achieved outstanding performance via prompt techniques in practical IL without access to old samples.
arXiv Detail & Related papers (2024-07-04T10:37:58Z)
- PL-FSCIL: Harnessing the Power of Prompts for Few-Shot Class-Incremental Learning [9.247718160705512]
Few-Shot Class-Incremental Learning (FSCIL) aims to enable deep neural networks to learn new tasks incrementally from a small number of labeled samples.
We propose a novel approach called Prompt Learning for FSCIL (PL-FSCIL).
PL-FSCIL harnesses the power of prompts in conjunction with a pre-trained Vision Transformer (ViT) model to address the challenges of FSCIL effectively.
arXiv Detail & Related papers (2024-01-26T12:11:04Z)
- Disentangled Latent Spaces Facilitate Data-Driven Auxiliary Learning [15.41342100228504]
In deep learning, auxiliary objectives are often used to facilitate learning in situations where data is scarce.
We propose a novel framework, dubbed Detaux, whereby a weakly supervised disentanglement procedure is used to discover new unrelated classification tasks.
arXiv Detail & Related papers (2023-10-13T17:40:39Z)
- Introducing Language Guidance in Prompt-based Continual Learning [95.03110230754423]
We propose Language Guidance for Prompt-based Continual Learning (LGCL) as a plug-in for prompt-based methods.
LGCL consistently improves the performance of prompt-based continual learning methods to set a new state-of-the-art.
arXiv Detail & Related papers (2023-08-30T08:03:49Z)
- Prompt Conditioned VAE: Enhancing Generative Replay for Lifelong Learning in Task-Oriented Dialogue [80.05509768165135]
Generative replay methods are widely employed to consolidate past knowledge with generated pseudo samples.
Most existing generative replay methods use only a single task-specific token to control their models.
We propose a novel method, prompt conditioned VAE for lifelong learning, to enhance generative replay by incorporating tasks' statistics.
arXiv Detail & Related papers (2022-10-14T13:12:14Z)
- Continuous QA Learning with Structured Prompts [20.246786740364133]
Diana is a dynamic architecture-based lifelong QA model that tries to learn a sequence of QA tasks.
Four types of hierarchically organized prompts are used in Diana to capture QA knowledge from different granularities.
In experiments, Diana outperforms state-of-the-art lifelong QA models, especially in handling unseen tasks.
arXiv Detail & Related papers (2022-08-31T02:38:16Z)
- Interactive and Visual Prompt Engineering for Ad-hoc Task Adaptation with Large Language Models [116.25562358482962]
State-of-the-art neural language models can be used to solve ad-hoc language tasks without the need for supervised training.
PromptIDE allows users to experiment with prompt variations, visualize prompt performance, and iteratively optimize prompts.
arXiv Detail & Related papers (2022-08-16T17:17:53Z)
- Instance-wise Prompt Tuning for Pretrained Language Models [72.74916121511662]
Instance-wise Prompt Tuning (IPT) is the first prompt learning paradigm that injects knowledge from the input data instances to the prompts.
IPT significantly outperforms task-based prompt learning methods, and achieves comparable performance to conventional finetuning with only 0.5% - 1.5% of tuned parameters.
arXiv Detail & Related papers (2022-06-04T10:08:50Z)
- Learning to Prompt for Continual Learning [34.609384246149325]
This work presents a new paradigm for continual learning that aims to train a more succinct memory system without accessing task identity at test time.
Our method learns to dynamically prompt (L2P) a pre-trained model to learn tasks sequentially under different task transitions.
The objective is to optimize prompts to instruct the model prediction and explicitly manage task-invariant and task-specific knowledge while maintaining model plasticity.
arXiv Detail & Related papers (2021-12-16T06:17:07Z)
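The L2P entry above is the simplest form of this query-key prompting pattern: a single shared prompt pool matched against a frozen encoder's embedding of the input, with no task identity at test time. Below is a hedged sketch of one training step under those assumptions; the function name, the HuggingFace-style `inputs_embeds`/`last_hidden_state` interface, and the loss weighting are illustrative choices rather than the L2P authors' code.

```python
import torch
import torch.nn.functional as F


def l2p_style_step(encoder, classifier, prompt_pool, prompt_keys,
                   input_embeds, query, labels, top_k=5, pull_weight=0.5):
    """One training step of L2P-style prompting, as a generic sketch.
    `encoder` is a frozen pre-trained transformer called with `inputs_embeds`
    (HuggingFace-style interface assumed); only the pool, its keys, and the
    classifier are trained. Names and the loss weighting are illustrative."""
    # match the query (a frozen embedding of the input) against the prompt keys
    q = F.normalize(query, dim=-1)
    sim = q @ F.normalize(prompt_keys, dim=-1).t()                 # (batch, pool_size)
    idx = sim.topk(top_k, dim=-1).indices                          # (batch, top_k)
    prompts = prompt_pool[idx].flatten(1, 2)                       # (batch, top_k*len, dim)

    # prepend the selected prompts to the token embeddings of the frozen model
    hidden = encoder(inputs_embeds=torch.cat([prompts, input_embeds], dim=1)).last_hidden_state
    logits = classifier(hidden[:, 0])                              # predict from the first position

    # task loss plus a pull term drawing selected keys toward their queries,
    # which is what makes key matching work without task identity at test time
    pull = (1.0 - torch.gather(sim, 1, idx)).mean()
    return F.cross_entropy(logits, labels) + pull_weight * pull
```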
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.