CLASSIC: Continual and Contrastive Learning of Aspect Sentiment Classification Tasks
- URL: http://arxiv.org/abs/2112.02714v1
- Date: Sun, 5 Dec 2021 23:55:53 GMT
- Title: CLASSIC: Continual and Contrastive Learning of Aspect Sentiment Classification Tasks
- Authors: Zixuan Ke, Bing Liu, Hu Xu, Lei Shu
- Abstract summary: This paper studies continual learning of a sequence of aspect sentiment classification (ASC) tasks in a particular CL setting called domain incremental learning (DIL).
The DIL setting is particularly suited to ASC because at test time the system need not know the task/domain to which the test data belongs.
The key novelty is a contrastive continual learning method that enables both knowledge transfer across tasks and knowledge distillation from old tasks to the new task.
- Score: 23.515930312505954
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: This paper studies continual learning (CL) of a sequence of aspect sentiment
classification (ASC) tasks in a particular CL setting called domain incremental
learning (DIL). Each task is from a different domain or product. The DIL
setting is particularly suited to ASC because at test time the system need not
know the task/domain to which the test data belongs. To our knowledge, this
setting has not been studied before for ASC. This paper proposes a novel model
setting has not been studied before for ASC. This paper proposes a novel model
called CLASSIC. The key novelty is a contrastive continual learning method that
enables both knowledge transfer across tasks and knowledge distillation from
old tasks to the new task, which eliminates the need for task ids in testing.
Experimental results show the high effectiveness of CLASSIC.
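To make the key idea concrete, below is a minimal, hypothetical PyTorch sketch of a contrastive continual-learning objective in the spirit of the abstract: a supervised contrastive term encourages knowledge transfer across tasks, and a distillation term preserves knowledge from a frozen copy of the encoder trained on earlier tasks. Every name and hyperparameter here (encoder, old_encoder, temperature, alpha) is an illustrative assumption, not the paper's actual implementation.

```python
# Hypothetical sketch of a contrastive continual-learning objective;
# illustrative only, not CLASSIC's actual code.
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Pull same-label examples together and push different labels apart."""
    features = F.normalize(features, dim=1)
    sim = features @ features.T / temperature           # pairwise cosine similarities
    mask = labels.unsqueeze(0) == labels.unsqueeze(1)   # positives share a label
    mask.fill_diagonal_(False)                          # never contrast a sample with itself
    logits = sim - 1e9 * torch.eye(sim.size(0), device=sim.device)
    log_prob = F.log_softmax(logits, dim=1)
    pos_per_row = mask.sum(1).clamp(min=1)              # avoid division by zero
    return -(log_prob * mask).sum(1).div(pos_per_row).mean()

def feature_distillation_loss(new_feats, old_feats, temperature=2.0):
    """Match the new encoder's softened features to the frozen old encoder's."""
    p_old = F.softmax(old_feats / temperature, dim=1)
    log_p_new = F.log_softmax(new_feats / temperature, dim=1)
    return F.kl_div(log_p_new, p_old, reduction="batchmean") * temperature ** 2

def contrastive_cl_loss(encoder, old_encoder, x, y, alpha=0.5):
    """Combined objective: cross-task transfer plus old-task preservation."""
    feats = encoder(x)
    with torch.no_grad():                               # previous-task encoder stays frozen
        old_feats = old_encoder(x)
    return (supervised_contrastive_loss(feats, y)
            + alpha * feature_distillation_loss(feats, old_feats))
```

Because a single shared encoder serves all domains, nothing at inference time asks which task an input came from, which is what the DIL setting requires.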
Related papers
- Class Incremental Learning with Task-Specific Batch Normalization and Out-of-Distribution Detection [25.224930928724326]
This study focuses on incremental learning for image classification, exploring how to reduce catastrophic forgetting of all learned knowledge when access to old data is restricted due to memory or privacy constraints.
The challenge of incremental learning lies in achieving an optimal balance between plasticity, the ability to learn new knowledge, and stability, the ability to retain old knowledge.
arXiv Detail & Related papers (2024-11-01T07:54:29Z) - Versatile Incremental Learning: Towards Class and Domain-Agnostic Incremental Learning [16.318126586825734]
Incremental Learning (IL) aims to accumulate knowledge from sequential input tasks.
We consider a more challenging and realistic but under-explored IL scenario, named Versatile Incremental Learning (VIL).
We propose a simple yet effective IL framework, named Incremental with Shift cONtrol (ICON).
arXiv Detail & Related papers (2024-09-17T07:44:28Z) - Multi-Label Continual Learning for the Medical Domain: A Novel Benchmark [47.52603262576663]
We propose a novel benchmark combining the challenges of new class arrivals and domain shifts in a single framework.
This benchmark aims to model a realistic CL setting for the multi-label classification problem in medical imaging.
arXiv Detail & Related papers (2024-04-10T09:35:36Z) - Few-Shot Class Incremental Learning with Attention-Aware Self-Adaptive Prompt [58.880105981772324]
We propose a novel framework named Attention-aware Self-adaptive Prompt (ASP).
ASP encourages task-invariant prompts to capture shared knowledge by reducing task-specific information from the attention aspect.
In summary, ASP prevents overfitting on the base task and does not require enormous data in few-shot incremental tasks.
arXiv Detail & Related papers (2024-03-14T20:34:53Z) - LIBERO: Benchmarking Knowledge Transfer for Lifelong Robot Learning [64.55001982176226]
LIBERO is a novel benchmark of lifelong learning for robot manipulation.
We focus on how to efficiently transfer declarative knowledge, procedural knowledge, or the mixture of both.
We develop an extendible procedural generation pipeline that can in principle generate infinitely many tasks.
arXiv Detail & Related papers (2023-06-05T23:32:26Z) - Resolving Task Confusion in Dynamic Expansion Architectures for Class
Incremental Learning [27.872317837451977]
Task Correlated Incremental Learning (TCIL) is proposed to encourage discriminative and fair feature utilization across tasks.
TCIL performs multi-level knowledge distillation to propagate knowledge learned from old tasks to the new one (a generic sketch of this mechanism appears after this list).
The results demonstrate that TCIL consistently achieves state-of-the-art accuracy.
arXiv Detail & Related papers (2022-12-29T12:26:44Z) - Task-Adaptive Saliency Guidance for Exemplar-free Class Incremental Learning [60.501201259732625]
We introduce task-adaptive saliency for EFCIL and propose a new framework, which we call Task-Adaptive Saliency Supervision (TASS).
Our experiments demonstrate that our method can better preserve saliency maps across tasks and achieve state-of-the-art results on the CIFAR-100, Tiny-ImageNet, and ImageNet-Subset EFCIL benchmarks.
arXiv Detail & Related papers (2022-12-16T02:43:52Z) - Continual Object Detection via Prototypical Task Correlation Guided
Gating Mechanism [120.1998866178014]
We present a flexible framework for continual object detection via pRotOtypical taSk corrElaTion guided gaTing mechAnism (ROSETTA).
Concretely, a unified framework is shared by all tasks while task-aware gates are introduced to automatically select sub-models for specific tasks (see the gating sketch after this list).
Experiments on COCO-VOC, KITTI-Kitchen, class-incremental detection on VOC and sequential learning of four tasks show that ROSETTA yields state-of-the-art performance.
arXiv Detail & Related papers (2022-05-06T07:31:28Z) - Few-Shot Class-Incremental Learning by Sampling Multi-Phase Tasks [59.12108527904171]
A model should recognize new classes and maintain discriminability over old classes.
The task of recognizing few-shot new classes without forgetting old classes is called few-shot class-incremental learning (FSCIL).
We propose a new paradigm for FSCIL based on meta-learning by LearnIng Multi-phase Incremental Tasks (LIMIT).
arXiv Detail & Related papers (2022-03-31T13:46:41Z) - Continual Learning with Knowledge Transfer for Sentiment Classification [20.5365406439092]
KAN can markedly improve the accuracy of both the new task and the old tasks via forward and backward knowledge transfer.
The effectiveness of KAN is demonstrated through extensive experiments.
arXiv Detail & Related papers (2021-12-18T22:58:21Z) - Adapting BERT for Continual Learning of a Sequence of Aspect Sentiment
Classification Tasks [22.28374603976649]
This paper studies continual learning of a sequence of aspect sentiment classification (ASC) tasks.
A CL system that incrementally learns a sequence of ASC tasks should address the following two issues.
A novel capsule-network-based model called B-CL is proposed to address these issues.
arXiv Detail & Related papers (2021-12-06T02:46:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.