Generalized Few-Shot Continual Learning with Contrastive Mixture of Adapters
- URL: http://arxiv.org/abs/2302.05936v1
- Date: Sun, 12 Feb 2023 15:18:14 GMT
- Title: Generalized Few-Shot Continual Learning with Contrastive Mixture of Adapters
- Authors: Yawen Cui, Zitong Yu, Rizhao Cai, Xun Wang, Alex C. Kot, Li Liu
- Abstract summary: We set up a Generalized FSCL (GFSCL) protocol involving both class- and domain-incremental situations.
We find that common continual learning methods have poor generalization ability on unseen domains.
We therefore propose a rehearsal-free framework based on the Vision Transformer (ViT), named Contrastive Mixture of Adapters (CMoA).
- Score: 59.82088750033897
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of Few-Shot Continual Learning (FSCL) is to incrementally learn
novel tasks from limited labeled samples while simultaneously preserving previously
acquired capabilities; however, current FSCL methods all target the class-incremental
setting. Moreover, FSCL solutions are evaluated only by the cumulative performance
over all encountered tasks, and no existing work examines their domain generalization
ability. Domain generalization is a challenging yet
practical task that aims to generalize beyond training domains. In this paper,
we set up a Generalized FSCL (GFSCL) protocol involving both class- and
domain-incremental situations together with the domain generalization
assessment. First, two benchmark datasets and protocols are newly arranged,
and detailed baselines are provided for this previously unexplored configuration. We find
that common continual learning methods generalize poorly to unseen domains and
cannot adequately cope with the catastrophic forgetting issue in
cross-incremental tasks. We therefore further propose a rehearsal-free
framework based on Vision Transformer (ViT) named Contrastive Mixture of
Adapters (CMoA). Because class increments and domain increments have different
optimization targets, CMoA contains two parts: (1) For the class-incremental
issue, a Mixture of Adapters (MoA) module is incorporated into the ViT, and
cosine similarity regularization together with dynamic weighting is designed so that
each adapter learns specific knowledge and concentrates on particular classes
(a minimal sketch of this idea is given after the abstract).
(2) For the domain-related issues and domain-invariant representation learning,
we alleviate intra-class variation through prototype-calibrated contrastive
learning (also sketched below). The code and protocols are available at
https://github.com/yawencui/CMoA.
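The abstract only names the MoA ingredients, so the following is a minimal, hypothetical PyTorch sketch of how a mixture of bottleneck adapters with a linear softmax router (dynamic weighting) and a pairwise cosine-similarity penalty between adapters could be attached to a frozen ViT block. The class names, bottleneck size, router design, and exact form of the regularizer are assumptions for illustration, not the authors' released implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Adapter(nn.Module):
    """Bottleneck adapter: down-project, GELU, up-project (residual added by the caller)."""

    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):
        return self.up(F.gelu(self.down(x)))


class MixtureOfAdapters(nn.Module):
    """Hypothetical MoA layer for a (frozen) ViT block: a linear router produces
    per-sample weights over the adapters (dynamic weighting), and a cosine-similarity
    penalty between adapter weights encourages each adapter to specialize."""

    def __init__(self, dim: int, num_adapters: int = 4, bottleneck: int = 64):
        super().__init__()
        self.adapters = nn.ModuleList(Adapter(dim, bottleneck) for _ in range(num_adapters))
        self.router = nn.Linear(dim, num_adapters)

    def forward(self, x):  # x: (batch, tokens, dim) token sequence from a ViT block
        weights = F.softmax(self.router(x.mean(dim=1)), dim=-1)               # (batch, A)
        outs = torch.stack([adapter(x) for adapter in self.adapters], dim=1)  # (batch, A, tokens, dim)
        mixed = (weights[:, :, None, None] * outs).sum(dim=1)                 # weighted sum of adapters
        return x + mixed                                                      # residual around the mixture

    def cosine_diversity_loss(self):
        """Mean absolute pairwise cosine similarity of the adapters' down-projection weights."""
        flat = torch.stack([F.normalize(a.down.weight.flatten(), dim=0) for a in self.adapters])
        sim = flat @ flat.t()                       # (A, A) cosine similarities
        n = len(self.adapters)
        off_diag = sim - torch.eye(n, device=sim.device)
        return off_diag.abs().sum() / (n * (n - 1))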
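Likewise, a hedged sketch of the prototype-calibrated contrastive objective, under the assumption that class prototypes are momentum-updated class-mean features and that each sample is pulled toward its own prototype and pushed away from the others; the function names, momentum value, and temperature are illustrative only.

```python
import torch
import torch.nn.functional as F


def prototype_calibrated_contrastive_loss(features, labels, prototypes, temperature=0.1):
    """Pull each L2-normalized feature toward its class prototype and away from the
    other prototypes, reducing intra-class variation across domains.

    features:   (batch, dim) embeddings from the ViT backbone
    labels:     (batch,) integer class ids
    prototypes: (num_classes, dim) class prototypes used as calibrated anchors
    """
    feats = F.normalize(features, dim=-1)
    protos = F.normalize(prototypes, dim=-1)
    logits = feats @ protos.t() / temperature      # (batch, num_classes) scaled similarities
    return F.cross_entropy(logits, labels)


@torch.no_grad()
def update_prototypes(prototypes, features, labels, momentum=0.9):
    """One possible calibration rule: momentum-update each class prototype with the
    mean feature of that class in the current batch."""
    for c in labels.unique():
        mask = labels == c
        prototypes[c] = momentum * prototypes[c] + (1 - momentum) * features[mask].mean(dim=0)
    return prototypes
```

In a training loop, the total objective would then combine the classification loss, the cosine diversity penalty from the MoA module, and this prototype-calibrated term, with weighting factors that the abstract does not specify.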
Related papers
- Versatile Incremental Learning: Towards Class and Domain-Agnostic Incremental Learning [16.318126586825734]
Incremental Learning (IL) aims to accumulate knowledge from sequential input tasks.
We consider a more challenging and realistic but under-explored IL scenario, named Versatile Incremental Learning (VIL).
We propose a simple yet effective IL framework, named Incremental with Shift cONtrol (ICON).
arXiv Detail & Related papers (2024-09-17T07:44:28Z)
- More Than Catastrophic Forgetting: Integrating General Capabilities For Domain-Specific LLMs [40.54076184225558]
The performance of Large Language Models (LLMs) on general tasks decreases after they are fine-tuned on domain-specific tasks, a phenomenon known as Catastrophic Forgetting (CF).
This paper presents a challenge beyond CF for the real-world application of domain-specific LLMs, called General Capabilities Integration (GCI).
The objective of GCI is not merely to retain previously acquired general capabilities alongside new domain knowledge, but to harmonize and utilize both sets of skills in a cohesive manner to enhance performance on domain-specific tasks.
arXiv Detail & Related papers (2024-05-28T05:00:12Z)
- CFPL-FAS: Class Free Prompt Learning for Generalizable Face Anti-spoofing [66.6712018832575]
Domain generalization (DG) based Face Anti-Spoofing (FAS) aims to improve the model's performance on unseen domains.
We make use of large-scale VLMs like CLIP and leverage the textual feature to dynamically adjust the classifier's weights for exploring generalizable visual features.
arXiv Detail & Related papers (2024-03-21T11:58:50Z)
- MemSAC: Memory Augmented Sample Consistency for Large Scale Unsupervised Domain Adaptation [71.4942277262067]
We propose MemSAC, which exploits sample-level similarity across source and target domains to achieve discriminative transfer.
We provide in-depth analysis and insights into the effectiveness of MemSAC.
arXiv Detail & Related papers (2022-07-25T17:55:28Z)
- Learning towards Synchronous Network Memorizability and Generalizability for Continual Segmentation across Multiple Sites [52.84959869494459]
In clinical practice, a segmentation network is often required to continually learn on a sequential data stream from multiple sites.
Existing methods are usually restricted in either network memorizability on previous sites or generalizability on unseen sites.
This paper aims to tackle the problem of Synchronous Memorizability and Generalizability with a newly proposed SMG-learning framework.
arXiv Detail & Related papers (2022-06-14T13:04:36Z)
- Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.