Prototype-Guided Continual Adaptation for Class-Incremental Unsupervised
Domain Adaptation
- URL: http://arxiv.org/abs/2207.10856v1
- Date: Fri, 22 Jul 2022 03:22:36 GMT
- Title: Prototype-Guided Continual Adaptation for Class-Incremental Unsupervised
Domain Adaptation
- Authors: Hongbin Lin, Yifan Zhang, Zhen Qiu, Shuaicheng Niu, Chuang Gan, Yanxia
Liu, Mingkui Tan
- Abstract summary: This paper studies a new, practical but challenging problem, called Class-Incremental Unsupervised Domain Adaptation (CI-UDA).
The labeled source domain contains all classes, but the classes in the unlabeled target domain increase sequentially.
We propose a novel Prototype-guided Continual Adaptation (ProCA) method, consisting of two solution strategies.
- Score: 76.01237757257864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper studies a new, practical but challenging problem, called
Class-Incremental Unsupervised Domain Adaptation (CI-UDA), where the labeled
source domain contains all classes, but the classes in the unlabeled target
domain increase sequentially. This problem is challenging due to two
difficulties. First, source and target label sets are inconsistent at each time
step, which makes it difficult to conduct accurate domain alignment. Second,
previous target classes are unavailable in the current step, resulting in the
forgetting of previous knowledge. To address this problem, we propose a novel
Prototype-guided Continual Adaptation (ProCA) method, consisting of two
solution strategies. 1) Label prototype identification: we identify target
label prototypes by detecting shared classes with cumulative prediction
probabilities of target samples. 2) Prototype-based alignment and replay: based
on the identified label prototypes, we align both domains and enforce the model
to retain previous knowledge. With these two strategies, ProCA is able to adapt
the source model to a class-incremental unlabeled target domain effectively.
Extensive experiments demonstrate the effectiveness and superiority of ProCA in
resolving CI-UDA. The source code is available at
https://github.com/Hongbin98/ProCA.git
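The abstract only outlines the two strategies, so the following is a minimal, hypothetical PyTorch sketch of how they might look. The function names, the shared-class threshold (a fraction of uniform probability mass), the assumed model interface, and the squared-distance replay loss are all assumptions made for this sketch, not the authors' implementation; see the linked repository for the actual code.

```python
# Hypothetical sketch of ProCA's two strategies (not the authors' code).
# Assumes `model(x)` returns a (features, logits) pair.
import torch
import torch.nn.functional as F

@torch.no_grad()
def identify_shared_classes(model, target_loader, num_classes, ratio=0.5):
    """Strategy 1 (label prototype identification): accumulate prediction
    probabilities over all target samples; classes holding a sizeable share
    of the cumulative probability mass are treated as shared classes."""
    cum_probs = torch.zeros(num_classes)
    feat_sums, counts = None, torch.zeros(num_classes)
    for x in target_loader:
        feats, logits = model(x)              # assumed model interface
        probs = F.softmax(logits, dim=1)      # (B, C)
        cum_probs += probs.sum(dim=0)
        preds = probs.argmax(dim=1)
        if feat_sums is None:
            feat_sums = torch.zeros(num_classes, feats.size(1))
        for c in preds.unique():
            mask = preds == c
            feat_sums[c] += feats[mask].sum(dim=0)
            counts[c] += mask.sum()
    cum_probs = cum_probs / cum_probs.sum()   # normalize to a distribution
    # Assumed rule: a class is "shared" if it holds at least `ratio` of the
    # uniform probability mass 1/C across the whole target set.
    shared = (cum_probs > ratio / num_classes).nonzero().squeeze(1)
    # Label prototypes: mean feature of samples pseudo-labeled as each shared class.
    protos = feat_sums[shared] / counts[shared].clamp(min=1).unsqueeze(1)
    return shared, protos

def prototype_replay_loss(feats, pseudo_labels, old_protos, old_proto_labels):
    """Strategy 2 (replay part): pull current target features of previously
    seen classes toward their stored prototypes to retain old knowledge."""
    loss = feats.new_zeros(())
    for proto, c in zip(old_protos, old_proto_labels):
        mask = pseudo_labels == c
        if mask.any():
            loss = loss + (feats[mask] - proto).pow(2).sum(dim=1).mean()
    return loss
```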
Related papers
- Robust Target Training for Multi-Source Domain Adaptation [110.77704026569499]
We propose a novel Bi-level Optimization based Robust Target Training (BORT^2) method for MSDA.
Our proposed method achieves state-of-the-art performance on three MSDA benchmarks, including the large-scale DomainNet dataset.
arXiv Detail & Related papers (2022-10-04T15:20:01Z)
- Prior Knowledge Guided Unsupervised Domain Adaptation [82.9977759320565]
We propose a Knowledge-guided Unsupervised Domain Adaptation (KUDA) setting where prior knowledge about the target class distribution is available.
In particular, we consider two specific types of prior knowledge about the class distribution in the target domain: Unary Bound and Binary Relationship.
We propose a rectification module that uses such prior knowledge to refine model generated pseudo labels.
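The summary does not describe the rectification module itself; below is a toy illustration of how one specific unary bound (an upper bound on a class's target-domain proportion) could be used to rectify pseudo labels. Everything here, including the demote-to-second-best rule, is an assumption for illustration, not KUDA's actual module.

```python
# Toy illustration of rectifying pseudo labels with a unary bound
# (an upper bound on a class's proportion in the target domain).
import numpy as np

def rectify_with_unary_bound(probs, cls, upper_bound):
    """probs: (N, C) predicted probabilities; cls: class index whose
    target-domain proportion is known to be at most `upper_bound`.
    Returns rectified pseudo labels of shape (N,)."""
    pseudo = probs.argmax(axis=1)
    idx = np.where(pseudo == cls)[0]
    max_allowed = int(upper_bound * len(probs))
    if len(idx) > max_allowed:
        # Keep only the most confident predictions for `cls`; reassign the
        # rest to their second-best class, so the prior bound is respected.
        conf = probs[idx, cls]
        demote = idx[np.argsort(conf)[: len(idx) - max_allowed]]
        second_best = np.argsort(probs[demote], axis=1)[:, -2]
        pseudo[demote] = second_best
    return pseudo
```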
arXiv Detail & Related papers (2022-07-18T18:41:36Z)
- Prototypical Contrast Adaptation for Domain Adaptive Semantic Segmentation [52.63046674453461]
Prototypical Contrast Adaptation (ProCA) is a contrastive learning method for unsupervised domain adaptive semantic segmentation.
ProCA incorporates inter-class information into class-wise prototypes, and adopts the class-centered distribution alignment for adaptation.
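As a rough sketch of the two ingredients named here, class-wise prototypes and class-centered alignment, one might write the following; the InfoNCE-style loss, the temperature, and all names are illustrative assumptions, not this paper's exact objective.

```python
# Hedged sketch: class-wise prototypes plus class-centered contrast.
import torch
import torch.nn.functional as F

def class_prototypes(feats, labels, num_classes):
    """Class-wise prototypes: the mean feature of each class."""
    protos = torch.zeros(num_classes, feats.size(1))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = feats[mask].mean(dim=0)
    return protos

def prototype_contrast_loss(pix_feats, pix_labels, prototypes, tau=0.1):
    """Attract each pixel feature to its own class prototype and repel it
    from the others (an InfoNCE-style loss over class prototypes)."""
    logits = F.normalize(pix_feats, dim=1) @ F.normalize(prototypes, dim=1).t() / tau
    return F.cross_entropy(logits, pix_labels)
```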
arXiv Detail & Related papers (2022-07-14T04:54:26Z)
- Cycle Label-Consistent Networks for Unsupervised Domain Adaptation [57.29464116557734]
Domain adaptation aims to leverage a labeled source domain to learn a classifier for the unlabeled target domain with a different distribution.
We propose a simple yet efficient domain adaptation method, i.e. the Cycle Label-Consistent Network (CLCN), which exploits the cycle consistency of classification labels.
We demonstrate the effectiveness of our approach on the MNIST-USPS-SVHN, Office-31, Office-Home and ImageCLEF-DA benchmarks.
arXiv Detail & Related papers (2022-05-27T13:09:08Z)
- Cross-Domain Gradient Discrepancy Minimization for Unsupervised Domain Adaptation [22.852237073492894]
Unsupervised Domain Adaptation (UDA) aims to generalize the knowledge learned from a well-labeled source domain to an unlabeled target domain.
We propose a cross-domain gradient discrepancy minimization (CGDM) method which explicitly minimizes the discrepancy between the gradients generated by source samples and target samples.
To compute the gradient signal of target samples, we obtain target pseudo labels through clustering-based self-supervised learning.
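A hedged sketch of the core idea, comparing the gradients induced by source samples and pseudo-labeled target samples: the cosine-similarity form of the discrepancy and the model interface are assumptions here, not necessarily the paper's exact formulation.

```python
# Sketch of cross-domain gradient discrepancy: penalize disagreement
# between source-induced and target-induced gradients.
# Assumes `model(x)` returns logits; `tgt_pseudo_y` would come from the
# clustering-based self-supervised step described in the summary.
import torch
import torch.nn.functional as F

def gradient_discrepancy(model, src_x, src_y, tgt_x, tgt_pseudo_y):
    params = [p for p in model.parameters() if p.requires_grad]
    src_loss = F.cross_entropy(model(src_x), src_y)
    tgt_loss = F.cross_entropy(model(tgt_x), tgt_pseudo_y)
    g_src = torch.autograd.grad(src_loss, params, create_graph=True)
    g_tgt = torch.autograd.grad(tgt_loss, params, create_graph=True)
    # 1 - cosine similarity, averaged over parameter tensors.
    sims = [F.cosine_similarity(a.flatten(), b.flatten(), dim=0)
            for a, b in zip(g_src, g_tgt)]
    return 1.0 - torch.stack(sims).mean()
```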
arXiv Detail & Related papers (2021-06-08T07:35:40Z)
- Semi-Supervised Domain Adaptation with Prototypical Alignment and Consistency Learning [86.6929930921905]
This paper studies how much labeling a few target samples can help address domain shift.
To explore the full potential of these labeled target samples (landmarks), we incorporate a prototypical alignment (PA) module which calculates a target prototype for each class from the landmarks.
Specifically, we severely perturb the labeled images, making PA non-trivial to achieve and thus promoting model generalizability.
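A minimal sketch of the PA idea as described above, assuming at least one landmark per class and a cross-entropy-over-similarities alignment loss; the loss form and all names are illustrative, and the paper's perturbation-based consistency learning is not shown.

```python
# Sketch: per-class target prototypes from labeled landmarks, then an
# alignment loss pulling unlabeled target features to their prototype.
import torch
import torch.nn.functional as F

def landmark_prototypes(landmark_feats, landmark_labels, num_classes):
    """One prototype per class, averaged over that class's landmarks
    (assumes every class has at least one landmark)."""
    protos = torch.stack([
        landmark_feats[landmark_labels == c].mean(dim=0)
        for c in range(num_classes)
    ])
    return F.normalize(protos, dim=1)

def alignment_loss(feats, pseudo_labels, protos, tau=0.05):
    """Cross-entropy over similarities to prototypes, aligning each feature
    with the prototype of its (pseudo) class."""
    logits = F.normalize(feats, dim=1) @ protos.t() / tau
    return F.cross_entropy(logits, pseudo_labels)
```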
arXiv Detail & Related papers (2021-04-19T08:46:08Z)
- Hard Class Rectification for Domain Adaptation [36.58361356407803]
Domain adaptation (DA) aims to transfer knowledge from a label-rich domain (source domain) to a label-scarce domain (target domain).
We propose a novel framework, called Hard Class Rectification Pseudo-labeling (HCRPL), to alleviate the hard class problem.
The proposed method is evaluated in both unsupervised domain adaptation (UDA) and semi-supervised domain adaptation (SSDA) settings.
arXiv Detail & Related papers (2020-08-08T06:21:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.