Hard Class Rectification for Domain Adaptation
- URL: http://arxiv.org/abs/2008.03455v2
- Date: Tue, 20 Apr 2021 00:03:55 GMT
- Title: Hard Class Rectification for Domain Adaptation
- Authors: Yunlong Zhang, Changxing Jing, Huangxing Lin, Chaoqi Chen, Yue Huang,
Xinghao Ding, Yang Zou
- Abstract summary: Domain adaptation (DA) aims to transfer knowledge from a label-rich domain (source domain) to a label-scarce domain (target domain).
We propose a novel framework, called Hard Class Rectification Pseudo-labeling (HCRPL), to alleviate the hard class problem.
The proposed method is evaluated in both unsupervised domain adaptation (UDA) and semi-supervised domain adaptation (SSDA).
- Score: 36.58361356407803
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain adaptation (DA) aims to transfer knowledge from a label-rich and
related domain (source domain) to a label-scarce domain (target domain).
Pseudo-labeling has recently been widely explored and used in DA. However, this
line of research is still limited by the inaccuracy of pseudo-labels. In this
paper, we reveal an interesting observation: target samples belonging to the
classes with larger domain shift are more easily misclassified than samples of
other classes. We call these hard classes; they deteriorate the performance of
DA and restrict its applications. We propose a novel framework, called Hard
Class Rectification Pseudo-labeling (HCRPL), to alleviate the hard class
problem from two aspects. First, since it is difficult to identify which target
samples belong to hard classes, we propose a simple yet effective scheme, named
Adaptive Prediction Calibration (APC), to calibrate the predictions of the
target samples according to the difficulty of each class. Second, we further
consider that the predictions of target samples belonging to hard classes are
vulnerable to perturbations. To prevent these samples from being easily
misclassified, we introduce Temporal Ensembling (TE) and Self-Ensembling (SE)
to obtain consistent predictions. The proposed method is evaluated in both
unsupervised domain adaptation (UDA) and semi-supervised domain adaptation
(SSDA). The experimental results on several real-world cross-domain benchmarks,
including ImageCLEF, Office-31, and Office-Home, substantiate the superiority
of the proposed method.
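The abstract does not give exact formulas, but the two core ideas, class-wise calibration of predictions and temporal averaging of predictions, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names, the form of the per-class difficulty weights, and the EMA momentum value are all hypothetical.

```python
import numpy as np

def adaptive_prediction_calibration(probs, class_difficulty):
    """Reweight softmax predictions by a per-class difficulty factor.

    probs: (N, C) softmax outputs for N target samples over C classes.
    class_difficulty: (C,) weights, larger for harder classes (how the
    paper measures difficulty is not specified here; this is a stand-in).
    Upweighting hard classes counteracts their systematically low
    predicted probabilities; renormalizing keeps rows valid distributions.
    """
    calibrated = probs * class_difficulty[None, :]
    return calibrated / calibrated.sum(axis=1, keepdims=True)

def temporal_ensemble(prev_ema, current_probs, momentum=0.9):
    """Exponential moving average of predictions across training epochs.

    Smoothing per-epoch predictions stabilizes pseudo-labels for samples
    whose predictions are sensitive to perturbations (the hard classes).
    """
    return momentum * prev_ema + (1.0 - momentum) * current_probs
```

For example, with two classes where class 1 is marked harder (weight 2.0), a prediction of [0.7, 0.3] is calibrated toward class 1, and the ensembled prediction drifts gradually from its previous average toward the current epoch's output.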
Related papers
- Bi-discriminator Domain Adversarial Neural Networks with Class-Level
Gradient Alignment [87.8301166955305]
We propose a novel bi-discriminator domain adversarial neural network with class-level gradient alignment.
BACG resorts to gradient signals and second-order probability estimation for better alignment of domain distributions.
In addition, inspired by contrastive learning, we develop a memory bank-based variant, i.e. Fast-BACG, which can greatly shorten the training process.
arXiv Detail & Related papers (2023-10-21T09:53:17Z) - Ambiguity-Resistant Semi-Supervised Learning for Dense Object Detection [98.66771688028426]
We propose Ambiguity-Resistant Semi-supervised Learning (ARSL) for one-stage detectors.
Joint-Confidence Estimation (JCE) is proposed to quantify the classification and localization quality of pseudo labels.
ARSL effectively mitigates the ambiguities and achieves state-of-the-art SSOD performance on MS COCO and PASCAL VOC.
arXiv Detail & Related papers (2023-03-27T07:46:58Z) - Imbalanced Open Set Domain Adaptation via Moving-threshold Estimation
and Gradual Alignment [58.56087979262192]
Open Set Domain Adaptation (OSDA) aims to transfer knowledge from a well-labeled source domain to an unlabeled target domain.
The performance of OSDA methods degrades drastically under intra-domain class imbalance and inter-domain label shift.
We propose Open-set Moving-threshold Estimation and Gradual Alignment (OMEGA) to alleviate the negative effects raised by label shift.
arXiv Detail & Related papers (2023-03-08T05:55:02Z) - Domain Adaptation under Open Set Label Shift [39.424134505152544]
We introduce the problem of domain adaptation under Open Set Label Shift (OSLS).
OSLS subsumes domain adaptation under label shift and Positive-Unlabeled (PU) learning.
We propose practical methods for both tasks that leverage black-box predictors.
arXiv Detail & Related papers (2022-07-26T17:09:48Z) - Prototype-Guided Continual Adaptation for Class-Incremental Unsupervised
Domain Adaptation [76.01237757257864]
This paper studies a new, practical but challenging problem, called Class-Incremental Unsupervised Domain Adaptation (CI-UDA).
The labeled source domain contains all classes, but the classes in the unlabeled target domain increase sequentially.
We propose a novel Prototype-guided Continual Adaptation (ProCA) method, consisting of two solution strategies.
arXiv Detail & Related papers (2022-07-22T03:22:36Z) - Loss-based Sequential Learning for Active Domain Adaptation [14.366263836801485]
This paper introduces sequential learning that considers both domain type (source/target) and labeledness (labeled/unlabeled).
Our model significantly outperforms previous methods as well as baseline models in various benchmark datasets.
arXiv Detail & Related papers (2022-04-25T14:00:04Z) - Cross-Domain Gradient Discrepancy Minimization for Unsupervised Domain
Adaptation [22.852237073492894]
Unsupervised Domain Adaptation (UDA) aims to generalize the knowledge learned from a well-labeled source domain to an unlabeled target domain.
We propose a Cross-domain Gradient Discrepancy Minimization (CGDM) method which explicitly minimizes the discrepancy between the gradients generated by source samples and target samples.
In order to compute the gradient signal of target samples, we further obtain target pseudo labels through clustering-based self-supervised learning.
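The gradient-discrepancy idea in the CGDM summary above can be illustrated with a toy example: compare the loss gradients produced by a source batch and a pseudo-labeled target batch. This is a simplified sketch using logistic regression and cosine distance; the paper's actual model, loss, and discrepancy measure may differ.

```python
import numpy as np

def logistic_grad(w, X, y):
    """Gradient of mean binary cross-entropy w.r.t. weights w."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

def gradient_discrepancy(w, Xs, ys, Xt, yt_pseudo):
    """Cosine distance between source and target loss gradients.

    Target labels are pseudo-labels; a CGDM-style objective would add a
    penalty of this form so that both domains drive the model in a
    consistent direction (sketch, not the paper's exact loss).
    """
    gs = logistic_grad(w, Xs, ys)
    gt = logistic_grad(w, Xt, yt_pseudo)
    return 1.0 - gs @ gt / (np.linalg.norm(gs) * np.linalg.norm(gt) + 1e-12)
```

When source and target batches induce identical gradients, the discrepancy is zero; it grows as the two domains pull the parameters in different directions.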
arXiv Detail & Related papers (2021-06-08T07:35:40Z) - Selective Pseudo-Labeling with Reinforcement Learning for
Semi-Supervised Domain Adaptation [116.48885692054724]
We propose a reinforcement learning based selective pseudo-labeling method for semi-supervised domain adaptation.
We develop a deep Q-learning model to select both accurate and representative pseudo-labeled instances.
Our proposed method is evaluated on several benchmark datasets for SSDA, and demonstrates superior performance to all the comparison methods.
arXiv Detail & Related papers (2020-12-07T03:37:38Z) - A Balanced and Uncertainty-aware Approach for Partial Domain Adaptation [142.31610972922067]
This work addresses the unsupervised domain adaptation problem, especially in the case of class labels in the target domain being only a subset of those in the source domain.
We build on domain adversarial learning and propose a novel domain adaptation method BA$3$US with two new techniques termed Balanced Adversarial Alignment (BAA) and Adaptive Uncertainty Suppression (AUS).
Experimental results on multiple benchmarks demonstrate our BA$3$US surpasses state-of-the-arts for partial domain adaptation tasks.
arXiv Detail & Related papers (2020-03-05T11:37:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences.