Bi-Classifier Determinacy Maximization for Unsupervised Domain
Adaptation
- URL: http://arxiv.org/abs/2012.06995v1
- Date: Sun, 13 Dec 2020 07:55:39 GMT
- Title: Bi-Classifier Determinacy Maximization for Unsupervised Domain
Adaptation
- Authors: Shuang Li, Fangrui Lv, Binhui Xie, Chi Harold Liu, Jian Liang, Chen
Qin
- Abstract summary: We present Bi-Classifier Determinacy Maximization (BCDM) to tackle this problem.
Motivated by the observation that target samples cannot always be separated distinctly by the decision boundary, we design a novel classifier determinacy disparity metric.
BCDM can generate discriminative representations by encouraging target predictive outputs to be consistent and determined.
- Score: 24.9073164947711
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation addresses the problem of transferring
knowledge from a well-labelled source domain to an unlabelled target domain.
Recently, adversarial learning with bi-classifiers has been proven effective in
pushing cross-domain distributions close. Prior approaches typically leverage
the disagreement between the two classifiers to learn transferable representations;
however, they often neglect the classifier determinacy in the target domain,
which could result in a lack of feature discriminability. In this paper, we
present a simple yet effective method, namely Bi-Classifier Determinacy
Maximization (BCDM), to tackle this problem. Motivated by the observation that
target samples cannot always be separated distinctly by the decision boundary,
here in the proposed BCDM, we design a novel classifier determinacy disparity
(CDD) metric, which formulates classifier discrepancy as the class relevance of
distinct target predictions and implicitly introduces constraint on the target
feature discriminability. To this end, BCDM generates discriminative
representations by encouraging target predictive outputs to be consistent and
determined, while preserving the diversity of predictions in an adversarial
manner. Furthermore, the properties of CDD as well as the theoretical
guarantees of BCDM's generalization bound are both elaborated. Extensive
experiments show that BCDM compares favorably against the existing
state-of-the-art domain adaptation methods.
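The abstract describes CDD as formulating classifier discrepancy via the class relevance of the two classifiers' distinct target predictions. As an illustration only (the precise CDD definition is given in the paper), here is a minimal sketch assuming the disparity is the total cross-class probability mass, i.e. the sum of products p1_i * p2_j over mismatched classes i != j, which equals one minus the inner product of the two softmax outputs:

```python
import numpy as np

def classifier_determinacy_disparity(p1, p2):
    """Hypothetical sketch of a bi-classifier determinacy disparity.

    p1, p2: softmax outputs of shape (batch, num_classes) from the
    two classifiers on the same target samples.

    Disparity per sample = sum over i != j of p1[i] * p2[j]
                         = 1 - <p1, p2>.
    It is zero only when both classifiers agree AND are fully
    determined (one-hot), so minimizing it on target samples would
    encourage consistent, confident predictions.
    """
    agreement = np.sum(p1 * p2, axis=1)  # per-sample inner product
    return 1.0 - agreement               # per-sample cross-class mass

# Agreeing one-hot predictions give zero disparity; uniform
# predictions over C classes give 1 - 1/C even when identical,
# penalizing indeterminacy as well as disagreement.
p_onehot = np.array([[1.0, 0.0, 0.0]])
print(classifier_determinacy_disparity(p_onehot, p_onehot))  # [0.]
p_uniform = np.full((1, 4), 0.25)
print(classifier_determinacy_disparity(p_uniform, p_uniform))  # [0.75]
```

In an adversarial scheme of the kind the abstract outlines, the classifiers would maximize this quantity on target data while the feature extractor minimizes it; the exact losses and training procedure are in the paper.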
Related papers
- Bi-discriminator Domain Adversarial Neural Networks with Class-Level
Gradient Alignment [87.8301166955305]
We propose a novel bi-discriminator domain adversarial neural network with class-level gradient alignment.
BACG resorts to gradient signals and second-order probability estimation for better alignment of domain distributions.
In addition, inspired by contrastive learning, we develop a memory bank-based variant, i.e. Fast-BACG, which can greatly shorten the training process.
arXiv Detail & Related papers (2023-10-21T09:53:17Z) - Rethinking Domain Generalization: Discriminability and Generalizability [31.967801550742312]
Domain generalization (DG) endeavors to develop robust models that possess strong generalizability while preserving excellent discriminability.
We present a novel framework, Discriminative Microscopic Distribution Alignment (DMDA).
DMDA incorporates two core components: Selective Channel Pruning (SCP) and Micro-level Distribution Alignment (MDA).
arXiv Detail & Related papers (2023-09-28T14:45:54Z) - Dirichlet-based Uncertainty Calibration for Active Domain Adaptation [33.33529827699169]
Active domain adaptation (DA) aims to maximally boost the model adaptation on a new target domain by actively selecting limited target data to annotate.
Traditional active learning methods may be less effective since they do not consider the domain shift issue.
We propose a Dirichlet-based Uncertainty (DUC) approach for active DA, which simultaneously achieves the mitigation of miscalibration and the selection of informative target samples.
arXiv Detail & Related papers (2023-02-27T14:33:29Z) - Learning Unbiased Transferability for Domain Adaptation by Uncertainty
Modeling [107.24387363079629]
Domain adaptation aims to transfer knowledge from a labeled source domain to an unlabeled or a less labeled but related target domain.
Due to the imbalance between the amount of annotated data in the source and target domains, only the target distribution is aligned to the source domain.
We propose a non-intrusive Unbiased Transferability Estimation Plug-in (UTEP) by modeling the uncertainty of a discriminator in adversarial-based DA methods to optimize unbiased transfer.
arXiv Detail & Related papers (2022-06-02T21:58:54Z) - Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z) - Generalizable Representation Learning for Mixture Domain Face
Anti-Spoofing [53.82826073959756]
Face anti-spoofing approaches based on domain generalization (DG) have drawn growing attention due to their robustness for unseen scenarios.
We propose domain dynamic adjustment meta-learning (D2AM) without using domain labels.
arXiv Detail & Related papers (2021-05-06T06:04:59Z) - Margin Preserving Self-paced Contrastive Learning Towards Domain
Adaptation for Medical Image Segmentation [51.93711960601973]
We propose a novel margin preserving self-paced contrastive Learning model for cross-modal medical image segmentation.
With the guidance of progressively refined semantic prototypes, a novel margin preserving contrastive loss is proposed to boost the discriminability of embedded representation space.
Experiments on cross-modal cardiac segmentation tasks demonstrate that MPSCL significantly improves semantic segmentation performance.
arXiv Detail & Related papers (2021-03-15T15:23:10Z) - Unsupervised Domain Adaptation via Discriminative Manifold Propagation [26.23123292060868]
Unsupervised domain adaptation is effective in leveraging rich information from a labeled source domain to an unlabeled target domain.
The proposed method can be used to tackle a series of variants of domain adaptation problems, including both vanilla and partial settings.
arXiv Detail & Related papers (2020-08-23T12:31:37Z) - Learning transferable and discriminative features for unsupervised
domain adaptation [6.37626180021317]
Unsupervised domain adaptation is able to overcome this challenge by transferring knowledge from a labeled source domain to an unlabeled target domain.
In this paper, a novel method called learning TransFerable and Discriminative Features for unsupervised domain adaptation (TLearning) is proposed to optimize these two objectives simultaneously.
Comprehensive experiments are conducted on five real-world datasets and the results verify the effectiveness of the proposed method.
arXiv Detail & Related papers (2020-03-26T03:15:09Z) - Contradictory Structure Learning for Semi-supervised Domain Adaptation [67.89665267469053]
Current adversarial adaptation methods attempt to align the cross-domain features.
Two challenges remain unsolved: 1) the conditional distribution mismatch and 2) the bias of the decision boundary towards the source domain.
We propose a novel framework for semi-supervised domain adaptation by unifying the learning of opposite structures.
arXiv Detail & Related papers (2020-02-06T22:58:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.