Reusing the Task-specific Classifier as a Discriminator:
Discriminator-free Adversarial Domain Adaptation
- URL: http://arxiv.org/abs/2204.03838v1
- Date: Fri, 8 Apr 2022 04:40:18 GMT
- Title: Reusing the Task-specific Classifier as a Discriminator:
Discriminator-free Adversarial Domain Adaptation
- Authors: Lin Chen, Huaian Chen, Zhixiang Wei, Xin Jin, Xiao Tan, Yi Jin, Enhong
Chen
- Abstract summary: We introduce a discriminator-free adversarial learning network (DALN) for unsupervised domain adaptation (UDA).
DALN achieves explicit domain alignment and category distinguishment through a unified objective.
DALN compares favorably against the existing state-of-the-art (SOTA) methods on a variety of public datasets.
- Score: 55.27563366506407
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial learning has achieved remarkable performance for unsupervised
domain adaptation (UDA). Existing adversarial UDA methods typically adopt an
additional discriminator to play a min-max game with the feature extractor.
However, most of these methods fail to effectively leverage the predicted
discriminative information, which causes mode collapse for the generator. In this
work, we address this problem from a different perspective and design a simple
yet effective adversarial paradigm in the form of a discriminator-free
adversarial learning network (DALN), wherein the category classifier is reused
as a discriminator. This design achieves explicit domain alignment and category
distinguishment through a unified objective, enabling DALN to leverage the
predicted discriminative information for sufficient feature alignment.
Specifically, we introduce a Nuclear-norm Wasserstein discrepancy (NWD) that
provides definite guidance for performing discrimination. The NWD can be
coupled with the classifier to serve as a discriminator satisfying the
K-Lipschitz constraint without requiring additional weight clipping or
gradient penalty strategies. Without bells and whistles, DALN compares
favorably against existing state-of-the-art (SOTA) methods on a variety of
public datasets. Moreover, as a plug-and-play technique, NWD can be directly
used as a generic regularizer to benefit existing UDA algorithms. Code is
available at https://github.com/xiaoachen98/DALN.
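
For concreteness, below is a minimal PyTorch sketch of how an NWD term of the kind described in the abstract could be computed, assuming it is the difference of batch-normalized nuclear norms of the classifier's softmax prediction matrices on a source batch and a target batch. The function name and normalization are illustrative assumptions, not the authors' reference implementation (see the linked repository for that).

```python
# Illustrative sketch of a Nuclear-norm Wasserstein Discrepancy (NWD) term,
# assuming it is computed from the task classifier's softmax predictions on
# a source batch and a target batch (names and normalization are assumptions,
# not the reference implementation).
import torch
import torch.nn.functional as F


def nuclear_wasserstein_discrepancy(source_logits: torch.Tensor,
                                    target_logits: torch.Tensor) -> torch.Tensor:
    """Difference of batch-normalized nuclear norms of prediction matrices."""
    p_s = F.softmax(source_logits, dim=1)  # (B_s, K) source class probabilities
    p_t = F.softmax(target_logits, dim=1)  # (B_t, K) target class probabilities
    # Nuclear norm = sum of singular values of the (batch x classes) matrix.
    nwd_s = torch.linalg.matrix_norm(p_s, ord='nuc') / p_s.shape[0]
    nwd_t = torch.linalg.matrix_norm(p_t, ord='nuc') / p_t.shape[0]
    return nwd_s - nwd_t
```

In an adversarial training loop of this kind, the classifier head would be trained to maximize this quantity while the feature extractor minimizes it (for example via a gradient reversal layer), which is how the task classifier can double as the discriminator without a separate discriminator network.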