Reusing the Task-specific Classifier as a Discriminator:
Discriminator-free Adversarial Domain Adaptation
- URL: http://arxiv.org/abs/2204.03838v1
- Date: Fri, 8 Apr 2022 04:40:18 GMT
- Title: Reusing the Task-specific Classifier as a Discriminator:
Discriminator-free Adversarial Domain Adaptation
- Authors: Lin Chen, Huaian Chen, Zhixiang Wei, Xin Jin, Xiao Tan, Yi Jin, Enhong
Chen
- Abstract summary: We introduce a discriminator-free adversarial learning network (DALN) for unsupervised domain adaptation (UDA).
DALN achieves explicit domain alignment and category distinguishment through a unified objective.
DALN compares favorably against the existing state-of-the-art (SOTA) methods on a variety of public datasets.
- Score: 55.27563366506407
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial learning has achieved remarkable performances for unsupervised
domain adaptation (UDA). Existing adversarial UDA methods typically adopt an
additional discriminator to play the min-max game with a feature extractor.
However, most of these methods fail to effectively leverage the predicted
discriminative information, which causes mode collapse in the generator. In this
work, we address this problem from a different perspective and design a simple
yet effective adversarial paradigm in the form of a discriminator-free
adversarial learning network (DALN), wherein the category classifier is reused
as a discriminator, which achieves explicit domain alignment and category
distinguishment through a unified objective, enabling the DALN to leverage the
predicted discriminative information for sufficient feature alignment.
Specifically, we introduce a Nuclear-norm Wasserstein discrepancy (NWD) that
provides definite guidance for discrimination. The NWD can be
coupled with the classifier to serve as a discriminator satisfying the
K-Lipschitz constraint without the requirements of additional weight clipping
or gradient penalty strategy. Without bells and whistles, DALN compares
favorably against the existing state-of-the-art (SOTA) methods on a variety of
public datasets. Moreover, as a plug-and-play technique, NWD can be directly
used as a generic regularizer to benefit existing UDA algorithms. Code is
available at https://github.com/xiaoachen98/DALN.
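To make the reused-classifier idea concrete, the following is a minimal PyTorch sketch of an NWD-style adversarial objective in which the task classifier doubles as the critic. The gradient-reversal helper, the batch-size normalization of the nuclear norms, and the sign conventions are illustrative assumptions rather than the authors' exact implementation; the official code is at https://github.com/xiaoachen98/DALN.

```python
import torch
import torch.nn.functional as F
from torch.autograd import Function


class GradReverse(Function):
    """Gradient reversal layer: identity on the forward pass, negated
    gradient on the backward pass, so the feature extractor and the reused
    classifier play the min-max game without a separate discriminator."""

    @staticmethod
    def forward(ctx, x, coeff):
        ctx.coeff = coeff
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.coeff * grad_output, None


def nwd_loss(logits_src, logits_tgt):
    """Nuclear-norm Wasserstein discrepancy between the source and target
    prediction matrices (batch x classes). The nuclear norm (sum of singular
    values) of the softmax outputs acts as the critic score; per the paper,
    this keeps the reused classifier K-Lipschitz without weight clipping or
    a gradient penalty. The batch-size normalization is an assumption."""
    p_src = F.softmax(logits_src, dim=1)
    p_tgt = F.softmax(logits_tgt, dim=1)
    return (torch.linalg.matrix_norm(p_src, ord='nuc')
            - torch.linalg.matrix_norm(p_tgt, ord='nuc')) / logits_src.size(0)


def train_step(extractor, classifier, x_src, y_src, x_tgt, optimizer, coeff=1.0):
    """One hypothetical training step: supervised classification on labeled
    source data plus the adversarial NWD term through the reversal layer."""
    f_src, f_tgt = extractor(x_src), extractor(x_tgt)
    cls_loss = F.cross_entropy(classifier(f_src), y_src)
    # The classifier (as critic) maximizes the discrepancy; the reversal
    # layer makes the extractor minimize it, aligning target features
    # with source features.
    w = nwd_loss(classifier(GradReverse.apply(f_src, coeff)),
                 classifier(GradReverse.apply(f_tgt, coeff)))
    loss = cls_loss - w
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the adversarial term depends only on the classifier's predictions, the same nwd_loss can in principle be dropped into an existing UDA objective as the plug-and-play regularizer the abstract describes.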
Related papers
- Discriminator-free Unsupervised Domain Adaptation for Multi-label Image
Classification [11.825795835537324]
A discriminator-free adversarial-based Unsupervised Domain Adaptation (UDA) method for Multi-Label Image Classification (MLIC) is proposed.
The proposed method is evaluated on several multi-label image datasets covering three different types of domain shift.
arXiv Detail & Related papers (2023-01-25T14:45:13Z)
- Mitigating Algorithmic Bias with Limited Annotations [65.060639928772]
When sensitive attributes are not disclosed or available, a small part of the training data must be manually annotated to mitigate bias.
We propose Active Penalization Of Discrimination (APOD), an interactive framework to guide the limited annotations towards maximally eliminating the effect of algorithmic bias.
APOD shows comparable performance to fully annotated bias mitigation, which demonstrates that APOD could benefit real-world applications when sensitive information is limited.
arXiv Detail & Related papers (2022-07-20T16:31:19Z)
- A Semi-Supervised Adaptive Discriminative Discretization Method Improving
Discrimination Power of Regularized Naive Bayes [0.48342038441006785]
We propose a semi-supervised adaptive discriminative discretization framework for naive Bayes.
It better estimates the data distribution by utilizing both labeled and unlabeled data through pseudo-labeling techniques.
The proposed method also significantly reduces the information loss during discretization by utilizing an adaptive discriminative discretization scheme.
arXiv Detail & Related papers (2021-11-22T04:36:40Z)
- Re-using Adversarial Mask Discriminators for Test-time Training under
Distribution Shifts [10.647970046084916]
We argue that training stable discriminators produces expressive loss functions that we can re-use at inference to detect and correct segmentation mistakes.
We show that we can combine discriminators with image reconstruction costs (via decoders) to further improve the model.
Our method is simple and improves the test-time performance of pre-trained GANs.
arXiv Detail & Related papers (2021-08-26T17:31:46Z)
- Exploring Dropout Discriminator for Domain Adaptation [27.19677042654432]
Adaptation of a classifier to new domains is one of the challenging problems in machine learning.
We propose a curriculum-based dropout discriminator that gradually increases the variance of the sample-based distribution.
An ensemble of discriminators helps the model to learn the data distribution efficiently.
arXiv Detail & Related papers (2021-07-09T06:11:34Z)
- Fairness via Representation Neutralization [60.90373932844308]
We propose a new mitigation technique, namely, Representation Neutralization for Fairness (RNF).
RNF achieves fairness by debiasing only the task-specific classification head of DNN models.
Experimental results over several benchmark datasets demonstrate that our RNF framework effectively reduces discrimination of DNN models.
arXiv Detail & Related papers (2021-06-23T22:26:29Z)
- Re-energizing Domain Discriminator with Sample Relabeling for Adversarial
Domain Adaptation [88.86865069583149]
Unsupervised domain adaptation (UDA) methods exploit domain adversarial training to align features and reduce the domain gap.
In this work, we propose an efficient optimization strategy named Re-enforceable Adversarial Domain Adaptation (RADA).
RADA aims to re-energize the domain discriminator during the training by using dynamic domain labels.
arXiv Detail & Related papers (2021-03-22T08:32:55Z)
- Training GANs with Stronger Augmentations via Contrastive Discriminator [80.8216679195]
We introduce a contrastive representation learning scheme into the GAN discriminator, coined ContraD.
This "fusion" enables the discriminators to work with much stronger augmentations without increasing their training instability.
Our experimental results show that GANs with ContraD consistently improve FID and IS compared to other recent techniques incorporating data augmentations.
arXiv Detail & Related papers (2021-03-17T16:04:54Z)
- Towards Uncovering the Intrinsic Data Structures for Unsupervised Domain
Adaptation using Structurally Regularized Deep Clustering [119.88565565454378]
Unsupervised domain adaptation (UDA) aims to learn classification models that make predictions for unlabeled data on a target domain.
We propose a hybrid model of Structurally Regularized Deep Clustering, which integrates the regularized discriminative clustering of target data with a generative one.
Our proposed H-SRDC outperforms all the existing methods under both the inductive and transductive settings.
arXiv Detail & Related papers (2020-12-08T08:52:00Z)