Cross-Domain Gradient Discrepancy Minimization for Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2106.04151v1
- Date: Tue, 8 Jun 2021 07:35:40 GMT
- Title: Cross-Domain Gradient Discrepancy Minimization for Unsupervised Domain Adaptation
- Authors: Zhekai Du, Jingjing Li, Hongzu Su, Lei Zhu, Ke Lu
- Abstract summary: Unsupervised Domain Adaptation (UDA) aims to generalize the knowledge learned from a well-labeled source domain to an unlabeled target domain.
We propose a cross-domain gradient discrepancy minimization (CGDM) method which explicitly minimizes the discrepancy between the gradients generated by source samples and target samples.
To compute the gradient signal of target samples, we further obtain target pseudo labels through clustering-based self-supervised learning.
- Score: 22.852237073492894
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised Domain Adaptation (UDA) aims to generalize the knowledge learned
from a well-labeled source domain to an unlabeled target domain. Recently,
adversarial domain adaptation with two distinct classifiers (bi-classifier) has
been introduced into UDA and is effective at aligning distributions across
domains. Previous bi-classifier adversarial learning methods focus only on the
similarity between the outputs of the two classifiers. However, output
similarity cannot guarantee accuracy on target samples, i.e., target samples
may be matched to wrong categories even when the discrepancy between the two
classifiers is small. To address this issue, in this paper, we propose a
cross-domain gradient discrepancy minimization (CGDM) method which explicitly
minimizes the discrepancy between the gradients generated by source samples
and those generated by target samples. Specifically, the gradient carries a
cue to the semantic information of a target sample, so it can serve as useful
supervision to improve target accuracy. To compute the gradient signal of
target samples, we further obtain target pseudo labels through
clustering-based self-supervised learning. Extensive experiments on three
widely used UDA datasets show that our method surpasses many previous
state-of-the-art methods. Code is available at https://github.com/lijin118/CGDM.
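To make the two ingredients above concrete (clustering-style pseudo labels and a gradient discrepancy term), here is a minimal PyTorch sketch of how such an objective could be assembled. It is reconstructed from the abstract alone, not taken from the authors' repository: the centroid-based pseudo-labeling, the cosine form of the gradient discrepancy, and all function names are assumptions.

```python
import torch
import torch.nn.functional as F

def cluster_pseudo_labels(feats_t, feats_s, labels_s, num_classes):
    # Assign each target sample to the nearest source class centroid.
    # A simple stand-in for the paper's clustering-based self-supervised
    # labeling step (an assumption, not the exact procedure). Assumes
    # every class appears in the source batch.
    centroids = torch.stack([
        feats_s[labels_s == c].mean(dim=0) for c in range(num_classes)
    ])                                                         # (C, D)
    sims = F.cosine_similarity(
        feats_t.unsqueeze(1), centroids.unsqueeze(0), dim=-1)  # (N_t, C)
    return sims.argmax(dim=1)

def gradient_discrepancy(loss_s, loss_t, params):
    # Cosine distance between the gradients of the source and target
    # losses w.r.t. the classifier parameters; minimizing it pushes both
    # domains to update the classifier in the same direction. The exact
    # discrepancy metric used by CGDM is an assumption here.
    grads_s = torch.autograd.grad(loss_s, params, create_graph=True)
    grads_t = torch.autograd.grad(loss_t, params, create_graph=True)
    g_s = torch.cat([g.reshape(-1) for g in grads_s])
    g_t = torch.cat([g.reshape(-1) for g in grads_t])
    return 1.0 - F.cosine_similarity(g_s, g_t, dim=0)

def cgdm_objective(encoder, classifier, x_s, y_s, x_t, num_classes, lam=0.1):
    feats_s, feats_t = encoder(x_s), encoder(x_t)
    logits_s, logits_t = classifier(feats_s), classifier(feats_t)
    with torch.no_grad():
        y_t_pseudo = cluster_pseudo_labels(feats_t, feats_s, y_s, num_classes)
    loss_s = F.cross_entropy(logits_s, y_s)
    loss_t = F.cross_entropy(logits_t, y_t_pseudo)
    gd = gradient_discrepancy(loss_s, loss_t, list(classifier.parameters()))
    return loss_s + loss_t + lam * gd
```

In practice the pseudo labels would be refreshed during training and the discrepancy term weighted against the classification losses; the sketch only fixes the overall shape of the objective.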
Related papers
- Evidential Graph Contrastive Alignment for Source-Free Blending-Target Domain Adaptation [3.0134158269410207]
We propose a new method called Evidential Contrastive Alignment (ECA) to decouple the blending target domain and alleviate the effect of noisy target pseudo labels.
ECA outperforms other methods by considerable margins and achieves results comparable to methods that have access to domain labels or source data beforehand.
arXiv Detail & Related papers (2024-08-14T13:02:20Z) - Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to combine the strengths of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, and treats each group with a tailored learning objective.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
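For background, a plain RBF-kernel form of the MMD loss mentioned here looks roughly like the sketch below; DaC's memory-bank-based variant differs in how the two comparison sets are maintained, so this is generic background rather than that paper's implementation.

```python
import torch

def rbf_mmd(x, y, sigma=1.0):
    # Squared Maximum Mean Discrepancy with a Gaussian (RBF) kernel
    # (biased estimator, fine for illustration).
    # x: (n, d) features of one group (e.g. source-like samples),
    # y: (m, d) features of the other (e.g. target-specific samples).
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)          # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))

    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()
```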
arXiv Detail & Related papers (2022-11-12T09:21:49Z) - Low-confidence Samples Matter for Domain Adaptation [47.552605279925736]
Domain adaptation (DA) aims to transfer knowledge from a label-rich source domain to a related but label-scarce target domain.
We propose a novel contrastive learning method by processing low-confidence samples.
We evaluate the proposed method in both unsupervised and semi-supervised DA settings.
arXiv Detail & Related papers (2022-02-06T15:45:45Z) - CLDA: Contrastive Learning for Semi-Supervised Domain Adaptation [1.2691047660244335]
Unsupervised Domain Adaptation (UDA) aims to align the labeled source distribution with the unlabeled target distribution to obtain domain invariant predictive models.
We propose a Contrastive Learning framework for semi-supervised Domain Adaptation (CLDA) that attempts to bridge the intra-domain gap.
CLDA achieves state-of-the-art results on all evaluated datasets.
arXiv Detail & Related papers (2021-06-30T20:23:19Z) - Cross-Domain Adaptive Clustering for Semi-Supervised Domain Adaptation [85.6961770631173]
In semi-supervised domain adaptation, a few labeled samples per class in the target domain guide features of the remaining target samples to aggregate around them.
We propose a novel approach called Cross-domain Adaptive Clustering to address this problem.
arXiv Detail & Related papers (2021-04-19T16:07:32Z) - OVANet: One-vs-All Network for Universal Domain Adaptation [78.86047802107025]
Existing methods manually set a threshold to reject unknown samples based on validation or a pre-defined ratio of unknown samples.
We propose a method to learn the threshold using source samples and to adapt it to the target domain.
Our idea is that the minimum inter-class distance in the source domain should be a good threshold for deciding between known and unknown in the target domain.
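That decision rule can be pictured with a small sketch: compute source class centroids, take the minimum pairwise distance between them, and reject a target sample as unknown if it is farther than that from every centroid. This illustrates only the stated idea, not OVANet's actual one-vs-all training procedure.

```python
import torch

def unknown_threshold(feats_s, labels_s, num_classes):
    # Minimum inter-class centroid distance in the source domain,
    # used as the known-vs-unknown rejection threshold.
    centroids = torch.stack([
        feats_s[labels_s == c].mean(dim=0) for c in range(num_classes)
    ])
    dists = torch.cdist(centroids, centroids)
    dists.fill_diagonal_(float("inf"))         # ignore self-distances
    return dists.min(), centroids

def is_unknown(feat_t, centroids, threshold):
    # A target feature is 'unknown' if no source centroid is close enough.
    return torch.cdist(feat_t.unsqueeze(0), centroids).min() > threshold
```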
arXiv Detail & Related papers (2021-04-07T18:36:31Z) - Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance-affinity-based criterion for source-to-target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
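As background for the multi-sample contrastive loss mentioned above, a generic multi-positive contrastive loss can be sketched as below; ILA-DA's affinity-based construction of the positive pairs across domains is the paper's actual contribution and is abstracted into an input mask here.

```python
import torch

def multi_positive_contrastive(feats, pos_mask, tau=0.1):
    # feats:    (n, d) L2-normalized features from source and target.
    # pos_mask: (n, n) matrix, nonzero where pair (i, j) is deemed
    #           similar; its diagonal must be zero. In ILA-DA this would
    #           come from learned cross-domain affinities (abstracted here).
    sims = feats @ feats.t() / tau
    sims.fill_diagonal_(float("-inf"))         # exclude self-pairs
    log_prob = sims - torch.logsumexp(sims, dim=1, keepdim=True)
    pos = pos_mask.float()
    pos_count = pos.sum(dim=1).clamp(min=1)    # avoid division by zero
    # Average negative log-probability over each anchor's positives.
    return -(log_prob * pos).sum(dim=1).div(pos_count).mean()
```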
arXiv Detail & Related papers (2021-04-03T01:33:14Z) - Your Classifier can Secretly Suffice Multi-Source Domain Adaptation [72.47706604261992]
Multi-Source Domain Adaptation (MSDA) deals with the transfer of task knowledge from multiple labeled source domains to an unlabeled target domain.
We present a different perspective on MSDA, wherein deep models are observed to implicitly align the domains under label supervision.
arXiv Detail & Related papers (2021-03-20T12:44:13Z) - Learning Target Domain Specific Classifier for Partial Domain Adaptation [85.71584004185031]
Unsupervised domain adaptation (UDA) aims at reducing the distribution discrepancy when transferring knowledge from a labeled source domain to an unlabeled target domain.
This paper focuses on a more realistic UDA scenario, where the target label space is subsumed by the source label space.
arXiv Detail & Related papers (2020-08-25T02:28:24Z) - MiniMax Entropy Network: Learning Category-Invariant Features for Domain Adaptation [29.43532067090422]
We propose an easy-to-implement method dubbed MiniMax Entropy Networks (MMEN) based on adversarial learning.
Unlike most existing approaches which employ a generator to deal with domain difference, MMEN focuses on learning the categorical information from unlabeled target samples.
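The minimax entropy idea can be sketched in a few lines: with a gradient-reversal layer between the feature extractor and the classifier, one module is trained to minimize the prediction entropy on unlabeled target samples while the other effectively maximizes it. The sketch below shows the entropy term and a standard gradient-reversal op under those assumptions; it is not MMEN's exact objective.

```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass, negated gradient in the backward pass.
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def prediction_entropy(logits):
    # Mean Shannon entropy of the classifier's predictions.
    log_p = F.log_softmax(logits, dim=1)
    return -(log_p.exp() * log_p).sum(dim=1).mean()

def minimax_entropy_term(features_t, classifier, lam=1.0):
    # A single backward pass through the reversal layer makes the two
    # modules play the minimax game on target entropy. Which side plays
    # which role is a modeling choice; MMEN's exact setup may differ.
    logits_t = classifier(GradReverse.apply(features_t, lam))
    return prediction_entropy(logits_t)
```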
arXiv Detail & Related papers (2019-04-21T13:39:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.