Implicit Class-Conditioned Domain Alignment for Unsupervised Domain
Adaptation
- URL: http://arxiv.org/abs/2006.04996v1
- Date: Tue, 9 Jun 2020 00:20:21 GMT
- Title: Implicit Class-Conditioned Domain Alignment for Unsupervised Domain
Adaptation
- Authors: Xiang Jiang, Qicheng Lao, Stan Matwin, Mohammad Havaei
- Abstract summary: Current methods for class-conditioned domain alignment aim to explicitly minimize a loss function based on pseudo-label estimations of the target domain.
We propose a method that removes the need for explicit optimization of model parameters from pseudo-labels directly.
We present a sampling-based implicit alignment approach, where the sample selection procedure is implicitly guided by the pseudo-labels.
- Score: 18.90240379173491
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an approach for unsupervised domain adaptation---with a strong
focus on practical considerations of within-domain class imbalance and
between-domain class distribution shift---from a class-conditioned domain
alignment perspective. Current methods for class-conditioned domain alignment
aim to explicitly minimize a loss function based on pseudo-label estimations of
the target domain. However, these methods suffer from pseudo-label bias in the
form of error accumulation. We propose a method that removes the need for
explicit optimization of model parameters from pseudo-labels directly. Instead,
we present a sampling-based implicit alignment approach, where the sample
selection procedure is implicitly guided by the pseudo-labels. Theoretical
analysis reveals the existence of a domain-discriminator shortcut in misaligned
classes, which is addressed by the proposed implicit alignment approach to
facilitate domain-adversarial learning. Empirical results and ablation studies
confirm the effectiveness of the proposed approach, especially in the presence
of within-domain class imbalance and between-domain class distribution shift.
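The core idea of sampling-based implicit alignment can be illustrated with a minimal sketch. The function below is a hypothetical interpretation, not the paper's actual implementation: it forms class-aligned mini-batches by drawing source samples by ground-truth label and target samples by pseudo-label for the same class subset, so pseudo-labels steer sample selection only and never enter a loss term directly.

```python
import random
from collections import defaultdict

def implicit_aligned_batch(source_labels, target_pseudo_labels,
                           n_classes_per_batch=3, n_per_class=2, seed=0):
    """Sample a class-aligned mini-batch (illustrative sketch).

    Source indices are grouped by ground-truth label, target indices by
    pseudo-label; a shared subset of classes is drawn, and the same
    classes are represented in both halves of the batch. The pseudo-labels
    guide which samples are selected, rather than being optimized against.
    """
    rng = random.Random(seed)
    src_by_class = defaultdict(list)
    tgt_by_class = defaultdict(list)
    for i, y in enumerate(source_labels):
        src_by_class[y].append(i)
    for i, y in enumerate(target_pseudo_labels):
        tgt_by_class[y].append(i)
    # Only classes present (by pseudo-label) in both domains can be aligned.
    shared = sorted(set(src_by_class) & set(tgt_by_class))
    classes = rng.sample(shared, min(n_classes_per_batch, len(shared)))
    src_idx, tgt_idx = [], []
    for c in classes:
        src_idx += rng.sample(src_by_class[c],
                              min(n_per_class, len(src_by_class[c])))
        tgt_idx += rng.sample(tgt_by_class[c],
                              min(n_per_class, len(tgt_by_class[c])))
    return classes, src_idx, tgt_idx
```

Because each class contributes the same number of samples per batch regardless of its overall frequency, this style of sampler is naturally robust to within-domain class imbalance, which matches the paper's stated motivation.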
Related papers
- Bi-discriminator Domain Adversarial Neural Networks with Class-Level Gradient Alignment [87.8301166955305]
We propose a novel bi-discriminator domain adversarial neural network with class-level gradient alignment.
BACG resorts to gradient signals and second-order probability estimation for better alignment of domain distributions.
In addition, inspired by contrastive learning, we develop a memory bank-based variant, i.e. Fast-BACG, which can greatly shorten the training process.
arXiv Detail & Related papers (2023-10-21T09:53:17Z)
- centroIDA: Cross-Domain Class Discrepancy Minimization Based on Accumulative Class-Centroids for Imbalanced Domain Adaptation [17.97306640457707]
We propose a cross-domain class discrepancy minimization method based on accumulative class-centroids for IDA (centroIDA).
A series of experiments shows that our method outperforms other SOTA methods on the IDA problem, especially as the degree of label shift increases.
arXiv Detail & Related papers (2023-08-21T10:35:32Z)
- Conditional Support Alignment for Domain Adaptation with Label Shift [8.819673391477034]
Unsupervised domain adaptation (UDA) refers to a framework in which a model is trained on labeled samples from the source domain and unlabeled samples from the target domain.
We propose a novel conditional adversarial support alignment (CASA) method that minimizes the conditional symmetric support divergence between the feature representation distributions of the source and target domains.
arXiv Detail & Related papers (2023-05-29T05:20:18Z)
- Prototypical Contrast Adaptation for Domain Adaptive Semantic Segmentation [52.63046674453461]
Prototypical Contrast Adaptation (ProCA) is a contrastive learning method for unsupervised domain adaptive semantic segmentation.
ProCA incorporates inter-class information into class-wise prototypes, and adopts the class-centered distribution alignment for adaptation.
arXiv Detail & Related papers (2022-07-14T04:54:26Z)
- Deep Least Squares Alignment for Unsupervised Domain Adaptation [6.942003070153651]
Unsupervised domain adaptation leverages rich information from a labeled source domain to model an unlabeled target domain.
We propose deep least squares alignment (DLSA) to estimate the distribution of the two domains in a latent space by parameterizing a linear model.
Extensive experiments demonstrate that the proposed DLSA model is effective in aligning domain distributions and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-11-03T13:23:06Z)
- Cross-domain error minimization for unsupervised domain adaptation [2.9766397696234996]
Unsupervised domain adaptation aims to transfer knowledge from a labeled source domain to an unlabeled target domain.
Previous methods focus on learning domain-invariant features to decrease the discrepancy between the feature distributions and minimize the source error.
We propose a curriculum learning based strategy to select the target samples with more accurate pseudo-labels during training.
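A curriculum strategy of this kind can be sketched with a simple confidence threshold that relaxes over training, so that only the most reliable pseudo-labels are used early on. The function and the linear schedule below are illustrative assumptions, not the paper's exact procedure.

```python
def select_confident_targets(probs, epoch, max_epoch, t_start=0.9, t_end=0.5):
    """Return (index, pseudo_label) pairs for target samples whose top-class
    probability exceeds a threshold that relaxes linearly from t_start to
    t_end as training progresses, so easier samples are admitted first.

    `probs` is a list of per-sample class-probability lists (e.g. softmax
    outputs); names and schedule are hypothetical.
    """
    frac = min(epoch / max(max_epoch, 1), 1.0)
    threshold = t_start + (t_end - t_start) * frac
    selected = []
    for i, p in enumerate(probs):
        conf = max(p)
        if conf >= threshold:
            selected.append((i, p.index(conf)))
    return selected
```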
arXiv Detail & Related papers (2021-06-29T02:00:29Z)
- Your Classifier can Secretly Suffice Multi-Source Domain Adaptation [72.47706604261992]
Multi-Source Domain Adaptation (MSDA) deals with the transfer of task knowledge from multiple labeled source domains to an unlabeled target domain.
We present a different perspective to MSDA wherein deep models are observed to implicitly align the domains under label supervision.
arXiv Detail & Related papers (2021-03-20T12:44:13Z)
- Class Distribution Alignment for Adversarial Domain Adaptation [32.95056492475652]
Conditional ADversarial Image Translation (CADIT) is proposed to explicitly align the class distributions of samples between the two domains.
It integrates a discriminative structure-preserving loss and a joint adversarial generation loss.
Our approach achieves superior classification accuracy in the target domain compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-04-20T15:58:11Z)
- Cross-domain Detection via Graph-induced Prototype Alignment [114.8952035552862]
We propose a Graph-induced Prototype Alignment (GPA) framework to seek for category-level domain alignment.
In addition, in order to alleviate the negative effect of class-imbalance on domain adaptation, we design a Class-reweighted Contrastive Loss.
Our approach outperforms existing methods by a remarkable margin.
arXiv Detail & Related papers (2020-03-28T17:46:55Z)
- A Balanced and Uncertainty-aware Approach for Partial Domain Adaptation [142.31610972922067]
This work addresses the unsupervised domain adaptation problem, especially in the case of class labels in the target domain being only a subset of those in the source domain.
We build on domain adversarial learning and propose a novel domain adaptation method, BA$^3$US, with two new techniques termed Balanced Adversarial Alignment (BAA) and Adaptive Uncertainty Suppression (AUS).
Experimental results on multiple benchmarks demonstrate that BA$^3$US surpasses state-of-the-art methods on partial domain adaptation tasks.
arXiv Detail & Related papers (2020-03-05T11:37:06Z)
- Bi-Directional Generation for Unsupervised Domain Adaptation [61.73001005378002]
Unsupervised domain adaptation leverages well-established source-domain information to model the unlabeled target domain.
Conventional methods that forcefully reduce the domain discrepancy in the latent space can destroy the intrinsic structure of the data.
We propose a Bi-Directional Generation domain adaptation model with consistent classifiers interpolating two intermediate domains to bridge source and target domains.
arXiv Detail & Related papers (2020-02-12T09:45:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.