Correlated Adversarial Joint Discrepancy Adaptation Network
- URL: http://arxiv.org/abs/2105.08808v1
- Date: Tue, 18 May 2021 19:52:08 GMT
- Title: Correlated Adversarial Joint Discrepancy Adaptation Network
- Authors: Youshan Zhang and Brian D. Davison
- Abstract summary: We propose a novel approach called correlated adversarial joint discrepancy adaptation network (CAJNet)
By training the joint features, we can align the marginal and conditional distributions between the two domains.
In addition, we introduce a probability-based top-$\mathcal{K}$ correlated label ($\mathcal{K}$-label), which is a powerful indicator of the target domain.
- Score: 6.942003070153651
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Domain adaptation aims to mitigate the domain shift problem when transferring
knowledge from one domain into another similar but different domain. However,
most existing works rely on extracting marginal features without considering
class labels. Moreover, some methods present their models as so-called unsupervised
domain adaptation while tuning parameters using target domain labels. To
address these issues, we propose a novel approach called correlated adversarial
joint discrepancy adaptation network (CAJNet), which minimizes the joint
discrepancy of the two domains and achieves competitive performance while tuning
parameters using the correlated label. By training the joint features, we can
align the marginal and conditional distributions between the two domains. In
addition, we introduce a probability-based top-$\mathcal{K}$ correlated label
($\mathcal{K}$-label), which is a powerful indicator of the target domain and an
effective metric for tuning parameters to aid predictions. Extensive experiments
on benchmark datasets demonstrate significant improvements in classification
accuracy over the state of the art.
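As a rough illustration of the ideas in the abstract (not the authors' released code), the sketch below combines a marginal MMD term with class-conditional terms gated by probability-based top-$\mathcal{K}$ target labels; the single-bandwidth RBF kernel, the helper names, and the choice of K are assumptions.

```python
# Illustrative sketch: joint discrepancy = marginal MMD between full feature
# batches + conditional MMD computed per class, where target class membership
# comes from probability-based top-K pseudo-labels.
import torch
import torch.nn.functional as F


def rbf_mmd(x, y, sigma=1.0):
    """Squared MMD between two feature batches with a single RBF kernel."""
    def k(a, b):
        d = torch.cdist(a, b) ** 2
        return torch.exp(-d / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()


def topk_labels(target_logits, k=2):
    """Keep the K most probable classes per target sample (assumed K-label)."""
    probs = F.softmax(target_logits, dim=1)
    return probs.topk(k, dim=1).indices              # shape: (n_target, k)


def joint_discrepancy(src_feat, src_y, tgt_feat, tgt_logits, num_classes, k=2):
    """Marginal MMD plus class-conditional MMD gated by top-K target labels."""
    loss = rbf_mmd(src_feat, tgt_feat)               # marginal term
    klab = topk_labels(tgt_logits, k)
    for c in range(num_classes):
        s_mask = src_y == c
        t_mask = (klab == c).any(dim=1)              # target samples whose K-label contains c
        if s_mask.sum() > 1 and t_mask.sum() > 1:    # need >=2 samples per side for the kernel
            loss = loss + rbf_mmd(src_feat[s_mask], tgt_feat[t_mask])
    return loss
```

The conditional terms only fire for classes with at least two samples on each side, which keeps the kernel estimates meaningful on small batches.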
Related papers
- Inter-Domain Mixup for Semi-Supervised Domain Adaptation [108.40945109477886]
Semi-supervised domain adaptation (SSDA) aims to bridge source and target domain distributions, with a small number of target labels available.
Existing SSDA work fails to make full use of label information from both source and target domains for feature alignment across domains.
This paper presents a novel SSDA approach, Inter-domain Mixup with Neighborhood Expansion (IDMNE), to tackle this issue.
arXiv Detail & Related papers (2024-01-21T10:20:46Z)
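For the IDMNE entry above, here is a minimal sketch of generic inter-domain mixup, assuming pseudo-labels are available for the target batch; the Beta parameter and the soft cross-entropy helper are illustrative, not the paper's exact formulation.

```python
# Illustrative sketch (not IDMNE itself): blend a labeled source batch with a
# pseudo-labeled target batch and train on the mixed inputs and soft labels.
import torch
import torch.nn.functional as F


def inter_domain_mixup(src_x, src_y, tgt_x, tgt_pseudo_y, num_classes, alpha=0.2):
    """Mix source and target samples and their one-hot labels with a Beta-drawn ratio."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    n = min(len(src_x), len(tgt_x))
    src_x, tgt_x = src_x[:n], tgt_x[:n]
    y_s = F.one_hot(src_y[:n], num_classes).float()
    y_t = F.one_hot(tgt_pseudo_y[:n], num_classes).float()
    x_mix = lam * src_x + (1 - lam) * tgt_x          # cross-domain input interpolation
    y_mix = lam * y_s + (1 - lam) * y_t              # matching soft-label interpolation
    return x_mix, y_mix


def soft_ce(logits, soft_targets):
    """Cross-entropy against the mixed soft labels."""
    return -(soft_targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```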
- Adaptive Betweenness Clustering for Semi-Supervised Domain Adaptation [108.40945109477886]
We propose a novel SSDA approach named Graph-based Adaptive Betweenness Clustering (G-ABC) for achieving categorical domain alignment.
Our method outperforms previous state-of-the-art SSDA approaches, demonstrating the superiority of the proposed G-ABC algorithm.
arXiv Detail & Related papers (2024-01-21T09:57:56Z)
- Domain-Invariant Feature Alignment Using Variational Inference For Partial Domain Adaptation [6.04077629908308]
Experimental findings on numerous cross-domain classification tasks demonstrate that the proposed technique delivers accuracy superior or comparable to existing methods.
arXiv Detail & Related papers (2022-12-03T10:39:14Z)
- Joint Attention-Driven Domain Fusion and Noise-Tolerant Learning for Multi-Source Domain Adaptation [2.734665397040629]
Multi-source Unsupervised Domain Adaptation transfers knowledge from multiple source domains with labeled data to an unlabeled target domain.
The distribution discrepancy between different domains and the noisy pseudo-labels in the target domain both lead to performance bottlenecks.
We propose an approach that integrates Attention-driven Domain fusion and Noise-Tolerant learning (ADNT) to address the two issues mentioned above.
arXiv Detail & Related papers (2022-08-05T01:08:41Z)
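For the ADNT entry above, the following is a minimal stand-in for a noise-tolerant pseudo-label objective: only confident target predictions contribute to the loss, one common way to damp noisy pseudo-labels; the threshold value is an assumption.

```python
# Illustrative sketch (a generic stand-in for a noise-tolerant term): keep only
# target pseudo-labels whose softmax confidence clears a threshold.
import torch
import torch.nn.functional as F


def confident_pseudo_label_loss(tgt_logits, threshold=0.9):
    """Cross-entropy on high-confidence target pseudo-labels only."""
    probs = F.softmax(tgt_logits.detach(), dim=1)
    conf, pseudo_y = probs.max(dim=1)
    mask = conf >= threshold                         # drop low-confidence (likely noisy) labels
    if mask.sum() == 0:
        return tgt_logits.new_zeros(())              # nothing confident enough in this batch
    return F.cross_entropy(tgt_logits[mask], pseudo_y[mask])
```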
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance-affinity-based criterion for source-to-target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
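For the ILA-DA entry above, here is a hedged sketch of a multi-sample contrastive loss over cross-domain pairs built from label agreement; the InfoNCE-style form and the temperature are assumptions rather than the paper's exact criterion.

```python
# Illustrative sketch (not ILA-DA's exact criterion): pull together source and
# target features that share a (pseudo-)label and push apart those that do not.
import torch
import torch.nn.functional as F


def cross_domain_contrastive(src_feat, src_y, tgt_feat, tgt_pseudo_y, tau=0.1):
    """InfoNCE-style loss over source-target pairs defined by label agreement."""
    s = F.normalize(src_feat, dim=1)
    t = F.normalize(tgt_feat, dim=1)
    sim = s @ t.T / tau                                  # cosine similarities, temperature-scaled
    pos = (src_y.unsqueeze(1) == tgt_pseudo_y.unsqueeze(0)).float()
    log_p = F.log_softmax(sim, dim=1)                    # softmax over all target samples
    per_src = (pos * log_p).sum(dim=1) / pos.sum(dim=1).clamp(min=1)
    valid = pos.sum(dim=1) > 0                           # source samples with >=1 positive pair
    return -per_src[valid].mean() if valid.any() else sim.new_zeros(())
```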
- Cross-Domain Grouping and Alignment for Domain Adaptive Semantic Segmentation [74.3349233035632]
Existing techniques for adapting semantic segmentation networks across source and target domains within deep convolutional neural networks (CNNs) do not consider inter-class variation within the target domain itself or the estimated category.
We introduce a learnable clustering module, and a novel domain adaptation framework called cross-domain grouping and alignment.
Our method consistently boosts adaptation performance in semantic segmentation, outperforming the state of the art on various domain adaptation settings.
arXiv Detail & Related papers (2020-12-15T11:36:21Z)
- Discriminative Cross-Domain Feature Learning for Partial Domain Adaptation [70.45936509510528]
Partial domain adaptation aims to adapt knowledge from a larger and more diverse source domain to a smaller target domain with fewer classes.
Recent practice on domain adaptation manages to extract effective features by incorporating the pseudo labels for the target domain.
It is essential to align target data with only a small set of source data.
arXiv Detail & Related papers (2020-08-26T03:18:53Z)
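For the partial domain adaptation entry above, here is a minimal sketch of one common way to "align target data with only a small set of source data": estimate class weights from target pseudo-label probabilities and down-weight source classes the target rarely predicts. This is a generic partial-DA recipe, not necessarily this paper's method.

```python
# Illustrative sketch: soft target class frequencies become per-class weights
# that scale the source classification loss, suppressing outlier source classes.
import torch
import torch.nn.functional as F


def source_class_weights(tgt_logits, num_classes):
    """Average target class probabilities, normalised to per-class weights in [0, 1]."""
    probs = F.softmax(tgt_logits, dim=1).mean(dim=0)     # soft class frequencies on the target
    return probs / probs.max()                           # 1 for the dominant class


def weighted_source_loss(src_logits, src_y, class_w):
    """Per-sample cross-entropy scaled by the weight of each sample's class."""
    ce = F.cross_entropy(src_logits, src_y, reduction="none")
    return (class_w[src_y] * ce).mean()
```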
- Inductive Unsupervised Domain Adaptation for Few-Shot Classification via Clustering [16.39667909141402]
Few-shot classification tends to struggle when it needs to adapt to diverse domains.
We introduce a framework, DaFeC, to improve Domain adaptation performance for Few-shot classification via Clustering.
Our approach outperforms previous work with absolute gains in classification accuracy of 4.95%, 9.55%, 3.99% and 11.62%.
arXiv Detail & Related papers (2020-06-23T08:17:48Z)
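For the DaFeC entry above, here is a minimal sketch of clustering-derived pseudo-labels, assuming features come from a frozen encoder; the use of k-means and the downstream classifier step are illustrative assumptions, not the paper's full pipeline.

```python
# Illustrative sketch: cluster unlabeled target-domain features and use the
# cluster assignments as pseudo-labels for subsequent classifier training.
import numpy as np
from sklearn.cluster import KMeans


def cluster_pseudo_labels(target_features, num_classes, seed=0):
    """K-means over target features; cluster ids serve as pseudo-labels."""
    km = KMeans(n_clusters=num_classes, n_init=10, random_state=seed)
    return km.fit_predict(target_features)               # shape: (n_target,)


# Usage idea: extract features with a frozen encoder, then fit a lightweight
# classifier on (features, pseudo_labels) before few-shot evaluation.
features = np.random.randn(200, 64).astype(np.float32)   # placeholder features
pseudo = cluster_pseudo_labels(features, num_classes=5)
```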
- Bi-Directional Generation for Unsupervised Domain Adaptation [61.73001005378002]
Unsupervised domain adaptation handles an unlabeled target domain by relying on well-established source domain information.
Conventional methods that forcefully reduce the domain discrepancy in the latent space can destroy the intrinsic data structure.
We propose a Bi-Directional Generation domain adaptation model with consistent classifiers interpolating two intermediate domains to bridge source and target domains.
arXiv Detail & Related papers (2020-02-12T09:45:39Z)