Attract, Perturb, and Explore: Learning a Feature Alignment Network for
Semi-supervised Domain Adaptation
- URL: http://arxiv.org/abs/2007.09375v1
- Date: Sat, 18 Jul 2020 09:26:25 GMT
- Authors: Taekyung Kim and Changick Kim
- Abstract summary: We study the novel setting of the semi-supervised domain adaptation (SSDA) problem.
Our framework consists of three schemes, i.e., attraction, perturbation, and exploration.
Our method achieves state-of-the-art performance on all datasets.
- Score: 34.81203184926791
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although unsupervised domain adaptation methods have been widely
adopted across several computer vision tasks, it is more desirable to exploit a
few labeled samples from the new domains encountered in real applications. The novel
setting of the semi-supervised domain adaptation (SSDA) problem shares the
challenges with the domain adaptation problem and the semi-supervised learning
problem. However, a recent study shows that conventional domain adaptation and
semi-supervised learning methods often result in less effective or negative
transfer in the SSDA problem. To interpret this observation and address the
SSDA problem, in this paper we raise the intra-domain discrepancy issue within
the target domain, which has not been discussed before. Then, we
demonstrate that addressing the intra-domain discrepancy leads to the ultimate
goal of the SSDA problem. We propose an SSDA framework that aims to align
features via alleviation of the intra-domain discrepancy. Our framework mainly
consists of three schemes, i.e., attraction, perturbation, and exploration.
First, the attraction scheme globally minimizes the intra-domain discrepancy
within the target domain. Second, we demonstrate the incompatibility of the
conventional adversarial perturbation methods with SSDA. Then, we present a
domain adaptive adversarial perturbation scheme, which perturbs the given
target samples in a way that reduces the intra-domain discrepancy. Finally, the
exploration scheme locally aligns features in a class-wise manner, complementing
the attraction scheme, by selectively aligning unlabeled target features in a way
that complements the perturbation scheme. We conduct extensive experiments on
domain adaptation benchmark datasets such as DomainNet, Office-Home, and
Office. Our method achieves state-of-the-art performance on all datasets.
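The attraction scheme above is described only at a high level in this excerpt. As a hedged illustration, the intra-domain discrepancy between the labeled and unlabeled target subsets could be measured with a kernel two-sample statistic such as Maximum Mean Discrepancy (MMD). The function names, the RBF kernel choice, and the NumPy sketch below are illustrative assumptions, not the authors' actual loss:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between two feature batches.
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy.

    X: labeled-target features, Y: unlabeled-target features.
    A training loop in the spirit of the attraction scheme would
    minimize this value to pull the two target subsets together.
    """
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())
```

Feature batches drawn from the same distribution yield a value near zero, while a shifted unlabeled subset yields a clearly larger one, which is what makes the statistic usable as an alignment objective.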
Related papers
- Gradually Vanishing Gap in Prototypical Network for Unsupervised Domain Adaptation [32.58201185195226]
We propose an efficient UDA framework named Gradually Vanishing Gap in Prototypical Network (GVG-PN).
Our model achieves transfer learning from both global and local perspectives.
Experiments on several UDA benchmarks validate that the proposed GVG-PN clearly outperforms state-of-the-art models.
arXiv Detail & Related papers (2024-05-28T03:03:32Z)
- Overcoming Negative Transfer by Online Selection: Distant Domain Adaptation for Fault Diagnosis [42.85741244467877]
The term 'distant domain adaptation' describes the challenge of adapting from a labeled source domain to a significantly disparate unlabeled target domain.
This problem exhibits the risk of negative transfer, where extraneous knowledge from the source domain adversely affects the target domain performance.
In response to this challenge, we propose a novel Online Selective Adversarial Alignment (OSAA) approach.
arXiv Detail & Related papers (2024-05-25T07:17:47Z)
- IIDM: Inter and Intra-domain Mixing for Semi-supervised Domain Adaptation in Semantic Segmentation [46.6002506426648]
Unsupervised domain adaptation (UDA) is the dominant approach to this problem.
We propose semi-supervised domain adaptation (SSDA) to overcome this limitation.
We propose a novel framework that incorporates both Inter and Intra Domain Mixing (IIDM), where inter-domain mixing mitigates the source-target domain gap and intra-domain mixing enriches the available target domain information.
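The blurb above describes inter- and intra-domain mixing only in words. A common way to realize such mixing is a mixup-style convex combination of two inputs (and, for segmentation, their label maps); the generic sketch below illustrates that idea under this assumption and is not IIDM's exact formulation:

```python
import numpy as np

def mixup(x_a, x_b, lam):
    # Convex combination of two images (or feature maps).
    # lam in [0, 1]; lam = 1 returns x_a unchanged.
    return lam * x_a + (1.0 - lam) * x_b

def sample_lam(alpha=0.5, rng=None):
    # Mixing coefficients are conventionally drawn from Beta(alpha, alpha).
    rng = np.random.default_rng() if rng is None else rng
    return rng.beta(alpha, alpha)
```

Inter-domain mixing would pass a source and a target sample to `mixup`; intra-domain mixing would pass two target samples, enriching the available target information.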
arXiv Detail & Related papers (2023-08-30T08:44:21Z)
- Cross-Domain Policy Adaptation via Value-Guided Data Filtering [57.62692881606099]
Generalizing policies across different domains with dynamics mismatch poses a significant challenge in reinforcement learning.
We present the Value-Guided Data Filtering (VGDF) algorithm, which selectively shares transitions from the source domain based on the proximity of paired value targets.
arXiv Detail & Related papers (2023-05-28T04:08:40Z)
- Joint Attention-Driven Domain Fusion and Noise-Tolerant Learning for Multi-Source Domain Adaptation [2.734665397040629]
Multi-source Unsupervised Domain Adaptation transfers knowledge from multiple source domains with labeled data to an unlabeled target domain.
The distribution discrepancy between different domains and the noisy pseudo-labels in the target domain both lead to performance bottlenecks.
We propose an approach that integrates Attention-driven Domain fusion and Noise-Tolerant learning (ADNT) to address the two issues mentioned above.
arXiv Detail & Related papers (2022-08-05T01:08:41Z)
- Decompose to Adapt: Cross-domain Object Detection via Feature Disentanglement [79.2994130944482]
We design a Domain Disentanglement Faster-RCNN (DDF) to eliminate the source-specific information in the features for detection task learning.
Our DDF method facilitates the feature disentanglement at the global and local stages, with a Global Triplet Disentanglement (GTD) module and an Instance Similarity Disentanglement (ISD) module.
By outperforming state-of-the-art methods on four benchmark UDA object detection tasks, our DDF method demonstrates its effectiveness and wide applicability.
arXiv Detail & Related papers (2022-01-06T05:43:01Z)
- Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z)
- CLDA: Contrastive Learning for Semi-Supervised Domain Adaptation [1.2691047660244335]
Unsupervised Domain Adaptation (UDA) aims to align the labeled source distribution with the unlabeled target distribution to obtain domain invariant predictive models.
We propose Contrastive Learning framework for semi-supervised Domain Adaptation (CLDA) that attempts to bridge the intra-domain gap.
CLDA achieves state-of-the-art results on the evaluated datasets.
arXiv Detail & Related papers (2021-06-30T20:23:19Z)
- Domain Adaptation with Incomplete Target Domains [61.68950959231601]
We propose an Incomplete Data Imputation based Adversarial Network (IDIAN) model to address this new domain adaptation challenge.
In the proposed model, we design a data imputation module to fill the missing feature values based on the partial observations in the target domain.
We conduct experiments on both cross-domain benchmark tasks and a real world adaptation task with imperfect target domains.
arXiv Detail & Related papers (2020-12-03T00:07:40Z)
- Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with a realistic and challenging problem in which the source domain label space subsumes the target domain label space.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A^2KT) to align the relevant categories across two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z)
This list is automatically generated from the titles and abstracts of the papers listed on this site.
This site does not guarantee the quality of the information presented (including all information) and is not responsible for any consequences of its use.