Learning Task-oriented Disentangled Representations for Unsupervised
Domain Adaptation
- URL: http://arxiv.org/abs/2007.13264v1
- Date: Mon, 27 Jul 2020 01:21:18 GMT
- Title: Learning Task-oriented Disentangled Representations for Unsupervised
Domain Adaptation
- Authors: Pingyang Dai, Peixian Chen, Qiong Wu, Xiaopeng Hong, Qixiang Ye, Qi
Tian, Rongrong Ji
- Abstract summary: Unsupervised domain adaptation (UDA) aims to address the domain-shift problem between a labeled source domain and an unlabeled target domain.
We propose a dynamic task-oriented disentangling network (DTDN) to learn disentangled representations in an end-to-end fashion for UDA.
- Score: 165.61511788237485
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation (UDA) aims to address the domain-shift problem
between a labeled source domain and an unlabeled target domain. Many efforts
have been made to eliminate the mismatch between the distributions of training
and testing data by learning domain-invariant
representations. However, the learned representations are usually not
task-oriented, i.e., being class-discriminative and domain-transferable
simultaneously. This drawback limits the flexibility of UDA in complicated
open-set tasks where no labels are shared between domains. In this paper, we
break the concept of task-orientation into task-relevance and task-irrelevance,
and propose a dynamic task-oriented disentangling network (DTDN) to learn
disentangled representations in an end-to-end fashion for UDA. The dynamic
disentangling network effectively disentangles data representations into two
components: the task-relevant ones embedding critical information associated
with the task across domains, and the task-irrelevant ones with the remaining
non-transferable or disturbing information. These two components are
regularized by a group of task-specific objective functions across domains.
Such regularization explicitly encourages disentangling and avoids the use of
generative models or decoders. Experiments in complicated, open-set scenarios
(retrieval tasks) and empirical benchmarks (classification tasks) demonstrate
that the proposed method captures rich disentangled information and achieves
superior performance.
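The two-branch idea described in the abstract is straightforward to prototype. Below is a minimal PyTorch sketch, not the authors' actual DTDN: the names (DisentanglingNet, disentangling_loss), the feature dimensions, the class count (31, as in Office-31), the loss weights, and the particular regularizers (an entropy-style confusion penalty on the task-irrelevant branch and a mean-feature alignment term on the task-relevant one) are all illustrative assumptions. The paper's actual group of task-specific objectives differs, but the pattern is the same: supervise the task-relevant branch with source labels and regularize the task-irrelevant branch so it cannot carry class evidence, with no decoder or generative model involved.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DisentanglingNet(nn.Module):
    """Illustrative two-branch disentangler (hypothetical, not the exact DTDN).

    A shared backbone maps pre-extracted 2048-d features (e.g. ResNet pool5)
    to a latent code, which two heads split into a task-relevant part and a
    task-irrelevant part.
    """

    def __init__(self, in_dim=2048, feat_dim=256, num_classes=31):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.relevant_head = nn.Linear(feat_dim, feat_dim)    # task-relevant branch
        self.irrelevant_head = nn.Linear(feat_dim, feat_dim)  # task-irrelevant branch
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        z = self.backbone(x)
        return self.relevant_head(z), self.irrelevant_head(z)


def disentangling_loss(model, x_src, y_src, x_tgt):
    """Hypothetical stand-ins for the paper's task-specific objectives."""
    f_rel_s, f_irr_s = model(x_src)
    f_rel_t, _ = model(x_tgt)

    # (1) Task-relevance: supervised classification on labeled source data.
    cls_loss = F.cross_entropy(model.classifier(f_rel_s), y_src)

    # (2) Task-irrelevance: push the irrelevant branch toward a uniform class
    # posterior by minimizing negative entropy. The classifier weights are
    # detached so gradients reach only the irrelevant branch and cannot
    # degrade the classifier itself.
    logits_irr = F.linear(f_irr_s,
                          model.classifier.weight.detach(),
                          model.classifier.bias.detach())
    log_p = F.log_softmax(logits_irr, dim=1)
    confusion_loss = (log_p.exp() * log_p).sum(dim=1).mean()

    # (3) Cross-domain term on the task-relevant branch: a simple mean-feature
    # alignment as a stand-in for the paper's transfer objective.
    align_loss = (f_rel_s.mean(0) - f_rel_t.mean(0)).pow(2).sum()

    return cls_loss + 0.1 * confusion_loss + 0.1 * align_loss


if __name__ == "__main__":
    model = DisentanglingNet()
    x_src, x_tgt = torch.randn(8, 2048), torch.randn(8, 2048)
    y_src = torch.randint(0, 31, (8,))
    print(disentangling_loss(model, x_src, y_src, x_tgt).item())
```

In a training loop this loss would be summed over mini-batches drawn jointly from both domains. Note that no reconstruction or decoder term appears, matching the paper's claim that the task-specific regularization alone suffices to encourage disentangling.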
Related papers
- Zero-shot domain adaptation based on dual-level mix and contrast [8.225819874406238]
This paper proposes a new ZSDA method to learn domain-invariant features with low task bias.
Experimental results show that our proposal achieves good performance on several benchmarks.
arXiv Detail & Related papers (2024-06-27T08:37:26Z)
- SALUDA: Surface-based Automotive Lidar Unsupervised Domain Adaptation [62.889835139583965]
We introduce an unsupervised auxiliary task of learning an implicit underlying surface representation simultaneously on source and target data.
As both domains share the same latent representation, the model is forced to accommodate discrepancies between the two sources of data.
Our experiments demonstrate that our method outperforms the current state of the art in both real-to-real and synthetic-to-real scenarios.
arXiv Detail & Related papers (2023-04-06T17:36:23Z)
- Learning with Style: Continual Semantic Segmentation Across Tasks and Domains [25.137859989323537]
Domain adaptation and class incremental learning address domain and task variability separately; a unified solution remains an open problem.
We tackle both facets of the problem together, taking into account the semantic shift within both input and label spaces.
We show how the proposed method outperforms existing approaches, which prove to be ill-equipped to deal with continual semantic segmentation under both task and domain shift.
arXiv Detail & Related papers (2022-10-13T13:24:34Z)
- Deep Unsupervised Domain Adaptation: A Review of Recent Advances and Perspectives [16.68091981866261]
Unsupervised domain adaptation (UDA) has been proposed to counter the performance drop on data from a target domain.
UDA has yielded promising results on natural image processing, video analysis, natural language processing, time-series data analysis, medical image analysis, etc.
arXiv Detail & Related papers (2022-08-15T20:05:07Z)
- Task-specific Inconsistency Alignment for Domain Adaptive Object Detection [38.027790951157705]
Detectors trained on massive labeled data often exhibit dramatic performance degradation in scenarios with a data distribution gap.
We propose Task-specific Inconsistency Alignment (TIA), by developing a new alignment mechanism in separate task spaces.
TIA demonstrates superior results over previous state-of-the-art methods across various scenarios.
arXiv Detail & Related papers (2022-03-29T08:36:33Z)
- Decompose to Adapt: Cross-domain Object Detection via Feature Disentanglement [79.2994130944482]
We design a Domain Disentanglement Faster-RCNN (DDF) to eliminate the source-specific information in the features for detection task learning.
Our DDF method facilitates the feature disentanglement at the global and local stages, with a Global Triplet Disentanglement (GTD) module and an Instance Similarity Disentanglement (ISD) module.
Our DDF method outperforms state-of-the-art methods on four benchmark UDA object detection tasks, demonstrating its effectiveness and wide applicability.
arXiv Detail & Related papers (2022-01-06T05:43:01Z)
- Self-Taught Cross-Domain Few-Shot Learning with Weakly Supervised Object Localization and Task-Decomposition [84.24343796075316]
We propose a task-expansion-decomposition framework for Cross-Domain Few-Shot Learning.
The proposed Self-Taught (ST) approach alleviates the problem of non-target guidance by constructing task-oriented metric spaces.
We conduct experiments under the cross-domain setting including 8 target domains: CUB, Cars, Places, Plantae, CropDiseases, EuroSAT, ISIC, and ChestX.
arXiv Detail & Related papers (2021-09-03T04:23:07Z)
- Learning Cascaded Detection Tasks with Weakly-Supervised Domain Adaptation [44.420874740728095]
We propose a weakly supervised domain adaptation setting which exploits the structure of cascaded detection tasks.
In particular, we learn to infer the attributes solely from the source domain while leveraging 2D bounding boxes as weak labels in both domains.
As our experiments demonstrate, the approach is competitive with fully supervised settings while outperforming unsupervised adaptation approaches by a large margin.
arXiv Detail & Related papers (2021-07-09T16:18:12Z)
- Deep Co-Training with Task Decomposition for Semi-Supervised Domain Adaptation [80.55236691733506]
Semi-supervised domain adaptation (SSDA) aims to adapt models trained from a labeled source domain to a different but related target domain.
We propose to explicitly decompose the SSDA task into two sub-tasks: a semi-supervised learning (SSL) task in the target domain and an unsupervised domain adaptation (UDA) task across domains.
arXiv Detail & Related papers (2020-07-24T17:57:54Z)
- Alleviating Semantic-level Shift: A Semi-supervised Domain Adaptation Method for Semantic Segmentation [97.8552697905657]
A key challenge of this task is how to alleviate the data distribution discrepancy between the source and target domains.
We propose Alleviating Semantic-level Shift (ASS), which can successfully promote the distribution consistency from both global and local views.
We apply our ASS to two domain adaptation tasks, from GTA5 to Cityscapes and from Synthia to Cityscapes.
arXiv Detail & Related papers (2020-04-02T03:25:05Z)