Unsupervised Domain Expansion for Visual Categorization
- URL: http://arxiv.org/abs/2104.00233v1
- Date: Thu, 1 Apr 2021 03:27:35 GMT
- Title: Unsupervised Domain Expansion for Visual Categorization
- Authors: Jie Wang and Kaibin Tian and Dayong Ding and Gang Yang and Xirong Li
- Abstract summary: Unsupervised domain expansion (UDE) aims to adapt a deep model for the target domain with its unlabeled data, while maintaining the model's performance on the source domain.
We develop a knowledge distillation based learning mechanism, enabling KDDE to optimize a single objective wherein the source and target domains are equally treated.
- Score: 12.427064803221729
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Expanding visual categorization into a novel domain without the need
for extra annotation has been a long-term interest for multimedia intelligence.
Previously, this challenge has been approached by unsupervised domain
adaptation (UDA). Given labeled data from a source domain and unlabeled data
from a target domain, UDA seeks a deep representation that is both
discriminative and domain-invariant. While UDA focuses on the target domain, we
argue that performance on both the source and target domains matters, since in
practice it is unknown which domain a test example comes from. In this paper we
extend UDA by proposing a new task called unsupervised domain expansion (UDE),
which aims to adapt a deep model for the target domain with its unlabeled data,
while maintaining the model's performance on the source domain. We propose
Knowledge Distillation Domain Expansion (KDDE) as a general method for the UDE
task. Its domain-adaptation module can be instantiated with any existing model.
We develop a knowledge distillation based learning mechanism, enabling KDDE to
optimize a single objective wherein the source and target domains are equally
treated. Extensive experiments on two major benchmarks, i.e., Office-Home and
DomainNet, show that KDDE compares favorably against four competitive
baselines, i.e., DDC, DANN, DAAN, and CDAN, for both UDA and UDE tasks. Our
study also reveals that the current UDA models improve their performance on the
target domain at the cost of noticeable performance loss on the source domain.
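To make the distillation mechanism concrete, below is a minimal PyTorch-style sketch of a single distillation objective in the spirit of KDDE, in which each sample is taught by a domain-appropriate teacher and both domains receive equal weight. The two-teacher setup, the temperature value, and all names are illustrative assumptions based on the abstract, not the paper's exact implementation.

```python
# Minimal sketch of a KDDE-style single-objective distillation loss.
# Assumptions (not from the paper): PyTorch; a source-only teacher and a
# UDA-adapted teacher are already trained; temperature T = 2.0; the
# per-sample teacher is chosen by the domain the sample comes from.
import torch
import torch.nn.functional as F

def kdde_loss(student, teacher_src, teacher_tgt, x_src, x_tgt, T=2.0):
    """Distill both domains into one student with a single objective.

    Source images are taught by the source-only model; target images by
    the domain-adapted model. Both terms carry equal weight, so neither
    domain is privileged.
    """
    def kd(teacher, x):
        with torch.no_grad():                       # teachers stay frozen
            p_t = F.softmax(teacher(x) / T, dim=1)  # soft teacher labels
        log_p_s = F.log_softmax(student(x) / T, dim=1)
        # KL(teacher || student), scaled by T^2 as in standard distillation
        return F.kl_div(log_p_s, p_t, reduction="batchmean") * T * T

    return 0.5 * (kd(teacher_src, x_src) + kd(teacher_tgt, x_tgt))
```

Under this reading, the domain-adaptation module simply supplies teacher_tgt, so any of the baselines above (DDC, DANN, DAAN, CDAN) could be plugged in.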
Related papers
- Style Adaptation for Domain-adaptive Semantic Segmentation [2.1365683052370046]
Domain discrepancy causes a significant performance drop when network models trained on source-domain data are applied to the target domain.
We introduce a straightforward approach to mitigating the domain discrepancy that requires no additional parameter computation and integrates seamlessly with self-training-based UDA methods.
Our proposed method attains a UDA performance of 76.93 mIoU on the GTA->Cityscapes benchmark, an improvement of +1.03 percentage points over the previous state of the art.
arXiv Detail & Related papers (2024-04-25T02:51:55Z)
- Make the U in UDA Matter: Invariant Consistency Learning for Unsupervised Domain Adaptation [86.61336696914447]
We propose to make the U in Unsupervised DA matter by giving equal status to the two domains, and dub our approach "Invariant CONsistency learning" (ICON).
ICON achieves the state-of-the-art performance on the classic UDA benchmarks: Office-Home and VisDA-2017, and outperforms all the conventional methods on the challenging WILDS 2.0 benchmark.
arXiv Detail & Related papers (2023-09-22T09:43:32Z)
- Open-Set Domain Adaptation with Visual-Language Foundation Models [51.49854335102149]
Unsupervised domain adaptation (UDA) has proven to be very effective in transferring knowledge from a source domain to a target domain with unlabeled data.
Open-set domain adaptation (ODA) has emerged as a potential solution for identifying, during the training phase, target-domain classes that are absent from the source domain.
arXiv Detail & Related papers (2023-07-30T11:38:46Z)
- AVATAR: Adversarial self-superVised domain Adaptation network for TARget domain [11.764601181046496]
This paper presents an unsupervised domain adaptation (UDA) method for making predictions on unlabeled target-domain data.
We propose the Adversarial self-superVised domain Adaptation network for the TARget domain (AVATAR) algorithm.
Our proposed model significantly outperforms state-of-the-art methods on three UDA benchmarks.
arXiv Detail & Related papers (2023-04-28T20:31:56Z)
- Learning Feature Decomposition for Domain Adaptive Monocular Depth Estimation [51.15061013818216]
Supervised approaches have led to great success with the advance of deep learning, but they rely on large quantities of ground-truth depth annotations.
Unsupervised domain adaptation (UDA) transfers knowledge from labeled source data to unlabeled target data, so as to relax the constraint of supervised learning.
We propose a novel UDA method for monocular depth estimation (MDE), referred to as Learning Feature Decomposition for Adaptation (LFDA), which learns to decompose the feature space into content and style components.
arXiv Detail & Related papers (2022-07-30T08:05:35Z)
- Domain-Agnostic Prior for Transfer Semantic Segmentation [197.9378107222422]
Unsupervised domain adaptation (UDA) is an important topic in the computer vision community.
We present a mechanism that regularizes cross-domain representation learning with a domain-agnostic prior (DAP).
Our research reveals that UDA benefits greatly from better proxies, possibly from other data modalities.
arXiv Detail & Related papers (2022-04-06T09:13:25Z)
- Co-Teaching for Unsupervised Domain Adaptation and Expansion [12.455364571022576]
Unsupervised Domain Adaptation (UDA) essentially trades a model's performance on a source domain for improving its performance on a target domain.
Unsupervised domain expansion (UDE) tries to adapt the model for the target domain as UDA does, while maintaining its source-domain performance.
In both the UDA and UDE settings, a model tailored to a given domain, be it the source or the target domain, is assumed to handle samples from that domain well.
We exploit this finding, and accordingly propose Co-Teaching (CT); a minimal sketch illustrating this premise appears after this list.
arXiv Detail & Related papers (2022-04-04T02:34:26Z)
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z)
- Mind the Gap: Enlarging the Domain Gap in Open Set Domain Adaptation [65.38975706997088]
Open set domain adaptation (OSDA) assumes the presence of unknown classes in the target domain.
We show that existing state-of-the-art methods suffer a considerable performance drop in the presence of larger domain gaps.
We propose a novel framework to specifically address the larger domain gaps.
arXiv Detail & Related papers (2020-03-08T14:20:24Z)
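As promised in the Co-Teaching (CT) entry above, here is a minimal sketch illustrating the premise that a model tailored to a given domain handles that domain's samples well. The confidence-based routing rule below is purely an assumption for illustration, not the CT method itself, whose mechanism the summary does not spell out.

```python
# Illustrative sketch only: one way to combine two domain-tailored models
# when the domain of a test sample is unknown (the UDE setting). The
# routing rule is a hypothetical, not taken from the CT paper.
import torch
import torch.nn.functional as F

@torch.no_grad()
def route_by_confidence(model_src, model_tgt, x):
    """Classify x with whichever domain-specific model is more confident.

    Rationale: if each model handles its own domain well, its maximum
    softmax probability should be higher on samples from that domain.
    """
    p_src = F.softmax(model_src(x), dim=1)
    p_tgt = F.softmax(model_tgt(x), dim=1)
    use_src = p_src.max(dim=1).values >= p_tgt.max(dim=1).values
    return torch.where(use_src.unsqueeze(1), p_src, p_tgt).argmax(dim=1)
```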