A Survey of Unsupervised Domain Adaptation for Visual Recognition
- URL: http://arxiv.org/abs/2112.06745v1
- Date: Mon, 13 Dec 2021 15:55:23 GMT
- Title: A Survey of Unsupervised Domain Adaptation for Visual Recognition
- Authors: Youshan Zhang
- Abstract summary: Domain Adaptation (DA) aims to mitigate the domain shift problem when transferring knowledge from one domain to another.
Unsupervised DA (UDA) deals with a labeled source domain and an unlabeled target domain.
- Score: 2.8935588665357077
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: While huge volumes of unlabeled data are generated and made available in many
domains, the demand for automated understanding of visual data is higher than
ever before. Most existing machine learning models typically rely on massive
amounts of labeled training data to achieve high performance. Unfortunately,
such a requirement cannot be met in real-world applications. The number of
labels is limited and manually annotating data is expensive and time-consuming.
It is often necessary to transfer knowledge from an existing labeled domain to
a new domain. However, model performance degrades because of the differences
between domains (domain shift or dataset bias). To overcome the burden of
annotation, Domain Adaptation (DA) aims to mitigate the domain shift problem
when transferring knowledge from one domain into another similar but different
domain. Unsupervised DA (UDA) deals with a labeled source domain and an
unlabeled target domain. The principal objective of UDA is to reduce the domain
discrepancy between the labeled source data and unlabeled target data and to
learn domain-invariant representations across the two domains during training.
In this paper, we first define the UDA problem. Second, we overview the
state-of-the-art methods for different categories of UDA, covering both
traditional and deep learning based approaches. Finally, we collect frequently
used benchmark datasets and report the results of state-of-the-art UDA methods
on visual recognition problems.
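To make the UDA objective above concrete, the snippet below is a minimal, illustrative sketch (not code from the survey or any particular method): a shared encoder is trained with a cross-entropy loss on a labeled source batch plus a linear-kernel maximum mean discrepancy (MMD) penalty that pulls source and target feature distributions together. The architecture, class count, and the lambda_mmd weight are assumed values for illustration only.

```python
# Minimal discrepancy-based UDA sketch (illustrative assumptions throughout).
import torch
import torch.nn as nn

def linear_mmd(source_feats, target_feats):
    # Squared distance between the mean source embedding and the mean target embedding.
    delta = source_feats.mean(dim=0) - target_feats.mean(dim=0)
    return (delta * delta).sum()

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
classifier = nn.Linear(256, 10)  # assumes 10 classes shared by both domains
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3)
ce_loss = nn.CrossEntropyLoss()
lambda_mmd = 0.5  # assumed trade-off between task loss and domain alignment

def train_step(xs, ys, xt):
    """xs, ys: labeled source batch; xt: unlabeled target batch."""
    fs, ft = encoder(xs), encoder(xt)
    # Supervised loss on source labels + discrepancy penalty across domains.
    loss = ce_loss(classifier(fs), ys) + lambda_mmd * linear_mmd(fs, ft)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random tensors standing in for source and target images.
xs, ys = torch.randn(32, 3, 32, 32), torch.randint(0, 10, (32,))
xt = torch.randn(32, 3, 32, 32)
print(train_step(xs, ys, xt))
```

Adversarial variants (e.g., DANN-style training with a domain discriminator and gradient reversal) replace the discrepancy term with a domain-classification loss but keep the same overall structure of a shared, domain-invariant encoder.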
Related papers
- More is Better: Deep Domain Adaptation with Multiple Sources [34.26271755493111]
Multi-source domain adaptation (MDA) is a powerful and practical extension in which the labeled data may be collected from multiple sources with different distributions.
In this survey, we first define various MDA strategies. Then we systematically summarize and compare modern MDA methods in the deep learning era from different perspectives.
arXiv Detail & Related papers (2024-05-01T03:37:12Z) - Multi-Source Domain Adaptation for Object Detection with Prototype-based Mean-teacher [11.616494893839757]
Adapting visual object detectors to operational target domains is a challenging task, commonly achieved using unsupervised domain adaptation (UDA) methods.
Recent studies have shown that when the labeled dataset comes from multiple source domains, treating them as separate domains improves accuracy and robustness compared to blending these source domains and performing single-source UDA.
This paper proposes a novel MSDA method called Prototype-based Mean Teacher (PMT), which uses class prototypes instead of domain classifiers to encode domain-specific information.
arXiv Detail & Related papers (2023-09-26T14:08:03Z) - Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised
Domain Adaptation [88.5448806952394]
We consider unsupervised domain adaptation (UDA), where labeled data from a source domain and unlabeled data from a target domain are used to learn a classifier for the target domain.
We show that contrastive pre-training, which learns features on unlabeled source and target data and then fine-tunes on labeled source data, is competitive with strong UDA methods (a minimal sketch of this pre-train-then-fine-tune recipe appears after this list).
arXiv Detail & Related papers (2022-04-01T16:56:26Z) - Robust wav2vec 2.0: Analyzing Domain Shift in Self-Supervised
Pre-Training [67.71228426496013]
We show that using target domain data during pre-training leads to large performance improvements across a variety of setups.
We find that pre-training on multiple domains improves performance generalization on domains not seen during training.
arXiv Detail & Related papers (2021-04-02T12:53:15Z) - Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z) - A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z) - Mind the Gap: Enlarging the Domain Gap in Open Set Domain Adaptation [65.38975706997088]
Open set domain adaptation (OSDA) assumes the presence of unknown classes in the target domain.
We show that existing state-of-the-art methods suffer a considerable performance drop in the presence of larger domain gaps.
We propose a novel framework to specifically address the larger domain gaps.
arXiv Detail & Related papers (2020-03-08T14:20:24Z) - Multi-source Domain Adaptation in the Deep Learning Era: A Systematic
Survey [53.656086832255944]
Multi-source domain adaptation (MDA) is a powerful extension in which the labeled data may be collected from multiple sources.
MDA has attracted increasing attention in both academia and industry.
arXiv Detail & Related papers (2020-02-26T08:07:58Z)