Discover, Hallucinate, and Adapt: Open Compound Domain Adaptation for
Semantic Segmentation
- URL: http://arxiv.org/abs/2110.04111v1
- Date: Fri, 8 Oct 2021 13:20:09 GMT
- Title: Discover, Hallucinate, and Adapt: Open Compound Domain Adaptation for
Semantic Segmentation
- Authors: KwanYong Park, Sanghyun Woo, Inkyu Shin, In So Kweon
- Abstract summary: Unsupervised domain adaptation (UDA) for semantic segmentation has been attracting attention recently.
We present a novel framework based on three main design principles: discover, hallucinate, and adapt.
We evaluate our solution on the standard GTA to C-driving benchmark and achieve new state-of-the-art results.
- Score: 91.30558794056056
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised domain adaptation (UDA) for semantic segmentation has been
attracting attention recently, as it could be beneficial for various
label-scarce real-world scenarios (e.g., robot control, autonomous driving,
medical imaging, etc.). Despite the significant progress in this field, current
works mainly focus on a single-source single-target setting, which cannot
handle more practical settings of multiple targets or even unseen targets. In
this paper, we investigate open compound domain adaptation (OCDA), which deals
with mixed and novel situations at the same time, for semantic segmentation. We
present a novel framework based on three main design principles: discover,
hallucinate, and adapt. The scheme first clusters compound target data based on
style, discovering multiple latent domains (discover). Then, it hallucinates
multiple latent target domains in the source by using image translation
(hallucinate). This step ensures that the latent domains in the source and the
target are paired. Finally, target-to-source alignment is learned separately
between domains (adapt). At a high level, our solution replaces a hard OCDA
problem with multiple, much easier UDA problems. We evaluate our solution on
the standard GTA to C-driving benchmark and achieve new state-of-the-art
results.
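As a rough illustration of the discover step, the sketch below clusters unlabeled compound-target images by low-level style statistics. It is a minimal sketch under stated assumptions, not the authors' implementation: the shallow VGG-16 style encoder, the channel-wise mean/std style code, the use of k-means, the number of latent domains, and the dummy data are all illustrative choices.

```python
# Hedged sketch of the "discover" step: cluster unlabeled compound-target
# images into K latent domains by style. Assumptions (not from the paper):
# style = channel-wise mean/std of shallow VGG-16 features, K-means clustering.
import torch
import torchvision.models as models
from sklearn.cluster import KMeans


def style_code(images: torch.Tensor, encoder: torch.nn.Module) -> torch.Tensor:
    """Per-image style vector: channel-wise mean and std of shallow features."""
    with torch.no_grad():
        feats = encoder(images)            # (N, C, H, W)
    mu = feats.mean(dim=(2, 3))            # (N, C)
    sigma = feats.std(dim=(2, 3))          # (N, C)
    return torch.cat([mu, sigma], dim=1)   # (N, 2C)


def discover_latent_domains(images: torch.Tensor, k: int = 3) -> torch.Tensor:
    """Assign each target image to one of k latent (style) domains."""
    # A shallow VGG-16 block as a stand-in style encoder (illustrative choice).
    encoder = models.vgg16(weights=None).features[:9].eval()
    codes = style_code(images, encoder).numpy()
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(codes)
    return torch.as_tensor(labels)


if __name__ == "__main__":
    # Dummy batch standing in for unlabeled compound-target images.
    target_batch = torch.rand(16, 3, 128, 128)
    print(discover_latent_domains(target_batch, k=3))
```

In such a sketch, the cluster assignments would drive the later stages: one image-translation model hallucinates each discovered style in the source, and one per-domain alignment branch handles the adapt step.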
Related papers
- Delving into the Continuous Domain Adaptation [12.906272389564593]
Existing domain adaptation methods assume that domain discrepancies are caused by a few discrete attributes and variations.
We argue that this is not realistic, as it is implausible to define real-world datasets using only a few discrete attributes.
We propose to investigate a new problem namely the Continuous Domain Adaptation.
arXiv Detail & Related papers (2022-08-28T02:32:25Z)
- Few-shot Unsupervised Domain Adaptation for Multi-modal Cardiac Image Segmentation [16.94252910722673]
Unsupervised domain adaptation (UDA) methods intend to reduce the gap between source and target domains by using unlabeled target domain and labeled source domain data.
In this paper, we explore the potential of UDA in a more challenging yet realistic scenario where only one unlabeled target patient sample is available.
We first generate target-style images from source images and explore diverse target styles from a single target patient with Random Adaptive Instance Normalization (RAIN).
Then, a segmentation network is trained in a supervised manner with the generated target images.
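As a rough illustration of the style-transfer idea in this summary, the hedged snippet below applies adaptive instance normalization with randomly perturbed target statistics; the Gaussian perturbation, noise scale, and feature shapes are stand-ins for the paper's learned RAIN module, not its actual implementation.

```python
# Hedged sketch of AdaIN-style target-style feature generation, loosely in the
# spirit of RAIN. The Gaussian perturbation of the target statistics is a
# stand-in for RAIN's learned style sampling and is an assumption here.
import torch


def randomized_adain(content: torch.Tensor, style: torch.Tensor,
                     eps: float = 1e-5, noise: float = 0.1) -> torch.Tensor:
    """Re-normalize source (content) features with perturbed target statistics."""
    c_mu = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mu = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    # Perturb the target statistics to explore diverse styles from one sample.
    s_mu = s_mu + noise * torch.randn_like(s_mu)
    s_std = s_std * (1.0 + noise * torch.randn_like(s_std))
    return s_std * (content - c_mu) / c_std + s_mu


if __name__ == "__main__":
    source_feat = torch.rand(4, 64, 32, 32)   # features of labeled source images
    target_feat = torch.rand(1, 64, 32, 32)   # features of the single target sample
    stylized = randomized_adain(source_feat, target_feat)
    print(stylized.shape)  # torch.Size([4, 64, 32, 32])
```

A segmentation network would then be trained with the source labels on the stylized outputs, as the summary describes.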
arXiv Detail & Related papers (2022-01-28T19:28:48Z)
- Seeking Similarities over Differences: Similarity-based Domain Alignment for Adaptive Object Detection [86.98573522894961]
We propose a framework that generalizes the components commonly used by Unsupervised Domain Adaptation (UDA) algorithms for detection.
Specifically, we propose a novel UDA algorithm, ViSGA, that leverages the best design choices and introduces a simple but effective method to aggregate features at instance-level.
We show that both similarity-based grouping and adversarial training allow our model to focus on coarsely aligning feature groups, without being forced to match all instances across loosely aligned domains.
arXiv Detail & Related papers (2021-10-04T13:09:56Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
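The cross-domain alignment described here can be pictured with an InfoNCE-style objective; the one-to-one source/target pairing and the temperature in the hedged sketch below are illustrative assumptions, not the paper's exact loss.

```python
# Hedged sketch of a cross-domain contrastive (InfoNCE-style) objective:
# source feature i is pulled toward target feature i (its assumed positive)
# and pushed away from the other target features. The one-to-one pairing and
# the temperature are illustrative assumptions, not the paper's exact loss.
import torch
import torch.nn.functional as F


def cross_domain_info_nce(src_feats: torch.Tensor, tgt_feats: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    src = F.normalize(src_feats, dim=1)        # (N, D)
    tgt = F.normalize(tgt_feats, dim=1)        # (N, D)
    logits = src @ tgt.t() / temperature       # (N, N) cosine similarities
    labels = torch.arange(src.size(0))         # positives sit on the diagonal
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    loss = cross_domain_info_nce(torch.randn(8, 128), torch.randn(8, 128))
    print(loss.item())
```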
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z)
- Mind the Gap: Enlarging the Domain Gap in Open Set Domain Adaptation [65.38975706997088]
Open set domain adaptation (OSDA) assumes the presence of unknown classes in the target domain.
We show that existing state-of-the-art methods suffer a considerable performance drop in the presence of larger domain gaps.
We propose a novel framework to specifically address the larger domain gaps.
arXiv Detail & Related papers (2020-03-08T14:20:24Z)
- MADAN: Multi-source Adversarial Domain Aggregation Network for Domain Adaptation [58.38749495295393]
Domain adaptation aims to learn a transferable model to bridge the domain shift between one labeled source domain and another sparsely labeled or unlabeled target domain.
Recent multi-source domain adaptation (MDA) methods do not consider the pixel-level alignment between sources and target.
We propose a novel MDA framework to address these challenges.
arXiv Detail & Related papers (2020-02-19T21:22:00Z)