Domain Adaptive Semantic Segmentation Using Weak Labels
- URL: http://arxiv.org/abs/2007.15176v2
- Date: Wed, 12 Aug 2020 10:05:48 GMT
- Title: Domain Adaptive Semantic Segmentation Using Weak Labels
- Authors: Sujoy Paul, Yi-Hsuan Tsai, Samuel Schulter, Amit K. Roy-Chowdhury,
Manmohan Chandraker
- Abstract summary: We propose a novel framework for domain adaptation in semantic segmentation with image-level weak labels in the target domain.
We develop a weak-label classification module that forces the network to attend to certain categories.
In experiments, we show considerable improvements over the existing state of the art in UDA and present a new benchmark in the WDA setting.
- Score: 115.16029641181669
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning semantic segmentation models requires a huge amount of pixel-wise
labeling. However, labeled data may only be available abundantly in a domain
different from the desired target domain, which only has minimal or no
annotations. In this work, we propose a novel framework for domain adaptation
in semantic segmentation with image-level weak labels in the target domain. The
weak labels may be obtained based on a model prediction for unsupervised domain
adaptation (UDA), or from a human annotator in a new weakly-supervised domain
adaptation (WDA) paradigm for semantic segmentation. Using weak labels is both
practical and useful, since (i) collecting image-level target annotations is
comparably cheap in WDA and incurs no cost in UDA, and (ii) it opens the
opportunity for category-wise domain alignment. Our framework uses weak labels
to enable the interplay between feature alignment and pseudo-labeling,
improving both in the process of domain adaptation. Specifically, we develop a
weak-label classification module that forces the network to attend to certain
categories, and then use such training signals to guide the proposed
category-wise alignment method. In experiments, we show considerable
improvements over the existing state of the art in UDA and present
a new benchmark in the WDA setting. Project page is at
http://www.nec-labs.com/~mas/WeakSegDA.
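The weak-label classification idea above can be sketched as follows: per-pixel category scores are pooled into one image-level score per category and trained with a multi-label classification loss against the weak labels, which pushes the network to attend to categories present in the image. This is a minimal NumPy sketch under stated assumptions; mean pooling and sigmoid cross-entropy are illustrative choices, and the function and variable names are hypothetical, not taken from the paper:

```python
import numpy as np

def weak_label_loss(seg_logits, image_labels):
    """Multi-label classification loss from image-level weak labels.

    seg_logits: (C, H, W) per-pixel category scores from a segmentation head.
    image_labels: (C,) binary vector, 1 if the category appears in the image.
    """
    # Pool per-pixel scores to one image-level score per category
    # (mean pooling here; other pooling choices are possible).
    pooled = seg_logits.reshape(seg_logits.shape[0], -1).mean(axis=1)
    probs = 1.0 / (1.0 + np.exp(-pooled))  # sigmoid per category
    eps = 1e-7
    # Binary cross-entropy over categories: present categories are pushed
    # up, absent ones down, so the network attends to the right classes.
    return float(-np.mean(image_labels * np.log(probs + eps)
                          + (1 - image_labels) * np.log(1 - probs + eps)))
```

The image-level training signal obtained this way can then gate which categories participate in the category-wise alignment step.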
Related papers
- Inter-Domain Mixup for Semi-Supervised Domain Adaptation [108.40945109477886]
Semi-supervised domain adaptation (SSDA) aims to bridge source and target domain distributions, with a small number of target labels available.
Existing SSDA work fails to make full use of label information from both source and target domains for feature alignment across domains.
This paper presents a novel SSDA approach, Inter-domain Mixup with Neighborhood Expansion (IDMNE), to tackle this issue.
arXiv Detail & Related papers (2024-01-21T10:20:46Z)
- Prototypical Contrast Adaptation for Domain Adaptive Semantic Segmentation [52.63046674453461]
Prototypical Contrast Adaptation (ProCA) is a contrastive learning method for unsupervised domain adaptive semantic segmentation.
ProCA incorporates inter-class information into class-wise prototypes, and adopts the class-centered distribution alignment for adaptation.
arXiv Detail & Related papers (2022-07-14T04:54:26Z)
- CA-UDA: Class-Aware Unsupervised Domain Adaptation with Optimal Assignment and Pseudo-Label Refinement [84.10513481953583]
Unsupervised domain adaptation (UDA) focuses on the selection of good pseudo-labels as surrogates for the missing labels in the target data.
Source-domain bias that deteriorates the pseudo-labels can still exist, since a shared network for the source and target domains is typically used for pseudo-label selection.
We propose CA-UDA to improve the quality of the pseudo-labels and UDA results with optimal assignment, a pseudo-label refinement strategy and class-aware domain alignment.
arXiv Detail & Related papers (2022-05-26T18:45:04Z)
- ADeADA: Adaptive Density-aware Active Domain Adaptation for Semantic Segmentation [23.813813896293876]
We present ADeADA, a general active domain adaptation framework for semantic segmentation.
With less than 5% target domain annotations, our method reaches results comparable to full supervision.
arXiv Detail & Related papers (2022-02-14T05:17:38Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Get away from Style: Category-Guided Domain Adaptation for Semantic Segmentation [15.002381934551359]
Unsupervised domain adaptation (UDA) becomes more and more popular in tackling real-world problems without ground truth of the target domain.
In this paper, we focus on UDA for semantic segmentation task.
We propose a style-independent content feature extraction mechanism that keeps extracted features in a similar space regardless of style.
Secondly, to keep the balance of pseudo labels on each category, we propose a category-guided threshold mechanism to choose category-wise pseudo labels for self-supervised learning.
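A category-guided threshold for pseudo-label selection can be sketched as follows: instead of one global confidence cutoff (which favours easy classes), each category gets its own threshold, keeping the selected pseudo-labels balanced. The NumPy sketch below uses the class-wise median confidence as the per-category threshold; that choice, and all names, are illustrative rather than the paper's exact rule:

```python
import numpy as np

def category_wise_pseudo_labels(probs, percentile=50):
    """Select pseudo-labels with one confidence threshold per category.

    probs: (C, N) softmax probabilities for N pixels over C categories.
    Returns: (N,) pseudo-label per pixel, or -1 where no class is kept.
    """
    hard = probs.argmax(axis=0)   # most likely class per pixel
    conf = probs.max(axis=0)      # its confidence
    labels = np.full(hard.shape, -1, dtype=int)
    for c in range(probs.shape[0]):
        mask = hard == c
        if not mask.any():
            continue
        # Class-specific threshold: confidences are compared only against
        # other pixels predicted as the same class.
        thr = np.percentile(conf[mask], percentile)
        labels[mask & (conf >= thr)] = c
    return labels
```

Because each class is thresholded against its own confidence distribution, rare or hard classes still contribute pseudo-labels for self-supervised training.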
arXiv Detail & Related papers (2021-03-29T10:00:50Z)
- Your Classifier can Secretly Suffice Multi-Source Domain Adaptation [72.47706604261992]
Multi-Source Domain Adaptation (MSDA) deals with the transfer of task knowledge from multiple labeled source domains to an unlabeled target domain.
We present a different perspective to MSDA wherein deep models are observed to implicitly align the domains under label supervision.
arXiv Detail & Related papers (2021-03-20T12:44:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.