Dual-level Interaction for Domain Adaptive Semantic Segmentation
- URL: http://arxiv.org/abs/2307.07972v2
- Date: Thu, 10 Aug 2023 23:40:37 GMT
- Title: Dual-level Interaction for Domain Adaptive Semantic Segmentation
- Authors: Dongyu Yao, Boheng Li
- Abstract summary: We propose a dual-level interaction for domain adaptation (DIDA) in semantic segmentation.
Specifically, we encourage different augmented views of the same pixel to have similar class predictions.
Our method outperforms the state-of-the-art by a notable margin, especially on confusing and long-tailed classes.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Self-training has recently secured its position in domain adaptive
semantic segmentation, where a model is trained with target-domain
pseudo-labels. Current advances have mitigated the noisy pseudo-labels arising
from the domain gap, but they still struggle with erroneous pseudo-labels
near the decision boundaries of the semantic classifier. In this paper, we tackle
this issue by proposing a dual-level interaction for domain adaptation (DIDA) in
semantic segmentation. Specifically, we encourage different augmented views
of the same pixel to have not only similar class predictions (semantic level)
but also similar relationships with respect to other pixels
(instance level). Since it is infeasible to keep features of all pixel instances
for a dataset, we maintain a labeled instance bank with dynamic
updating strategies to selectively store the informative features of instances.
Further, DIDA performs cross-level interaction with scattering and gathering
techniques to regenerate more reliable pseudo-labels. Our method outperforms
the state of the art by a notable margin, especially on confusing and
long-tailed classes. Code is available at
https://github.com/RainJamesY/DIDA
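The two consistency terms and the instance bank described above can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: function names, shapes, the FIFO update rule (DIDA's dynamic updating is more selective), and the cross-entropy form of both losses are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def semantic_consistency(logits_a, logits_b):
    """Semantic-level term: cross-entropy between the class predictions
    of two augmented views of the same pixels."""
    p_a, p_b = softmax(logits_a), softmax(logits_b)
    return -np.mean(np.sum(p_a * np.log(p_b + 1e-8), axis=-1))

def instance_consistency(feat_a, feat_b, bank, tau=0.1):
    """Instance-level term: each view's similarity distribution over the
    instance bank should match the other view's."""
    def sim_dist(f):
        f = f / np.linalg.norm(f, axis=-1, keepdims=True)
        b = bank / np.linalg.norm(bank, axis=-1, keepdims=True)
        return softmax(f @ b.T / tau)
    q_a, q_b = sim_dist(feat_a), sim_dist(feat_b)
    return -np.mean(np.sum(q_a * np.log(q_b + 1e-8), axis=-1))

class InstanceBank:
    """Per-class feature bank with a simple FIFO update (a stand-in for
    the paper's dynamic updating strategies)."""
    def __init__(self, num_classes, size):
        self.banks = {c: [] for c in range(num_classes)}
        self.size = size

    def update(self, feats, labels):
        for f, c in zip(feats, labels):
            self.banks[c].append(f)
            if len(self.banks[c]) > self.size:
                self.banks[c].pop(0)  # evict the oldest feature

    def all_features(self):
        return np.stack([f for fs in self.banks.values() for f in fs])
```

In training, both terms would be added to the usual self-training loss; here they merely show how the two levels constrain the same pair of views.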
Related papers
- Semantic Connectivity-Driven Pseudo-labeling for Cross-domain Segmentation [89.41179071022121]
Self-training is a prevailing approach in cross-domain semantic segmentation.
We propose a novel approach called Semantic Connectivity-driven pseudo-labeling.
This approach formulates pseudo-labels at the connectivity level and thus can facilitate learning structured and low-noise semantics.
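A toy sketch of the connectivity-level idea, under assumed details (4-connectivity, a mean-confidence threshold, the function name): group pseudo-labeled pixels into connected components of the same class and accept or discard each component as a whole, rather than pixel by pixel. The paper's actual formulation is more involved.

```python
import numpy as np
from collections import deque

def connectivity_pseudo_labels(labels, conf, thresh=0.7, ignore=255):
    """Keep or drop pseudo-labels per 4-connected component: a component
    whose mean confidence falls below `thresh` is set to `ignore`."""
    h, w = labels.shape
    out = labels.copy()
    seen = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            if seen[y, x]:
                continue
            # BFS over the component containing (y, x)
            comp, queue = [], deque([(y, x)])
            seen[y, x] = True
            while queue:
                cy, cx = queue.popleft()
                comp.append((cy, cx))
                for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                    if 0 <= ny < h and 0 <= nx < w and not seen[ny, nx] \
                            and labels[ny, nx] == labels[y, x]:
                        seen[ny, nx] = True
                        queue.append((ny, nx))
            ys, xs = map(list, zip(*comp))
            if conf[ys, xs].mean() < thresh:
                out[ys, xs] = ignore  # discard the noisy component as a whole
    return out
```

Filtering at the component level is what gives the "structured, low-noise" effect: an isolated high-confidence pixel inside an unreliable region no longer survives on its own.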
arXiv Detail & Related papers (2023-12-11T12:29:51Z)
- Contrast, Stylize and Adapt: Unsupervised Contrastive Learning Framework for Domain Adaptive Semantic Segmentation [18.843639142342642]
We present CONtrastive FEaTure and pIxel alignment for bridging the domain gap at both the pixel and feature levels.
Our experiments demonstrate that our method outperforms existing state-of-the-art methods using DeepLabV2.
arXiv Detail & Related papers (2023-06-15T12:50:46Z)
- SPCL: A New Framework for Domain Adaptive Semantic Segmentation via Semantic Prototype-based Contrastive Learning [6.705297811617307]
Domain adaptation can help in transferring knowledge from a labeled source domain to an unlabeled target domain.
We propose a novel semantic prototype-based contrastive learning framework for fine-grained class alignment.
Our method is easy to implement and attains superior results compared to state-of-the-art approaches.
arXiv Detail & Related papers (2021-11-24T09:26:07Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Semantic Distribution-aware Contrastive Adaptation for Semantic Segmentation [50.621269117524925]
Domain adaptive semantic segmentation refers to making predictions on a certain target domain with only annotations of a specific source domain.
We present a semantic distribution-aware contrastive adaptation algorithm that enables pixel-wise representation alignment.
We evaluate SDCA on multiple benchmarks, achieving considerable improvements over existing algorithms.
arXiv Detail & Related papers (2021-05-11T13:21:25Z)
- Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
- Affinity Space Adaptation for Semantic Segmentation Across Domains [57.31113934195595]
In this paper, we address the problem of unsupervised domain adaptation (UDA) in semantic segmentation.
Motivated by the fact that the source and target domains share invariant semantic structures, we propose to exploit this invariance across domains.
We develop two affinity space adaptation strategies: affinity space cleaning and adversarial affinity space alignment.
arXiv Detail & Related papers (2020-09-26T10:28:11Z)
- Domain Adaptive Semantic Segmentation Using Weak Labels [115.16029641181669]
We propose a novel framework for domain adaptation in semantic segmentation with image-level weak labels in the target domain.
We develop a weak-label classification module that encourages the network to attend to certain categories.
In experiments, we show considerable improvements over the existing state of the art in UDA and present a new benchmark in the WDA setting.
arXiv Detail & Related papers (2020-07-30T01:33:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.