Bidirectional Domain Mixup for Domain Adaptive Semantic Segmentation
- URL: http://arxiv.org/abs/2303.09779v1
- Date: Fri, 17 Mar 2023 05:22:44 GMT
- Title: Bidirectional Domain Mixup for Domain Adaptive Semantic Segmentation
- Authors: Daehan Kim, Minseok Seo, Kwanyong Park, Inkyu Shin, Sanghyun Woo,
In-So Kweon, Dong-Geol Choi
- Abstract summary: This paper systematically studies the impact of mixup on the domain adaptive semantic segmentation task.
Specifically, we achieve domain mixup in two steps: cut and paste.
We provide extensive ablation experiments to empirically verify the main components of our framework.
- Score: 73.3083304858763
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Mixup provides interpolated training samples and allows the model to obtain
smoother decision boundaries for better generalization. The idea can be
naturally applied to the domain adaptation task, where we can mix the source
and target samples to obtain domain-mixed samples for better adaptation.
However, the extension of the idea from classification to segmentation (i.e.,
structured output) is nontrivial. This paper systematically studies the impact
of mixup on the domain adaptive semantic segmentation task and presents a
simple yet effective mixup strategy called Bidirectional Domain Mixup (BDM).
Specifically, we achieve domain mixup in two steps: cut and paste. Given a
warm-up model trained with any adaptation technique, we forward the source and
target samples and perform a simple threshold-based cutout of the unconfident
regions (cut). Then, we fill in the dropped regions with region patches from
the other domain (paste). In doing so, we jointly consider class distribution,
spatial structure, and pseudo-label confidence. Based on our analysis, we find
that BDM leaves domain-transferable regions by cutting and balances the
dataset-level class distribution while preserving natural scene context by
pasting. We couple our proposal with various state-of-the-art adaptation
models and observe consistent, significant improvements. We also provide
extensive ablation experiments to empirically verify the main components of
our framework. Visit our project page with the code at
https://sites.google.com/view/bidirectional-domain-mixup
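To make the two-step cut-and-paste concrete, here is a minimal sketch in PyTorch of one direction of the mixup: confidence maps from the warm-up model decide which regions to cut, and the dropped pixels are filled with the other domain's pixels and labels. All names, shapes, and the threshold value are illustrative assumptions, and the class-distribution and spatial-structure criteria described in the abstract are omitted; the authors' implementation is available from the project page above.
```python
# Minimal sketch of threshold-based cut-and-paste domain mixup (one direction).
# Names, shapes, and the threshold are assumptions, not the official BDM code.
import torch

def domain_mixup(img_a, lbl_a, prob_a, img_b, lbl_b, tau=0.9):
    """img_*: (3, H, W) images, lbl_*: (H, W) labels or pseudo labels,
    prob_*: (C, H, W) softmax outputs of the warm-up model."""
    conf_a, _ = prob_a.max(dim=0)          # per-pixel confidence for domain A
    drop = conf_a < tau                    # "cut": unconfident regions of A

    mixed_img = img_a.clone()
    mixed_lbl = lbl_a.clone()
    mixed_img[:, drop] = img_b[:, drop]    # "paste": fill with domain-B pixels
    mixed_lbl[drop] = lbl_b[drop]          # and the corresponding labels
    return mixed_img, mixed_lbl

# Applying the routine in both directions yields the two domain-mixed samples:
#   domain_mixup(tgt_img, tgt_pseudo_lbl, tgt_prob, src_img, src_lbl)
#   domain_mixup(src_img, src_lbl, src_prob, tgt_img, tgt_pseudo_lbl)
```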
Related papers
- Domain-Rectifying Adapter for Cross-Domain Few-Shot Segmentation [40.667166043101076]
We propose a small adapter for rectifying diverse target domain styles to the source domain.
The adapter is trained to rectify the image features from diverse synthesized target domains to align with the source domain.
Our method achieves promising results on cross-domain few-shot semantic segmentation tasks.
arXiv Detail & Related papers (2024-04-16T07:07:40Z)
- Compositional Semantic Mix for Domain Adaptation in Point Cloud Segmentation [65.78246406460305]
Compositional semantic mixing represents the first unsupervised domain adaptation technique for point cloud segmentation.
We present a two-branch symmetric network architecture capable of concurrently processing point clouds from a source domain (e.g., synthetic) and point clouds from a target domain (e.g., real-world).
arXiv Detail & Related papers (2023-08-28T14:43:36Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
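For a rough picture of the MMD term mentioned above, the following is a generic RBF-kernel Maximum Mean Discrepancy between two feature batches (e.g., memory-bank source-like features and target-specific features). The function name, kernel, and bandwidth are assumptions and this is not DaC's exact memory-bank implementation.
```python
# Generic (biased) RBF-kernel MMD^2 estimate between two feature batches.
# A sketch only; not the memory-bank variant used in DaC.
import torch

def rbf_mmd2(x, y, sigma=1.0):
    """x: (n, d), y: (m, d) feature tensors; returns a scalar MMD^2 estimate."""
    def k(a, b):
        d2 = torch.cdist(a, b, p=2).pow(2)          # pairwise squared distances
        return torch.exp(-d2 / (2.0 * sigma ** 2))  # RBF kernel values
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

# e.g. loss_mmd = rbf_mmd2(source_like_bank_feats, target_specific_feats)
```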
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
- Meta-DMoE: Adapting to Domain Shift by Meta-Distillation from Mixture-of-Experts [33.21435044949033]
Most existing methods perform training on multiple source domains using a single model.
We propose a novel framework for unsupervised test-time adaptation, which is formulated as a knowledge distillation process.
arXiv Detail & Related papers (2022-10-08T02:28:10Z)
- Connecting adversarial attacks and optimal transport for domain adaptation [116.50515978657002]
In domain adaptation, the goal is to adapt a classifier trained on the source domain samples to the target domain.
In our method, we use optimal transport to map target samples to a domain named source fiction.
Our main idea is to generate the source fiction via a c-cyclically monotone transformation over the target domain.
arXiv Detail & Related papers (2022-05-30T20:45:55Z)
- Low-confidence Samples Matter for Domain Adaptation [47.552605279925736]
Domain adaptation (DA) aims to transfer knowledge from a label-rich source domain to a related but label-scarce target domain.
We propose a novel contrastive learning method by processing low-confidence samples.
We evaluate the proposed method in both unsupervised and semi-supervised DA settings.
arXiv Detail & Related papers (2022-02-06T15:45:45Z)
- On Universal Black-Box Domain Adaptation [53.7611757926922]
We study an arguably least restrictive setting of domain adaptation for practical deployment.
Only the interface of the source model is available to the target domain, and the label-space relations between the two domains are allowed to be different and unknown.
We propose to unify them into a self-training framework, regularized by consistency of predictions in local neighborhoods of target samples.
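As one way to picture "consistency of predictions in local neighborhoods", the sketch below penalizes the divergence between each target sample's prediction and the predictions of its nearest neighbors in feature space. The function, the choice of k, and the KL form are assumptions, not the paper's exact regularizer.
```python
# Generic neighborhood-consistency regularizer for target samples (a sketch).
import torch
import torch.nn.functional as F

def neighborhood_consistency(features, logits, k=4):
    """features: (n, d) target features, logits: (n, C) model outputs."""
    probs = F.softmax(logits, dim=1)
    log_probs = F.log_softmax(logits, dim=1)

    with torch.no_grad():                       # neighbors via cosine similarity
        f = F.normalize(features, dim=1)
        sim = f @ f.t()
        sim.fill_diagonal_(-1.0)                # exclude the sample itself
        nn_idx = sim.topk(k, dim=1).indices     # (n, k)

    neighbor_probs = probs[nn_idx].detach()     # (n, k, C)
    # KL(neighbors || sample), averaged over the batch
    return F.kl_div(log_probs.unsqueeze(1).expand_as(neighbor_probs),
                    neighbor_probs, reduction="batchmean")
```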
arXiv Detail & Related papers (2021-04-10T02:21:09Z)
- Alleviating Semantic-level Shift: A Semi-supervised Domain Adaptation Method for Semantic Segmentation [97.8552697905657]
A key challenge of this task is how to alleviate the data distribution discrepancy between the source and target domains.
We propose Alleviating Semantic-level Shift (ASS), which can successfully promote the distribution consistency from both global and local views.
We apply our ASS to two domain adaptation tasks, from GTA5 to Cityscapes and from Synthia to Cityscapes.
arXiv Detail & Related papers (2020-04-02T03:25:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.