Label-Driven Reconstruction for Domain Adaptation in Semantic
Segmentation
- URL: http://arxiv.org/abs/2003.04614v3
- Date: Sun, 23 Aug 2020 16:23:23 GMT
- Title: Label-Driven Reconstruction for Domain Adaptation in Semantic
Segmentation
- Authors: Jinyu Yang, Weizhi An, Sheng Wang, Xinliang Zhu, Chaochao Yan, Junzhou
Huang
- Abstract summary: Unsupervised domain adaptation alleviates the need for pixel-wise annotation in semantic segmentation.
One of the most common strategies is to translate images from the source domain to the target domain and then align their marginal distributions in the feature space using adversarial learning.
Here, we present an innovative framework designed to mitigate the image translation bias and align cross-domain features of the same category.
- Score: 43.09068177612067
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation alleviates the need for pixel-wise
annotation in semantic segmentation. One of the most common strategies is
to translate images from the source domain to the target domain and then align
their marginal distributions in the feature space using adversarial learning.
However, source-to-target translation enlarges the bias in the translated images
and introduces extra computation, owing to the dominant data size of the
source domain. Furthermore, consistency of the joint distribution in the source and
target domains cannot be guaranteed through global feature alignment. Here, we
present an innovative framework designed to mitigate the image translation
bias and align cross-domain features of the same category. This is achieved
by 1) performing target-to-source translation and 2) reconstructing both
source and target images from their predicted labels. Extensive experiments on
adapting from synthetic to real urban scene understanding demonstrate that our
framework competes favorably against existing state-of-the-art methods.
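To make the two ideas above concrete, the following is a minimal PyTorch-style sketch of the label-driven reconstruction step: a small decoder maps predicted per-pixel class probabilities back to an image, and an L1 loss ties each image (a source image, or a target image after target-to-source translation) to the image reconstructed from its own predicted labels. The decoder architecture, the choice of L1 loss, and all names (`LabelToImageDecoder`, `segmenter`, `tgt2src_img`) are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch (assumptions): names, layer sizes, and the L1 loss are
# illustrative choices, not the paper's exact architecture or objective.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LabelToImageDecoder(nn.Module):
    """Reconstructs an RGB image from per-pixel class probabilities."""

    def __init__(self, num_classes: int, width: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(width, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(width, 3, kernel_size=3, padding=1),
            nn.Tanh(),  # assumes images are normalized to [-1, 1]
        )

    def forward(self, label_probs: torch.Tensor) -> torch.Tensor:
        return self.net(label_probs)


def label_driven_reconstruction_loss(segmenter, decoder, src_img, tgt2src_img):
    """Reconstruct both the source image and the target-to-source translated
    image from their predicted labels, and penalize the pixel-wise error."""
    loss = 0.0
    for img in (src_img, tgt2src_img):
        probs = F.softmax(segmenter(img), dim=1)  # predicted labels, B x C x H x W
        recon = decoder(probs)                    # image rebuilt from labels only
        loss = loss + F.l1_loss(recon, img)       # reconstruction penalty
    return loss
```

In a full training loop, this term would be added to the usual supervised segmentation loss on source data and the adversarial feature-alignment losses mentioned in the abstract.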
Related papers
- Diffusion-based Image Translation with Label Guidance for Domain
Adaptive Semantic Segmentation [35.44771460784343]
Translating images from a source domain to a target domain for learning target models is one of the most common strategies in domain adaptive semantic segmentation (DASS).
Existing methods still struggle to preserve semantically-consistent local details between the original and translated images.
We present an innovative approach that addresses this challenge by using source-domain labels as explicit guidance during image translation.
arXiv Detail & Related papers (2023-08-23T18:01:01Z) - Conditional Score Guidance for Text-Driven Image-to-Image Translation [52.73564644268749]
We present a novel algorithm for text-driven image-to-image translation based on a pretrained text-to-image diffusion model.
Our method aims to generate a target image by selectively editing the regions of interest in a source image.
arXiv Detail & Related papers (2023-05-29T10:48:34Z) - I2F: A Unified Image-to-Feature Approach for Domain Adaptive Semantic
Segmentation [55.633859439375044]
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task that frees people from heavy annotation work.
The key idea to tackle this problem is to perform image-level and feature-level adaptation jointly.
This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation.
arXiv Detail & Related papers (2023-01-03T15:19:48Z) - Unsupervised Domain Adaptation for Semantic Segmentation using One-shot
Image-to-Image Translation via Latent Representation Mixing [9.118706387430883]
We propose a new unsupervised domain adaptation method for the semantic segmentation of very high resolution images.
An image-to-image translation paradigm is proposed, based on an encoder-decoder principle where latent content representations are mixed across domains.
Cross-city comparative experiments have shown that the proposed method outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2022-12-07T18:16:17Z) - Smooth image-to-image translations with latent space interpolations [64.8170758294427]
Multi-domain image-to-image (I2I) translations can transform a source image according to the style of a target domain.
We show that our regularization techniques can improve the state-of-the-art I2I translations by a large margin.
arXiv Detail & Related papers (2022-10-03T11:57:30Z) - Global and Local Alignment Networks for Unpaired Image-to-Image
Translation [170.08142745705575]
The goal of unpaired image-to-image translation is to produce an output image reflecting the target domain's style.
Because existing methods pay little attention to content changes, semantic information from source images degrades during translation.
We introduce a novel approach, Global and Local Alignment Networks (GLA-Net).
Our method effectively generates sharper and more realistic images than existing approaches.
arXiv Detail & Related papers (2021-11-19T18:01:54Z) - Deep Symmetric Adaptation Network for Cross-modality Medical Image
Segmentation [40.95845629932874]
Unsupervised domain adaptation (UDA) methods have shown their promising performance in the cross-modality medical image segmentation tasks.
We present a novel deep symmetric architecture of UDA for medical image segmentation, which consists of a segmentation sub-network and two symmetric source and target domain translation sub-networks.
Our method has remarkable advantages over state-of-the-art methods on both cross-modality cardiac and BraTS segmentation tasks.
arXiv Detail & Related papers (2021-01-18T02:54:30Z) - Consistency Regularization with High-dimensional Non-adversarial
Source-guided Perturbation for Unsupervised Domain Adaptation in Segmentation [15.428323201750144]
BiSIDA employs consistency regularization to efficiently exploit information from the unlabeled target dataset.
BiSIDA achieves a new state of the art on two commonly used synthetic-to-real domain adaptation benchmarks.
arXiv Detail & Related papers (2020-09-18T03:26:44Z) - CrDoCo: Pixel-level Domain Transfer with Cross-Domain Consistency [119.45667331836583]
Unsupervised domain adaptation algorithms aim to transfer the knowledge learned from one domain to another.
We present a novel pixel-wise adversarial domain adaptation algorithm.
arXiv Detail & Related papers (2020-01-09T19:00:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.