ADeLA: Automatic Dense Labeling with Attention for Viewpoint Adaptation
in Semantic Segmentation
- URL: http://arxiv.org/abs/2107.14285v1
- Date: Thu, 29 Jul 2021 19:10:18 GMT
- Authors: Yanchao Yang, Hanxiang Ren, He Wang, Bokui Shen, Qingnan Fan, Youyi
Zheng, C. Karen Liu and Leonidas Guibas
- Abstract summary: We describe an unsupervised domain adaptation method for image content shift caused by viewpoint changes for a semantic segmentation task.
Our method works without aligning any statistics of the images between the two domains.
It utilizes a view transformation network trained only on color images to hallucinate the semantic images for the target.
- Score: 27.69348820877977
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We describe an unsupervised domain adaptation method for image content shift
caused by viewpoint changes for a semantic segmentation task. Most existing
methods perform domain alignment in a shared space and assume that the mapping
from the aligned space to the output is transferable. However, the novel
content induced by viewpoint changes may nullify such a space for effective
alignments, thus resulting in negative adaptation. Our method works without
aligning any statistics of the images between the two domains. Instead, it
utilizes a view transformation network trained only on color images to
hallucinate the semantic images for the target. Despite the lack of
supervision, the view transformation network can still generalize to semantic
images thanks to the inductive bias introduced by the attention mechanism.
Furthermore, to resolve ambiguities in converting the semantic images to
semantic labels, we treat the view transformation network as a functional
representation of an unknown mapping implied by the color images and propose
functional label hallucination to generate pseudo-labels in the target domain.
Our method surpasses baselines built on state-of-the-art correspondence
estimation and view synthesis methods. Moreover, it outperforms the
state-of-the-art unsupervised domain adaptation methods that utilize
self-training and adversarial domain alignment. Our code and dataset will be
made publicly available.
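The functional label hallucination idea above can be sketched as follows: treat each semantic class as a separate image plane, push every plane through the view transformation network (which was trained only on color images), and take a per-pixel argmax to obtain pseudo-labels in the target view. This is a minimal illustration, not the authors' implementation; the network is stood in for by a toy fixed shift, and the helper names are invented for this sketch.

```python
import numpy as np

def one_hot(labels, num_classes):
    """Convert an (H, W) integer label map into (num_classes, H, W) one-hot planes."""
    return (labels[None, :, :] == np.arange(num_classes)[:, None, None]).astype(np.float32)

def functional_label_hallucination(labels_src, view_transform, num_classes):
    """Sketch of functional label hallucination: apply the (color-trained)
    view transformation to each semantic plane, then resolve ambiguities
    with a per-pixel argmax over the warped class scores."""
    planes = one_hot(labels_src, num_classes)
    warped = np.stack([view_transform(p) for p in planes])  # (C, H, W) soft class scores
    return warped.argmax(axis=0)  # pseudo-labels in the target viewpoint

# Toy stand-in for the trained attention network: a fixed horizontal shift.
def toy_view_transform(plane):
    return np.roll(plane, shift=2, axis=1)

labels = np.zeros((4, 8), dtype=np.int64)
labels[:, :3] = 1  # a band of class 1 on the left
pseudo = functional_label_hallucination(labels, toy_view_transform, num_classes=2)
```

In the real method the per-plane outputs are soft, so the argmax is what converts hallucinated semantic *images* back into discrete labels.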
Related papers
- Unsupervised Domain Adaptation for Semantic Segmentation using One-shot
Image-to-Image Translation via Latent Representation Mixing [9.118706387430883]
We propose a new unsupervised domain adaptation method for the semantic segmentation of very high resolution images.
An image-to-image translation paradigm is proposed, based on an encoder-decoder principle where latent content representations are mixed across domains.
Cross-city comparative experiments have shown that the proposed method outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2022-12-07T18:16:17Z) - Location-Aware Self-Supervised Transformers [74.76585889813207]
We propose to pretrain networks for semantic segmentation by predicting the relative location of image parts.
We control the difficulty of the task by masking a subset of the reference patch features that are visible to the query.
Our experiments show that this location-aware pretraining leads to representations that transfer competitively to several challenging semantic segmentation benchmarks.
arXiv Detail & Related papers (2022-12-05T16:24:29Z) - Semi-supervised domain adaptation with CycleGAN guided by a downstream
task loss [4.941630596191806]
Domain adaptation is of huge interest as labeling is an expensive and error-prone task.
Image-to-image approaches can be used to mitigate the shift in the input.
We propose a "task aware" version of a GAN in an image-to-image domain adaptation approach.
arXiv Detail & Related papers (2022-08-18T13:13:30Z) - DecoupleNet: Decoupled Network for Domain Adaptive Semantic Segmentation [78.30720731968135]
Unsupervised domain adaptation in semantic segmentation has been proposed to alleviate the reliance on expensive pixel-wise annotations.
We propose DecoupleNet that alleviates source domain overfitting and enables the final model to focus more on the segmentation task.
We also put forward Self-Discrimination (SD) and introduce an auxiliary classifier to learn more discriminative target domain features with pseudo labels.
arXiv Detail & Related papers (2022-07-20T15:47:34Z) - SGDR: Semantic-guided Disentangled Representation for Unsupervised
Cross-modality Medical Image Segmentation [5.090366802287405]
We propose a novel framework, called semantic-guided disentangled representation (SGDR), to extract semantically meaningful features for the segmentation task.
We validated our method on two public datasets, and experimental results show that our approach outperforms state-of-the-art methods on two evaluation metrics by a significant margin.
arXiv Detail & Related papers (2022-03-26T08:31:00Z) - Semantic Consistency in Image-to-Image Translation for Unsupervised
Domain Adaptation [22.269565708490465]
Unsupervised Domain Adaptation (UDA) aims to adapt models trained on a source domain to a new target domain where no labelled data is available.
We propose a semantically consistent image-to-image translation method in combination with a consistency regularisation method for UDA.
arXiv Detail & Related papers (2021-11-05T14:22:20Z) - Semantically Adaptive Image-to-image Translation for Domain Adaptation
of Semantic Segmentation [1.8275108630751844]
We address the problem of domain adaptation for semantic segmentation of street scenes.
Many state-of-the-art approaches focus on translating the source image while imposing that the result should be semantically consistent with the input.
We advocate that the image semantics can also be exploited to guide the translation algorithm.
arXiv Detail & Related papers (2020-09-02T16:16:50Z) - Cross-domain Correspondence Learning for Exemplar-based Image
Translation [59.35767271091425]
We present a framework for exemplar-based image translation, which synthesizes a photo-realistic image from the input in a distinct domain.
The output has the style (e.g., color, texture) in consistency with the semantically corresponding objects in the exemplar.
We show that our method significantly outperforms state-of-the-art methods in terms of image quality.
arXiv Detail & Related papers (2020-04-12T09:10:57Z) - Phase Consistent Ecological Domain Adaptation [76.75730500201536]
We focus on the task of semantic segmentation, where annotated synthetic data are plentiful but annotating real data is laborious.
The first criterion, inspired by visual psychophysics, is that the map between the two image domains be phase-preserving.
The second criterion aims to leverage ecological statistics, or regularities in the scene which are manifest in any image of it, regardless of the characteristics of the illuminant or the imaging sensor.
arXiv Detail & Related papers (2020-04-10T06:58:03Z) - Semantic Image Manipulation Using Scene Graphs [105.03614132953285]
We introduce a semantic scene graph network that does not require direct supervision for constellation changes or image edits.
This makes it possible to train the system from existing real-world datasets with no additional annotation effort.
arXiv Detail & Related papers (2020-04-07T20:02:49Z) - CrDoCo: Pixel-level Domain Transfer with Cross-Domain Consistency [119.45667331836583]
Unsupervised domain adaptation algorithms aim to transfer the knowledge learned from one domain to another.
We present a novel pixel-wise adversarial domain adaptation algorithm.
arXiv Detail & Related papers (2020-01-09T19:00:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.