Super-Resolution Domain Adaptation Networks for Semantic Segmentation
via Pixel and Output Level Aligning
- URL: http://arxiv.org/abs/2005.06382v4
- Date: Fri, 13 May 2022 09:09:45 GMT
- Title: Super-Resolution Domain Adaptation Networks for Semantic Segmentation
via Pixel and Output Level Aligning
- Authors: Junfeng Wu, Zhenjie Tang, Congan Xu, Enhai Liu, Long Gao, Wenjun Yan
- Abstract summary: This paper designs a novel end-to-end semantic segmentation network, namely Super-Resolution Domain Adaptation Network (SRDA-Net)
SRDA-Net can simultaneously achieve the super-resolution task and the domain adaptation task, thus satisfying the requirement of semantic segmentation for remote sensing images.
Experimental results on two remote sensing datasets with different resolutions demonstrate that SRDA-Net performs favorably against some state-of-the-art methods.
- Score: 4.500622871756055
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, Unsupervised Domain Adaptation (UDA) has attracted increasing
attention to address the domain shift problem in the semantic segmentation
task. Although previous UDA methods have achieved promising performance, they
still suffer from the distribution gaps between source and target domains,
especially the resolution discrepancy in remote sensing images. To address
this problem, this paper designs a novel end-to-end semantic segmentation
network, namely Super-Resolution Domain Adaptation Network (SRDA-Net). SRDA-Net
can simultaneously achieve the super-resolution task and the domain adaptation
task, thus satisfying the requirement of semantic segmentation for remote
sensing images, which usually involve images of various resolutions. The proposed
SRDA-Net includes three parts: a Super-Resolution and Segmentation (SRS) model
which focuses on recovering the high-resolution image and predicting the segmentation
map, a Pixel-level Domain Classifier (PDC) for determining which domain the
pixel belongs to, and an Output-space Domain Classifier (ODC) for
distinguishing which domain the pixel contribution is from. By jointly
optimizing SRS with two classifiers, the proposed method can not only
eliminate the resolution difference between source and target domains, but
also improve the performance of the semantic segmentation task. Experimental
results on two remote sensing datasets with different resolutions demonstrate
that SRDA-Net performs favorably against some state-of-the-art methods in terms
of accuracy and visual quality. Code and models are available at
https://github.com/tangzhenjie/SRDA-Net.
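The abstract above describes one generator (the SRS model) trained jointly against two domain classifiers (PDC and ODC). The following PyTorch-style sketch shows how such a generator-side update could be wired together; it is a minimal illustration under assumed interfaces (srs returning a recovered image and segmentation logits, pdc and odc returning domain logits, with 0 = source and 1 = target), using the generic adversarial-UDA losses rather than the paper's exact formulation.

import torch
import torch.nn.functional as F

def srs_update(srs, pdc, odc, opt_srs,
               src_img, src_label, src_hr, tgt_img,
               lam_pix=0.01, lam_out=0.01):
    """One generator-side step: supervised losses on the source domain plus
    adversarial terms that push target outputs to look source-like."""
    opt_srs.zero_grad()

    # Source domain: supervised super-resolution + segmentation losses.
    src_sr, src_seg = srs(src_img)
    loss_seg = F.cross_entropy(src_seg, src_label)    # segmentation supervision
    loss_sr = F.l1_loss(src_sr, src_hr)               # reconstruction term (assumed L1)

    # Target domain: no labels; try to fool the two domain classifiers.
    tgt_sr, tgt_seg = srs(tgt_img)
    pix_logits = pdc(tgt_sr)                          # pixel-level domain logits
    loss_adv_pix = F.binary_cross_entropy_with_logits(
        pix_logits, torch.zeros_like(pix_logits))     # label target pixels as "source"
    out_logits = odc(tgt_seg.softmax(dim=1))          # output-space domain logits
    loss_adv_out = F.binary_cross_entropy_with_logits(
        out_logits, torch.zeros_like(out_logits))

    loss = loss_seg + loss_sr + lam_pix * loss_adv_pix + lam_out * loss_adv_out
    loss.backward()
    opt_srs.step()
    return loss.item()

A complementary sketch of the classifier-side step, in which PDC and ODC learn to tell the two domains apart, is given after the related-papers list below.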
Related papers
- Self-Training Guided Disentangled Adaptation for Cross-Domain Remote
Sensing Image Semantic Segmentation [20.07907723950031]
We propose a self-training guided disentangled adaptation network (ST-DASegNet) for cross-domain RS image semantic segmentation task.
We first propose a source student backbone and a target student backbone to extract the source-style and target-style features, respectively, for both source and target images.
We then propose a domain disentangled module to extract the universal features and purify the distinct features from the source-style and target-style features.
arXiv Detail & Related papers (2023-01-13T13:11:22Z) - I2F: A Unified Image-to-Feature Approach for Domain Adaptive Semantic
Segmentation [55.633859439375044]
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work.
The key idea to tackle this problem is to perform image-level and feature-level adaptation jointly.
This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation.
arXiv Detail & Related papers (2023-01-03T15:19:48Z) - Unsupervised Domain Adaptation for Semantic Segmentation using One-shot
Image-to-Image Translation via Latent Representation Mixing [9.118706387430883]
We propose a new unsupervised domain adaptation method for the semantic segmentation of very high resolution images.
An image-to-image translation paradigm is proposed, based on an encoder-decoder principle where latent content representations are mixed across domains.
Cross-city comparative experiments have shown that the proposed method outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2022-12-07T18:16:17Z) - Unsupervised domain adaptation semantic segmentation of high-resolution
remote sensing imagery with invariant domain-level context memory [10.210120085157161]
This study proposes a novel unsupervised domain adaptation network (MemoryAdaptNet) for the semantic segmentation of high-resolution remote sensing (HRS) imagery.
MemoryAdaptNet constructs an output space adversarial learning scheme to bridge the domain distribution discrepancy between source domain and target domain.
Experiments under three cross-domain tasks indicate that our proposed MemoryAdaptNet is remarkably superior to the state-of-the-art methods.
arXiv Detail & Related papers (2022-08-16T12:35:57Z) - DecoupleNet: Decoupled Network for Domain Adaptive Semantic Segmentation [78.30720731968135]
Unsupervised domain adaptation in semantic segmentation has been proposed to alleviate the reliance on expensive pixel-wise annotations.
We propose DecoupleNet that alleviates source domain overfitting and enables the final model to focus more on the segmentation task.
We also put forward Self-Discrimination (SD) and introduce an auxiliary classifier to learn more discriminative target domain features with pseudo labels.
arXiv Detail & Related papers (2022-07-20T15:47:34Z) - AF$_2$: Adaptive Focus Framework for Aerial Imagery Segmentation [86.44683367028914]
Aerial imagery segmentation poses some unique challenges, the most critical of which is foreground-background imbalance.
We propose the Adaptive Focus Framework (AF$_2$), which adopts a hierarchical segmentation procedure and focuses on adaptively utilizing multi-scale representations.
AF$_2$ significantly improves accuracy on three widely used aerial benchmarks while remaining as fast as mainstream methods.
arXiv Detail & Related papers (2022-02-18T10:14:45Z) - AFAN: Augmented Feature Alignment Network for Cross-Domain Object
Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z) - More Separable and Easier to Segment: A Cluster Alignment Method for
Cross-Domain Semantic Segmentation [41.81843755299211]
We propose a new UDA semantic segmentation approach based on the domain closeness assumption to alleviate these problems.
Specifically, a prototype clustering strategy is applied to cluster pixels with the same semantic class, which better maintains associations among target-domain pixels.
Experiments conducted on GTA5 and SYNTHIA proved the effectiveness of our method.
arXiv Detail & Related papers (2021-05-07T10:24:18Z) - Pixel-Level Cycle Association: A New Perspective for Domain Adaptive
Semantic Segmentation [169.82760468633236]
We propose to build the pixel-level cycle association between source and target pixel pairs.
Our method can be trained end-to-end in one stage and introduces no additional parameters.
arXiv Detail & Related papers (2020-10-31T00:11:36Z) - Affinity Space Adaptation for Semantic Segmentation Across Domains [57.31113934195595]
In this paper, we address the problem of unsupervised domain adaptation (UDA) in semantic segmentation.
Motivated by the fact that the source and target domains share invariant semantic structures, we propose to exploit such invariance across domains.
We develop two affinity space adaptation strategies: affinity space cleaning and adversarial affinity space alignment.
arXiv Detail & Related papers (2020-09-26T10:28:11Z)
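Several of the entries above (the ODC in SRDA-Net, the output-space adversarial scheme in MemoryAdaptNet) rely on the same output-space alignment pattern: a domain classifier is trained to separate source and target segmentation maps, while the segmentation network is trained to confuse it. As a counterpart to the generator-side sketch earlier, here is a minimal, generic classifier-side update; all module names and the 0 = source / 1 = target convention are assumptions, not the published implementations.

import torch
import torch.nn.functional as F

def domain_classifier_update(srs, pdc, odc, opt_disc, src_img, tgt_img):
    """Classifier-side step: PDC and ODC learn to distinguish the domains."""
    opt_disc.zero_grad()

    with torch.no_grad():                  # the generator is frozen in this step
        src_sr, src_seg = srs(src_img)
        tgt_sr, tgt_seg = srs(tgt_img)

    def bce(logits, value):
        # Binary cross-entropy against a constant domain label.
        return F.binary_cross_entropy_with_logits(
            logits, torch.full_like(logits, value))

    # Pixel-level classifier judges the recovered images.
    loss_pdc = bce(pdc(src_sr), 0.0) + bce(pdc(tgt_sr), 1.0)
    # Output-space classifier judges the softmax segmentation maps.
    loss_odc = (bce(odc(src_seg.softmax(dim=1)), 0.0)
                + bce(odc(tgt_seg.softmax(dim=1)), 1.0))

    loss = loss_pdc + loss_odc
    loss.backward()
    opt_disc.step()
    return loss.item()

In practice the two steps alternate, so the classifiers sharpen the domain boundary while the segmentation network gradually removes it in both the image space and the output space.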