Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training
for Road Segmentation of Remote Sensing Images
- URL: http://arxiv.org/abs/2108.12611v1
- Date: Sat, 28 Aug 2021 09:29:14 GMT
- Authors: Lefei Zhang, Meng Lan, Jing Zhang, Dacheng Tao
- Abstract summary: Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experimental results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Road segmentation from remote sensing images is a challenging task with a
wide range of potential applications. Deep neural networks have advanced this field
by leveraging the power of large-scale labeled data, which, however, are
extremely expensive and time-consuming to acquire. One solution is to train a model
on cheaply available data and deploy it to directly process the data from
a specific application domain. Nevertheless, the well-known domain shift (DS)
issue prevents the trained model from generalizing well on the target domain.
In this paper, we propose a novel stagewise domain adaptation model called
RoadDA to address the DS issue in this field. In the first stage, RoadDA adapts
the target domain features to align with the source ones via generative
adversarial networks (GAN) based inter-domain adaptation. Specifically, a
feature pyramid fusion module is devised to avoid information loss of long and
thin roads and learn discriminative and robust features. Besides, to address
the intra-domain discrepancy in the target domain, in the second stage, we
propose an adversarial self-training method. We generate pseudo labels for the
target domain using the trained generator and divide the target images into a
labeled easy split and an unlabeled hard split based on their road confidence
scores. The features of
hard split are adapted to align with the easy ones using adversarial learning
and the intra-domain adaptation process is repeated to progressively improve
the segmentation performance. Experimental results on two benchmarks demonstrate
that RoadDA can efficiently reduce the domain gap and outperforms
state-of-the-art methods.
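The core of RoadDA's second stage is partitioning target-domain images into an easy split (kept with pseudo labels) and a hard split (left unlabeled) by road confidence. The sketch below illustrates one plausible realization of that split; the function name, the 0.5 binarization cutoff, the mean-probability confidence measure, and the 0.7 threshold are all illustrative assumptions, since the abstract states only that the division is "based on the road confidence scores".

```python
import numpy as np

def split_easy_hard(prob_maps, threshold=0.7):
    """Split target images into an 'easy' split (pseudo-labeled) and a
    'hard' split (unlabeled), ranked by road-prediction confidence.

    prob_maps: list of (H, W) arrays of per-pixel road probabilities.
    `threshold` is a hypothetical cutoff, not a value from the paper.
    """
    easy, hard = [], []
    for idx, prob in enumerate(prob_maps):
        pseudo_label = (prob > 0.5).astype(np.uint8)  # binarize road mask
        road_probs = prob[pseudo_label == 1]
        # image-level confidence: mean probability over predicted road pixels
        confidence = float(road_probs.mean()) if road_probs.size else 0.0
        bucket = easy if confidence >= threshold else hard
        bucket.append((idx, pseudo_label, confidence))
    return easy, hard
```

The hard split's features would then be adversarially aligned with the easy split's, and the split is recomputed each round as the segmenter improves.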
Related papers
- Contrastive Adversarial Training for Unsupervised Domain Adaptation [2.432037584128226]
Domain adversarial training has been successfully adopted for various domain adaptation tasks.
Large models make adversarial training easily biased towards the source domain and hard to adapt to the target domain.
We propose contrastive adversarial training (CAT) approach that leverages the labeled source domain samples to reinforce and regulate the feature generation for target domain.
arXiv Detail & Related papers (2024-07-17T17:59:21Z) - Improve Cross-domain Mixed Sampling with Guidance Training for Adaptive Segmentation [9.875170018805768]
Unsupervised Domain Adaptation (UDA) endeavors to adjust models trained on a source domain to perform well on a target domain without requiring additional annotations.
We propose a novel auxiliary task called Guidance Training.
This task facilitates the effective utilization of cross-domain mixed sampling techniques while mitigating distribution shifts from the real world.
We demonstrate the efficacy of our approach by integrating it with existing methods, consistently improving performance.
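Cross-domain mixed sampling, which this entry builds on, composes training images by pasting class regions from a labeled source image onto a target image. The following minimal sketch shows the basic mixing step in the DACS style; the function name and single-class paste mask are illustrative assumptions, and the paper's guidance-training auxiliary task is not shown.

```python
import numpy as np

def cross_domain_mix(src_img, src_label, tgt_img, tgt_pseudo, mix_class=1):
    """Paste all pixels of `mix_class` from a labeled source image onto a
    target image, mixing the label maps the same way.

    src_img/tgt_img: (H, W, C) arrays; src_label/tgt_pseudo: (H, W) arrays.
    """
    mask = src_label == mix_class                     # (H, W) paste mask
    mixed_img = np.where(mask[..., None], src_img, tgt_img)
    mixed_label = np.where(mask, src_label, tgt_pseudo)
    return mixed_img, mixed_label
```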
arXiv Detail & Related papers (2024-03-22T07:12:48Z) - Threshold-adaptive Unsupervised Focal Loss for Domain Adaptation of
Semantic Segmentation [25.626882426111198]
Unsupervised domain adaptation (UDA) for semantic segmentation has recently gained increasing research attention.
In this paper, we propose a novel two-stage entropy-based UDA method for semantic segmentation.
Our method achieves state-of-the-art 58.4% and 59.6% mIoUs on SYNTHIA-to-Cityscapes and GTA5-to-Cityscapes using DeepLabV2 and competitive performance using the lightweight BiSeNet.
arXiv Detail & Related papers (2022-08-23T03:48:48Z) - Target and Task specific Source-Free Domain Adaptive Image Segmentation [73.78898054277538]
We propose a two-stage approach for source-free domain adaptive image segmentation.
We focus on generating target-specific pseudo labels while suppressing high entropy regions.
In the second stage, we focus on adapting the network for task-specific representation.
arXiv Detail & Related papers (2022-03-29T17:50:22Z) - Low-confidence Samples Matter for Domain Adaptation [47.552605279925736]
Domain adaptation (DA) aims to transfer knowledge from a label-rich source domain to a related but label-scarce target domain.
We propose a novel contrastive learning method by processing low-confidence samples.
We evaluate the proposed method in both unsupervised and semi-supervised DA settings.
arXiv Detail & Related papers (2022-02-06T15:45:45Z) - Contrastive Learning and Self-Training for Unsupervised Domain
Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
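A memory-efficient temporal ensemble of the kind this entry describes can be realized by keeping one running probability map per target image and smoothing it with an exponential moving average before taking pseudo-labels. The sketch below shows that idea; the function name and the momentum value are illustrative assumptions, not details from the paper.

```python
import numpy as np

def update_ensemble(ema_probs, new_probs, momentum=0.9):
    """Update the per-image temporal ensemble with an exponential moving
    average of the network's current softmax predictions; pseudo-labels
    are the argmax of the smoothed map.

    ema_probs/new_probs: (H, W, num_classes) probability arrays.
    """
    ema = momentum * ema_probs + (1.0 - momentum) * new_probs
    pseudo = ema.argmax(axis=-1)  # (H, W) class indices
    return ema, pseudo
```

Storing one averaged probability map per image avoids keeping a history of predictions, which is what makes the ensemble memory-efficient.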
arXiv Detail & Related papers (2021-05-05T11:55:53Z) - Domain Adaptation for Semantic Segmentation via Patch-Wise Contrastive
Learning [62.7588467386166]
We leverage contrastive learning to bridge the domain gap by aligning the features of structurally similar label patches across domains.
Our approach consistently outperforms state-of-the-art unsupervised and semi-supervised methods on two challenging domain adaptive segmentation tasks.
arXiv Detail & Related papers (2021-04-22T13:39:12Z) - Effective Label Propagation for Discriminative Semi-Supervised Domain
Adaptation [76.41664929948607]
Semi-supervised domain adaptation (SSDA) methods have demonstrated great potential in large-scale image classification tasks.
We present a novel and effective method to tackle this problem by using effective inter-domain and intra-domain semantic information propagation.
Our source code and pre-trained models will be released soon.
arXiv Detail & Related papers (2020-12-04T14:28:19Z) - Unsupervised Intra-domain Adaptation for Semantic Segmentation through
Self-Supervision [73.76277367528657]
Convolutional neural network-based approaches have achieved remarkable progress in semantic segmentation, but rely on costly pixel-level annotations.
To cope with this limitation, automatically annotated data generated from graphics engines are used to train segmentation models.
We propose a two-step self-supervised domain adaptation approach to minimize the inter-domain and intra-domain gap together.
arXiv Detail & Related papers (2020-04-16T15:24:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.