QuadFormer: Quadruple Transformer for Unsupervised Domain Adaptation in
Power Line Segmentation of Aerial Images
- URL: http://arxiv.org/abs/2211.16988v1
- Date: Tue, 29 Nov 2022 03:15:27 GMT
- Authors: Pratyaksh Prabhav Rao, Feng Qiao, Weide Zhang, Yiliang Xu, Yong Deng,
Guangbin Wu, Qiang Zhang
- Abstract summary: We propose a novel framework designed for domain adaptive semantic segmentation.
The hierarchical quadruple transformer combines cross-attention and self-attention mechanisms to adapt transferable context.
We present two datasets - ARPLSyn and ARPLReal - to further advance research in unsupervised domain adaptive powerline segmentation.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate segmentation of power lines in aerial images is essential to ensure
the flight safety of aerial vehicles. Acquiring high-quality ground truth
annotations for training a deep learning model is a laborious process.
Therefore, algorithms that can transfer knowledge from labelled
synthetic data to unlabelled real images are in high demand; this problem is
studied as unsupervised domain adaptation (UDA). Recent self-training
approaches, which train a model with pseudo labels on the target domain, have
achieved remarkable performance in UDA for semantic segmentation.
However, the pseudo labels are noisy because of the discrepancy between the two
data distributions. We identify context dependency as important for bridging
this domain gap. Motivated by this, we propose QuadFormer, a novel framework
designed for domain adaptive semantic segmentation. The hierarchical quadruple
transformer combines cross-attention and self-attention mechanisms to adapt
transferable context. Based on cross-attentive and self-attentive feature
representations, we introduce a pseudo-label correction scheme that denoises
the pseudo labels online and reduces the domain gap. Additionally, we present
two datasets - ARPLSyn and ARPLReal - to further advance research in unsupervised
domain adaptive powerline segmentation. Finally, experimental results indicate
that our method achieves state-of-the-art performance for the domain adaptive
power line segmentation on ARPLSyn$\rightarrow$TTTPLA and
ARPLSyn$\rightarrow$ARPLReal.
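The core mechanism the abstract describes - target features attending to source features (cross-attention) alongside ordinary self-attention - can be sketched in a few lines of NumPy. This is a hedged illustration of the general idea only, not the paper's architecture; all function names, shapes, and the simple averaging of the two context views are invented for the example.

```python
import numpy as np

def scaled_dot_attention(q, k, v):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d)) V."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                 # (Nq, Nk) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                            # (Nq, d) attended context

def cross_and_self_attend(target_tokens, source_tokens):
    """Target tokens attend to source tokens (cross-attention) and to
    themselves (self-attention); the two context views are averaged here
    purely for illustration."""
    cross = scaled_dot_attention(target_tokens, source_tokens, source_tokens)
    self_ = scaled_dot_attention(target_tokens, target_tokens, target_tokens)
    return 0.5 * (cross + self_)

rng = np.random.default_rng(0)
src = rng.normal(size=(16, 8))   # 16 source-domain tokens, dim 8
tgt = rng.normal(size=(12, 8))   # 12 target-domain tokens, dim 8
ctx = cross_and_self_attend(tgt, src)
print(ctx.shape)  # (12, 8)
```

In the actual framework this mixing happens hierarchically across transformer stages; the sketch only shows the single-level attention arithmetic.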
Related papers
- AdaptDiff: Cross-Modality Domain Adaptation via Weak Conditional Semantic Diffusion for Retinal Vessel Segmentation
We present an unsupervised domain adaptation (UDA) method named AdaptDiff.
It enables a retinal vessel segmentation network trained on fundus photography (FP) to produce satisfactory results on unseen modalities.
Our results demonstrate a significant improvement in segmentation performance across all unseen datasets.
arXiv Detail & Related papers (2024-10-06T23:04:29Z)
- FPL+: Filtered Pseudo Label-based Unsupervised Cross-Modality Adaptation for 3D Medical Image Segmentation
We propose an enhanced Filtered Pseudo Label (FPL+)-based Unsupervised Domain Adaptation (UDA) method for 3D medical image segmentation.
It first uses cross-domain data augmentation to translate labeled images in the source domain to a dual-domain training set consisting of a pseudo source-domain set and a pseudo target-domain set.
We then combine labeled source-domain images and target-domain images with pseudo labels to train a final segmentor, where image-level weighting based on uncertainty estimation and pixel-level weighting based on dual-domain consensus are proposed to mitigate the adverse effect of noisy pseudo labels.
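The two weighting ideas in this summary can be sketched abstractly: a pixel-level weight from agreement between two predictions, and an image-level weight from mean prediction entropy. This is a minimal illustrative sketch, not the FPL+ implementation; the exact weighting functions and all names here are assumptions.

```python
import numpy as np

def pixel_consensus_weight(logits_a, logits_b):
    """Pixel-level weight: 1 where the two domain predictions agree on the
    class, 0 where they disagree (a hard version of dual-domain consensus)."""
    return (logits_a.argmax(0) == logits_b.argmax(0)).astype(float)

def image_uncertainty_weight(prob, eps=1e-8):
    """Image-level weight from mean per-pixel entropy: confident images
    (low entropy) get weights close to 1."""
    ent = -(prob * np.log(prob + eps)).sum(axis=0)  # (H, W) entropy map
    max_ent = np.log(prob.shape[0])                 # entropy of the uniform distribution
    return float(1.0 - ent.mean() / max_ent)

rng = np.random.default_rng(1)
logits_a = rng.normal(size=(3, 4, 4))               # 3 classes, toy 4x4 "image"
logits_b = logits_a + 0.1 * rng.normal(size=(3, 4, 4))
prob_a = np.exp(logits_a) / np.exp(logits_a).sum(0, keepdims=True)

w_pix = pixel_consensus_weight(logits_a, logits_b)  # (4, 4) values in {0, 1}
w_img = image_uncertainty_weight(prob_a)            # scalar in [0, 1]
```

A training loop would multiply the per-pixel pseudo-label loss by `w_pix` and scale each image's contribution by `w_img`.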
arXiv Detail & Related papers (2024-04-07T14:21:37Z)
- Threshold-adaptive Unsupervised Focal Loss for Domain Adaptation of Semantic Segmentation
Unsupervised domain adaptation (UDA) for semantic segmentation has recently gained increasing research attention.
In this paper, we propose a novel two-stage entropy-based UDA method for semantic segmentation.
Our method achieves state-of-the-art 58.4% and 59.6% mIoUs on SYNTHIA-to-Cityscapes and GTA5-to-Cityscapes using DeepLabV2 and competitive performance using the lightweight BiSeNet.
arXiv Detail & Related papers (2022-08-23T03:48:48Z)
- DecoupleNet: Decoupled Network for Domain Adaptive Semantic Segmentation
Unsupervised domain adaptation in semantic segmentation has been studied to alleviate the reliance on expensive pixel-wise annotations.
We propose DecoupleNet that alleviates source domain overfitting and enables the final model to focus more on the segmentation task.
We also put forward Self-Discrimination (SD) and introduce an auxiliary classifier to learn more discriminative target domain features with pseudo labels.
arXiv Detail & Related papers (2022-07-20T15:47:34Z)
- Target and Task specific Source-Free Domain Adaptive Image Segmentation
We propose a two-stage approach for source-free domain adaptive image segmentation.
In the first stage, we focus on generating target-specific pseudo labels while suppressing high-entropy regions.
In the second stage, we focus on adapting the network for task-specific representation.
arXiv Detail & Related papers (2022-03-29T17:50:22Z)
- Frequency Spectrum Augmentation Consistency for Domain Adaptive Object Detection
We introduce a Frequency Spectrum Augmentation Consistency (FSAC) framework with four different low-frequency filter operations.
In the first stage, we utilize all the original and augmented source data to train an object detector.
In the second stage, augmented source and target data with pseudo labels are adopted to perform the self-training for prediction consistency.
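A common way to build such frequency-domain augmentations is to swap the low-frequency FFT amplitudes of a source image with those of a target image while keeping the source phase (which carries content). The sketch below shows one such low-frequency operation; the radius, the amplitude/phase split, and the function name are illustrative assumptions, not FSAC's exact filters.

```python
import numpy as np

def low_freq_swap(src_img, tgt_img, radius=3):
    """Replace the low-frequency FFT amplitudes of a source image with those
    of a target image, keeping the source phase (content structure)."""
    fs = np.fft.fftshift(np.fft.fft2(src_img))
    ft = np.fft.fftshift(np.fft.fft2(tgt_img))
    amp_s, phase_s = np.abs(fs), np.angle(fs)
    amp_t = np.abs(ft)
    h, w = src_img.shape
    cy, cx = h // 2, w // 2
    # overwrite the centered low-frequency band of the source amplitude
    amp_s[cy - radius:cy + radius + 1, cx - radius:cx + radius + 1] = \
        amp_t[cy - radius:cy + radius + 1, cx - radius:cx + radius + 1]
    mixed = amp_s * np.exp(1j * phase_s)
    return np.real(np.fft.ifft2(np.fft.ifftshift(mixed)))

rng = np.random.default_rng(3)
src = rng.normal(size=(32, 32))
tgt = rng.normal(size=(32, 32))
aug = low_freq_swap(src, tgt, radius=3)  # source content, target low-freq style
```

Swapping an image's low frequencies with its own leaves it unchanged, which is a handy sanity check for this kind of augmentation.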
arXiv Detail & Related papers (2021-12-16T04:07:01Z)
- CDTrans: Cross-domain Transformer for Unsupervised Domain Adaptation
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a labeled source domain to a different unlabeled target domain.
One fundamental problem in category-level UDA is producing pseudo labels for samples in the target domain.
We design a two-way center-aware labeling algorithm to produce pseudo labels for target samples.
Along with the pseudo labels, a weight-sharing triple-branch transformer framework is proposed to apply self-attention and cross-attention for source/target feature learning and source-target domain alignment.
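Center-aware labeling, in its simplest form, assigns each target sample the class of its nearest source class center. The sketch below shows that one-way nearest-center step with cosine similarity; CDTrans's actual algorithm is two-way and more elaborate, and all names and toy data here are invented for illustration.

```python
import numpy as np

def class_centers(features, labels, num_classes):
    """Mean feature vector (center) per class over the labeled source set."""
    return np.stack([features[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def center_aware_pseudo_labels(target_feats, centers):
    """Assign each target sample the class of its nearest source center
    under cosine similarity."""
    f = target_feats / np.linalg.norm(target_feats, axis=1, keepdims=True)
    c = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    return (f @ c.T).argmax(axis=1)

rng = np.random.default_rng(4)
# toy source set: two well-separated classes around +1 and -1
src_feats = np.concatenate([rng.normal(0, 0.1, (20, 5)) + 1.0,
                            rng.normal(0, 0.1, (20, 5)) - 1.0])
src_labels = np.array([0] * 20 + [1] * 20)
centers = class_centers(src_feats, src_labels, num_classes=2)

tgt_feats = np.array([[0.9] * 5, [-1.1] * 5])
print(center_aware_pseudo_labels(tgt_feats, centers))  # [0 1]
```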
arXiv Detail & Related papers (2021-09-13T17:59:07Z)
- Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images
Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
- Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
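A temporal ensemble for pseudo-labeling keeps an exponential moving average (EMA) of the model's per-pixel class probabilities across training steps, so the pseudo label comes from the smoothed distribution rather than any single noisy prediction. The sketch below demonstrates that averaging effect on a single toy pixel; the EMA form is a standard construction and only a stand-in for the paper's memory-efficient variant.

```python
import numpy as np

def ema_update(ensemble_prob, new_prob, alpha=0.9):
    """Temporal ensemble: exponential moving average of class probabilities
    across training steps. Higher alpha = longer memory, smoother estimate."""
    return alpha * ensemble_prob + (1.0 - alpha) * new_prob

rng = np.random.default_rng(5)
num_classes, steps = 3, 50
true_prob = np.array([0.7, 0.2, 0.1])          # underlying "clean" prediction
ensemble = np.full(num_classes, 1.0 / num_classes)

for _ in range(steps):
    # each step's prediction is a noisy version of the clean distribution
    noisy = true_prob + 0.2 * rng.normal(size=num_classes)
    noisy = np.clip(noisy, 1e-6, None)
    noisy /= noisy.sum()
    ensemble = ema_update(ensemble, noisy, alpha=0.9)

pseudo_label = int(ensemble.argmax())  # the dominant class survives the noise
```

Individual steps can momentarily rank the wrong class highest; the EMA makes the final pseudo label stable and consistent across steps.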
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
- Semi-Supervised Domain Adaptation with Prototypical Alignment and Consistency Learning
This paper studies how much having a few labeled target samples can further help address domain shifts.
To explore the full potential of landmarks, we incorporate a prototypical alignment (PA) module which calculates a target prototype for each class from the landmarks.
Specifically, we severely perturb the labeled images, making PA non-trivial to achieve and thus promoting model generalizability.
arXiv Detail & Related papers (2021-04-19T08:46:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.