Hard-aware Instance Adaptive Self-training for Unsupervised Cross-domain
Semantic Segmentation
- URL: http://arxiv.org/abs/2302.06992v1
- Date: Tue, 14 Feb 2023 11:52:26 GMT
- Title: Hard-aware Instance Adaptive Self-training for Unsupervised Cross-domain
Semantic Segmentation
- Authors: Chuang Zhu, Kebin Liu, Wenqi Tang, Ke Mei, Jiaqi Zou, Tiejun Huang
- Abstract summary: We propose a hard-aware instance adaptive self-training framework for UDA on the task of semantic segmentation.
We develop a novel pseudo-label generation strategy with an instance adaptive selector.
Experiments on GTA5 to Cityscapes, SYNTHIA to Cityscapes, and Cityscapes to Oxford RobotCar demonstrate the superior performance of our approach.
- Score: 18.807921765977415
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The divergence between labeled training data and unlabeled testing data is a
significant challenge for recent deep learning models. Unsupervised domain
adaptation (UDA) attempts to solve this problem. Recent works show that
self-training is a powerful approach to UDA. However, existing methods struggle
to balance scalability and performance. In this paper, we
propose a hard-aware instance adaptive self-training framework for UDA on the
task of semantic segmentation. To effectively improve the quality and diversity
of pseudo-labels, we develop a novel pseudo-label generation strategy with an
instance adaptive selector. We further enrich the hard-class pseudo-labels with
inter-image information through a carefully designed hard-aware pseudo-label
augmentation. In addition, we propose region-adaptive regularization to smooth
the pseudo-label region and sharpen the non-pseudo-label region. For the
non-pseudo-label region, a consistency constraint is also constructed to
introduce stronger supervision signals during model optimization. Our method is
concise and efficient, and can easily be generalized to other UDA
methods. Experiments on GTA5 to Cityscapes, SYNTHIA to Cityscapes, and
Cityscapes to Oxford RobotCar demonstrate the superior performance of our
approach compared with the state-of-the-art methods.
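The instance adaptive selector is only described at abstract level here; as a rough illustration of the general idea, per-image (instance-adaptive) confidence thresholds can keep more pixels for hard classes than a single global cutoff would. The following is a minimal NumPy sketch under stated assumptions (the function name, the blending parameter `alpha`, and the exact thresholding rule are illustrative, not the authors' implementation):

```python
import numpy as np

def instance_adaptive_pseudo_labels(probs, base_thresh=0.9, alpha=0.8,
                                    ignore_index=255):
    """Select pseudo-labels with a per-image (instance-adaptive) threshold.

    probs: (C, H, W) softmax output for one target image.
    For each predicted class, the threshold interpolates between a fixed
    base value and the mean confidence this image attains for that class,
    so hard classes retain more pixels than a global cutoff would allow.
    """
    C, H, W = probs.shape
    conf = probs.max(axis=0)    # (H, W) max confidence per pixel
    pred = probs.argmax(axis=0) # (H, W) predicted class per pixel

    labels = np.full((H, W), ignore_index, dtype=np.int64)
    for c in range(C):
        mask = pred == c
        if not mask.any():
            continue
        # instance-adaptive threshold: blend global base with this image's mean
        thresh = alpha * base_thresh + (1 - alpha) * conf[mask].mean()
        keep = mask & (conf >= thresh)
        labels[keep] = c
    return labels
```

Pixels below the adaptive threshold receive `ignore_index` and contribute no gradient during self-training; only the confident subset acts as supervision.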
Related papers
- Towards Modality-agnostic Label-efficient Segmentation with Entropy-Regularized Distribution Alignment [62.73503467108322]
This topic is widely studied in 3D point cloud segmentation due to the difficulty of annotating point clouds densely.
Until recently, pseudo-labels have been widely employed to facilitate training with limited ground-truth labels.
Existing pseudo-labeling approaches can suffer heavily from noise and variation in unlabelled data.
We propose a novel learning strategy to regularize the pseudo-labels generated for training, thus effectively narrowing the gaps between pseudo-labels and model predictions.
arXiv Detail & Related papers (2024-08-29T13:31:15Z) - SAM4UDASS: When SAM Meets Unsupervised Domain Adaptive Semantic
Segmentation in Intelligent Vehicles [27.405213492173186]
We introduce SAM4UDASS, a novel approach that incorporates the Segment Anything Model (SAM) into self-training UDA methods for refining pseudo-labels.
It involves Semantic-Guided Mask Labeling, which assigns semantic labels to unlabeled SAM masks using UDA pseudo-labels.
It brings more than 3% mIoU gains on GTA5-to-Cityscapes, SYNTHIA-to-Cityscapes, and Cityscapes-to-ACDC when using DAFormer, and achieves state-of-the-art results when using MIC.
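The Semantic-Guided Mask Labeling step described above can be approximated as a majority vote of existing UDA pseudo-labels inside each SAM mask, then writing the winning class back over the whole mask. The sketch below is a generic illustration of that idea, not the SAM4UDASS implementation (the function name, the `min_overlap` parameter, and the vote rule are assumptions):

```python
import numpy as np

def label_sam_masks(sam_masks, uda_pseudo_label, min_overlap=0.5,
                    ignore_index=255):
    """Assign a semantic class to each unlabeled SAM mask by majority vote
    over the UDA pseudo-labels it covers, then propagate that class to the
    whole mask to refine the pseudo-label map.

    sam_masks: list of boolean (H, W) arrays, one per SAM mask.
    uda_pseudo_label: (H, W) int array; ignore_index marks unlabeled pixels.
    """
    refined = uda_pseudo_label.copy()
    for mask in sam_masks:
        votes = uda_pseudo_label[mask]
        votes = votes[votes != ignore_index]
        if votes.size == 0:
            continue
        classes, counts = np.unique(votes, return_counts=True)
        winner = classes[counts.argmax()]
        # only relabel if the winning class already covers enough of the mask
        if counts.max() / mask.sum() >= min_overlap:
            refined[mask] = winner
    return refined
```

Because SAM masks tend to align with object boundaries, filling each mask with a single class can clean up ragged pseudo-label edges and label pixels the UDA model left as ignore.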
arXiv Detail & Related papers (2023-11-22T08:29:45Z) - Unsupervised Domain Adaptation for Semantic Segmentation with Pseudo
Label Self-Refinement [9.69089112870202]
We propose an auxiliary pseudo-label refinement network (PRN) for online refining of the pseudo labels and also localizing the pixels whose predicted labels are likely to be noisy.
We evaluate our approach on benchmark datasets with three different domain shifts, and our approach consistently performs significantly better than the previous state-of-the-art methods.
arXiv Detail & Related papers (2023-10-25T20:31:07Z) - Unsupervised Domain Adaptive Salient Object Detection Through
Uncertainty-Aware Pseudo-Label Learning [104.00026716576546]
We propose to learn saliency from synthetic but clean labels, which naturally have higher pixel-labeling quality without the effort of manual annotation.
We show that our proposed method outperforms the existing state-of-the-art deep unsupervised SOD methods on several benchmark datasets.
arXiv Detail & Related papers (2022-02-26T16:03:55Z) - STRUDEL: Self-Training with Uncertainty Dependent Label Refinement
across Domains [4.812718493682454]
We propose an unsupervised domain adaptation (UDA) approach for white matter hyperintensity (WMH) segmentation.
We propose to predict the uncertainty of pseudo labels and integrate it in the training process with an uncertainty-guided loss function to highlight labels with high certainty.
Our results on WMH segmentation across datasets demonstrate the significant improvement of STRUDEL with respect to standard self-training.
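An uncertainty-guided loss of the kind described above can be sketched as a per-pixel cross-entropy down-weighted by predicted uncertainty, so that high-certainty pseudo-labels dominate the gradient. This is a generic NumPy illustration under stated assumptions, not STRUDEL's actual loss (the function name and the `1 - uncertainty` weighting are assumptions):

```python
import numpy as np

def uncertainty_guided_ce(probs, pseudo_labels, uncertainty, ignore_index=255):
    """Cross-entropy on pseudo-labels, down-weighted by per-pixel uncertainty.

    probs: (C, H, W) softmax probabilities; pseudo_labels: (H, W) ints;
    uncertainty: (H, W) values in [0, 1], e.g. normalized MC-dropout variance.
    """
    valid = pseudo_labels != ignore_index
    # gather the probability of the pseudo-label class on valid pixels only
    h, w = np.nonzero(valid)
    p_true = probs[pseudo_labels[valid], h, w]
    per_pixel = -np.log(np.clip(p_true, 1e-8, 1.0))
    weights = 1.0 - uncertainty[valid]  # high certainty -> high weight
    return float((per_pixel * weights).sum() / max(valid.sum(), 1))
```

With this weighting, a noisy pseudo-label flagged as uncertain contributes little to the loss, which is the mechanism the summary describes for highlighting labels with high certainty.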
arXiv Detail & Related papers (2021-04-23T13:46:26Z) - Semi-Supervised Domain Adaptation with Prototypical Alignment and
Consistency Learning [86.6929930921905]
This paper studies how much domain shift can be mitigated when a few labeled target samples are additionally available.
To explore the full potential of landmarks, we incorporate a prototypical alignment (PA) module which calculates a target prototype for each class from the landmarks.
Specifically, we severely perturb the labeled images, making PA non-trivial to achieve and thus promoting model generalizability.
arXiv Detail & Related papers (2021-04-19T08:46:08Z) - Cycle Self-Training for Domain Adaptation [85.14659717421533]
Cycle Self-Training (CST) is a principled self-training algorithm that enforces pseudo-labels to generalize across domains.
CST recovers target ground truth, while both invariant feature learning and vanilla self-training fail.
Empirical results indicate that CST significantly improves over prior state-of-the-art methods on standard UDA benchmarks.
arXiv Detail & Related papers (2021-03-05T10:04:25Z) - Selective Pseudo-Labeling with Reinforcement Learning for
Semi-Supervised Domain Adaptation [116.48885692054724]
We propose a reinforcement learning based selective pseudo-labeling method for semi-supervised domain adaptation.
We develop a deep Q-learning model to select both accurate and representative pseudo-labeled instances.
Our proposed method is evaluated on several benchmark datasets for SSDA, and demonstrates superior performance to all the comparison methods.
arXiv Detail & Related papers (2020-12-07T03:37:38Z) - PseudoSeg: Designing Pseudo Labels for Semantic Segmentation [78.35515004654553]
We present a re-design of pseudo-labeling to generate structured pseudo labels for training with unlabeled or weakly-labeled data.
We demonstrate the effectiveness of the proposed pseudo-labeling strategy in both low-data and high-data regimes.
arXiv Detail & Related papers (2020-10-19T17:59:30Z) - Instance Adaptive Self-Training for Unsupervised Domain Adaptation [19.44504738538047]
We propose an instance adaptive self-training framework for UDA on the task of semantic segmentation.
To effectively improve the quality of pseudo-labels, we develop a novel pseudo-label generation strategy with an instance adaptive selector.
Our method is concise and efficient, and can easily be generalized to other unsupervised domain adaptation methods.
arXiv Detail & Related papers (2020-08-27T15:50:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.