ECAP: Extensive Cut-and-Paste Augmentation for Unsupervised Domain
Adaptive Semantic Segmentation
- URL: http://arxiv.org/abs/2403.03854v1
- Date: Wed, 6 Mar 2024 17:06:07 GMT
- Title: ECAP: Extensive Cut-and-Paste Augmentation for Unsupervised Domain
Adaptive Semantic Segmentation
- Authors: Erik Brorsson, Knut Åkesson, Lennart Svensson, Kristofer Bengtsson
- Abstract summary: We propose an extensive cut-and-paste strategy (ECAP) to leverage reliable pseudo-labels through data augmentation.
ECAP maintains a memory bank of pseudo-labeled target samples throughout training and cut-and-pastes the most confident ones onto the current training batch.
We implement ECAP on top of the recent method MIC and boost its performance on two synthetic-to-real domain adaptation benchmarks.
- Score: 4.082799056366928
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider unsupervised domain adaptation (UDA) for semantic segmentation in
which the model is trained on a labeled source dataset and adapted to an
unlabeled target dataset. Unfortunately, current self-training methods are
susceptible to misclassified pseudo-labels resulting from erroneous
predictions. Since certain classes are typically associated with less reliable
predictions in UDA, reducing the impact of such pseudo-labels without skewing
the training towards some classes is notoriously difficult. To this end, we
propose an extensive cut-and-paste strategy (ECAP) to leverage reliable
pseudo-labels through data augmentation. Specifically, ECAP maintains a memory
bank of pseudo-labeled target samples throughout training and cut-and-pastes
the most confident ones onto the current training batch. We implement ECAP on
top of the recent method MIC and boost its performance on two synthetic-to-real
domain adaptation benchmarks. Notably, MIC+ECAP reaches an unprecedented
performance of 69.1 mIoU on the Synthia->Cityscapes benchmark. Our code is
available at https://github.com/ErikBrorsson/ECAP.
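As a rough illustration of the mechanism described above, here is a minimal, hypothetical PyTorch sketch of a confidence-ranked memory bank with cut-and-paste mixing. All names, the confidence measure, and the pasting policy are our assumptions rather than the authors' implementation; the linked repository is the authoritative reference.
```python
import heapq
import torch

class ECAPMemoryBank:
    """Sketch of a confidence-ranked memory bank (hypothetical, not the
    authors' implementation; see the official repository for the real one)."""

    def __init__(self, capacity=512):
        self.capacity = capacity
        self.bank = []        # min-heap of (confidence, counter, image, label)
        self._counter = 0     # tie-breaker so heapq never compares tensors

    def add(self, image, pseudo_label, probs):
        # One scalar confidence per sample: mean max-softmax over pixels
        # (an assumed measure; the paper defines its own).
        confidence = probs.max(dim=0).values.mean().item()
        heapq.heappush(self.bank, (confidence, self._counter, image, pseudo_label))
        self._counter += 1
        if len(self.bank) > self.capacity:
            heapq.heappop(self.bank)  # evict the least confident sample

    def most_confident(self, k):
        return heapq.nlargest(k, self.bank)

def cut_and_paste(batch_imgs, batch_lbls, bank, k=2, ignore_index=255):
    """Paste the k most confident memory samples onto random batch members.
    Assumes equal spatial sizes; here all non-ignored pixels are pasted."""
    for _, _, mem_img, mem_lbl in bank.most_confident(k):
        mask = mem_lbl != ignore_index             # (H, W) bool
        i = torch.randint(len(batch_imgs), (1,)).item()
        batch_imgs[i][:, mask] = mem_img[:, mask]  # overwrite image pixels
        batch_lbls[i][mask] = mem_lbl[mask]        # and their pseudo-labels
    return batch_imgs, batch_lbls
```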
Related papers
- DaMSTF: Domain Adversarial Learning Enhanced Meta Self-Training for Domain Adaptation [20.697905456202754]
We propose a new self-training framework for domain adaptation, namely the Domain Adversarial Learning Enhanced Meta Self-Training Framework (DaMSTF).
DaMSTF involves meta-learning to estimate the importance of each pseudo instance, so as to simultaneously reduce the label noise and preserve hard examples.
DaMSTF improves the performance of BERT by an average of nearly 4%.
arXiv Detail & Related papers (2023-08-05T00:14:49Z)
- All Points Matter: Entropy-Regularized Distribution Alignment for Weakly-supervised 3D Segmentation [67.30502812804271]
Pseudo-labels are widely employed in weakly supervised 3D segmentation tasks where only sparse ground-truth labels are available for learning.
We propose a novel learning strategy to regularize the generated pseudo-labels and effectively narrow the gaps between pseudo-labels and model predictions.
arXiv Detail & Related papers (2023-05-25T08:19:31Z)
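As a rough illustration of what an entropy-based regularizer on point-wise predictions can look like, here is a hedged PyTorch sketch with assumed names; the paper's exact losses differ in detail.
```python
import torch
import torch.nn.functional as F

def entropy_regularizer(logits, eps=1e-8):
    """Mean Shannon entropy of the predicted distribution over N points.
    Minimizing it sharpens predictions, pulling them toward the (pseudo-)labels.
    logits: (N, C) raw scores."""
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs + eps)).sum(dim=1)  # (N,)
    return entropy.mean()

# Hypothetical usage alongside a pseudo-label cross-entropy term:
# loss = F.cross_entropy(logits, pseudo_labels) + 0.1 * entropy_regularizer(logits)
```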
- CAFS: Class Adaptive Framework for Semi-Supervised Semantic Segmentation [5.484296906525601]
Semi-supervised semantic segmentation learns a model for classifying pixels into specific classes using a few labeled samples and numerous unlabeled images.
We propose a class-adaptive semi-supervision framework for semi-supervised semantic segmentation (CAFS).
CAFS constructs a validation set on a labeled dataset to leverage the calibration performance for each class.
arXiv Detail & Related papers (2023-03-21T05:56:53Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
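For reference, a generic PyTorch sketch of the biased empirical MMD with an RBF kernel follows; in a memory-bank variant like the one the summary describes, one side of the comparison would be drawn from stored features. This is our generic sketch, not DaC's exact loss.
```python
import torch

def rbf_kernel(x, y, sigma=1.0):
    # x: (n, d), y: (m, d) feature batches
    sq_dists = torch.cdist(x, y) ** 2             # (n, m) squared distances
    return torch.exp(-sq_dists / (2 * sigma ** 2))

def mmd_loss(source_feats, target_feats, sigma=1.0):
    """Biased empirical MMD^2 between two sets of features."""
    k_ss = rbf_kernel(source_feats, source_feats, sigma).mean()
    k_tt = rbf_kernel(target_feats, target_feats, sigma).mean()
    k_st = rbf_kernel(source_feats, target_feats, sigma).mean()
    return k_ss + k_tt - 2 * k_st
```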
- Robust Target Training for Multi-Source Domain Adaptation [110.77704026569499]
We propose a novel Bi-level Optimization based Robust Target Training (BORT$^2$) method for MSDA.
Our proposed method achieves state-of-the-art performance on three MSDA benchmarks, including the large-scale DomainNet dataset.
arXiv Detail & Related papers (2022-10-04T15:20:01Z)
- Constraining Pseudo-label in Self-training Unsupervised Domain Adaptation with Energy-based Model [26.074500538428364]
Unsupervised domain adaptation (UDA) is developed to transfer the knowledge in a labeled source domain to an unlabeled target domain.
Recently, deep self-training has emerged as a powerful means for UDA, involving an iterative process of predicting pseudo-labels on the target domain.
We resort to an energy-based model and constrain the training of the unlabeled target samples with an energy function minimization objective.
arXiv Detail & Related papers (2022-08-26T22:50:23Z)
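The energy of a sample is commonly defined from the classifier logits as a negative log-sum-exp; the sketch below shows an objective of that general form. This is our assumption of the form, not necessarily the paper's exact loss.
```python
import torch

def energy(logits):
    """Free energy E(x) = -log sum_c exp(logit_c), shape (N,).
    Lower energy means the sample fits the model's learned distribution better."""
    return -torch.logsumexp(logits, dim=1)

# Hypothetical usage: penalize high-energy (unreliable) target samples.
# loss = ce_loss + lambda_e * energy(target_logits).mean()
```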
- Cycle Self-Training for Domain Adaptation [85.14659717421533]
Cycle Self-Training (CST) is a principled self-training algorithm that enforces pseudo-labels to generalize across domains.
CST recovers target ground truth, while both invariant feature learning and vanilla self-training fail.
Empirical results indicate that CST significantly improves over prior state-of-the-art methods on standard UDA benchmarks.
arXiv Detail & Related papers (2021-03-05T10:04:25Z)
- In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning [53.1047775185362]
Pseudo-labeling (PL) is a general SSL approach that, unlike consistency regularization, does not rely on domain-specific data augmentations, yet it performs relatively poorly in its original formulation.
We argue that PL underperforms due to the erroneous high confidence predictions from poorly calibrated models.
We propose an uncertainty-aware pseudo-label selection (UPS) framework which improves pseudo labeling accuracy by drastically reducing the amount of noise encountered in the training process.
arXiv Detail & Related papers (2021-01-15T23:29:57Z)
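A hedged sketch of the selection idea: keep a pseudo-label only when its confidence is high and its predictive uncertainty is low. The MC-dropout standard deviation, thresholds, and names below are our illustrative choices, not necessarily UPS's exact criteria.
```python
import torch

def select_pseudo_labels(mc_probs, conf_thresh=0.9, unc_thresh=0.05):
    """mc_probs: (T, N, C) softmax outputs from T stochastic forward passes
    (e.g. MC dropout). Returns pseudo-labels and a boolean selection mask."""
    mean_probs = mc_probs.mean(dim=0)                 # (N, C)
    confidence, labels = mean_probs.max(dim=1)        # (N,), (N,)
    # Uncertainty: std across passes of the chosen class's probability.
    chosen = mc_probs[:, torch.arange(mc_probs.size(1)), labels]  # (T, N)
    uncertainty = chosen.std(dim=0)                   # (N,)
    mask = (confidence > conf_thresh) & (uncertainty < unc_thresh)
    return labels, mask
```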
- Two-phase Pseudo Label Densification for Self-training based Domain Adaptation [93.03265290594278]
We propose a novel Two-phase Pseudo Label Densification framework, referred to as TPLD.
In the first phase, we use sliding window voting to propagate the confident predictions, utilizing intrinsic spatial correlations in the images.
In the second phase, we perform a confidence-based easy-hard classification.
To ease the training process and avoid noisy predictions, we introduce the bootstrapping mechanism to the original self-training loss.
arXiv Detail & Related papers (2020-12-09T02:35:25Z)
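A rough sketch of the first phase, sliding-window voting to densify sparse pseudo-labels: confident pixels vote for their class in a local window and unconfident pixels adopt the majority vote. The pooling-based implementation, window size, and names are our assumptions.
```python
import torch
import torch.nn.functional as F

def sliding_window_vote(probs, conf_thresh=0.9, window=7, ignore_index=255):
    """probs: (C, H, W) softmax map. Returns a densified (H, W) label map."""
    C, _, _ = probs.shape
    conf, labels = probs.max(dim=0)                                 # (H, W)
    confident = conf > conf_thresh
    # One-hot votes cast by confident pixels only.
    votes = F.one_hot(labels, C).permute(2, 0, 1).float() * confident
    # Accumulate votes over each window via pooling (stride 1 keeps H, W).
    pooled = F.avg_pool2d(votes.unsqueeze(0), window, stride=1,
                          padding=window // 2).squeeze(0)           # (C, H, W)
    vote_strength, vote_label = pooled.max(dim=0)
    dense = torch.where(confident, labels, vote_label)
    dense[vote_strength == 0] = ignore_index  # no confident neighbor at all
    return dense
```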
- ESL: Entropy-guided Self-supervised Learning for Domain Adaptation in Semantic Segmentation [35.03150829133562]
We propose Entropy-guided Self-supervised Learning (ESL), leveraging entropy as the confidence indicator for producing more accurate pseudo-labels.
On different UDA benchmarks, ESL consistently outperforms strong SSL baselines and achieves state-of-the-art results.
arXiv Detail & Related papers (2020-06-15T18:10:09Z)
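In the same spirit, a minimal sketch of entropy-guided pseudo-label filtering: pixels whose predictive entropy is high are ignored. The threshold and names are our illustrative choices rather than ESL's exact procedure.
```python
import torch

def entropy_filtered_pseudo_labels(probs, entropy_thresh=0.5, ignore_index=255):
    """probs: (C, H, W) softmax map. Builds a pseudo-label map that ignores
    pixels with high predictive entropy (low confidence)."""
    entropy = -(probs * torch.log(probs.clamp_min(1e-8))).sum(dim=0)  # (H, W)
    labels = probs.argmax(dim=0)                                      # (H, W)
    labels[entropy > entropy_thresh] = ignore_index
    return labels
```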