ESL: Entropy-guided Self-supervised Learning for Domain Adaptation in
Semantic Segmentation
- URL: http://arxiv.org/abs/2006.08658v1
- Date: Mon, 15 Jun 2020 18:10:09 GMT
- Title: ESL: Entropy-guided Self-supervised Learning for Domain Adaptation in
Semantic Segmentation
- Authors: Antoine Saporta, Tuan-Hung Vu, Matthieu Cord, Patrick Pérez
- Abstract summary: We propose Entropy-guided Self-supervised Learning, leveraging entropy as the confidence indicator for producing more accurate pseudo-labels.
On different UDA benchmarks, ESL consistently outperforms strong SSL baselines and achieves state-of-the-art results.
- Score: 35.03150829133562
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While fully-supervised deep learning yields good models for urban scene
semantic segmentation, these models struggle to generalize to new environments
with, for instance, different lighting or weather conditions. In addition,
producing the extensive pixel-level annotations that the task requires comes at
a great cost. Unsupervised domain adaptation (UDA) is one approach that tries
to address these issues in order to make such systems more scalable. In
particular, self-supervised learning (SSL) has recently become an effective
strategy for UDA in semantic segmentation. At the core of such methods lies
'pseudo-labeling': the practice of assigning high-confidence class predictions
to target data as pseudo-labels, which are subsequently used as true labels.
To collect pseudo-labels, previous works often rely on the highest softmax
score, which we argue is an unreliable confidence measure.
In this work, we propose Entropy-guided Self-supervised Learning (ESL),
leveraging entropy as the confidence indicator for producing more accurate
pseudo-labels. On different UDA benchmarks, ESL consistently outperforms strong
SSL baselines and achieves state-of-the-art results.
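The core idea, selecting pseudo-labels by prediction entropy rather than by the maximum softmax score, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the function name and the 0.5 entropy threshold are assumptions for the example.

```python
import numpy as np

def entropy_pseudo_labels(probs, threshold=0.5):
    """Keep pseudo-labels for pixels whose prediction entropy is low.

    probs: array of shape (H, W, C) holding per-pixel softmax probabilities.
    Returns (labels, mask): the argmax class per pixel and a boolean mask
    of pixels confident enough (low entropy) to retain as pseudo-labels.
    """
    eps = 1e-12  # avoids log(0)
    num_classes = probs.shape[-1]
    # Normalized Shannon entropy in [0, 1]; low entropy means high confidence.
    entropy = -np.sum(probs * np.log(probs + eps), axis=-1) / np.log(num_classes)
    labels = np.argmax(probs, axis=-1)
    mask = entropy < threshold
    return labels, mask

# Toy example: a 1x3 "image" with 3 classes.
probs = np.array([[[0.98, 0.01, 0.01],    # peaked -> low entropy, kept
                   [0.40, 0.35, 0.25],    # ambiguous -> high entropy, discarded
                   [0.05, 0.90, 0.05]]])  # peaked -> low entropy, kept
labels, mask = entropy_pseudo_labels(probs, threshold=0.5)
```

Note that the middle pixel would pass a naive max-softmax threshold of 0.4 despite its near-uniform distribution; its normalized entropy (about 0.98) exposes the ambiguity, which is the motivation for using entropy as the confidence indicator.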
Related papers
- Pseudo-label Refinement for Improving Self-Supervised Learning Systems [22.276126184466207]
Self-supervised learning systems use clustering-based pseudo-labels to provide supervision without the need for human annotations.
The noise in these pseudo-labels caused by the clustering methods poses a challenge to the learning process leading to degraded performance.
We propose a pseudo-label refinement algorithm to address this issue.
arXiv Detail & Related papers (2024-10-18T07:47:59Z)
- Co-training for Low Resource Scientific Natural Language Inference [65.37685198688538]
We propose a novel co-training method that assigns weights based on the training dynamics of the classifiers to the distantly supervised labels.
By assigning importance weights instead of filtering out examples based on an arbitrary threshold on the predicted confidence, we maximize the usage of automatically labeled data.
The proposed method obtains an improvement of 1.5% in Macro F1 over the distant supervision baseline, and substantial improvements over several other strong SSL baselines.
arXiv Detail & Related papers (2024-06-20T18:35:47Z)
- Semi-Supervised Class-Agnostic Motion Prediction with Pseudo Label Regeneration and BEVMix [59.55173022987071]
We study the potential of semi-supervised learning for class-agnostic motion prediction.
Our framework adopts a consistency-based self-training paradigm, enabling the model to learn from unlabeled data.
Our method exhibits comparable performance to weakly and some fully supervised methods.
arXiv Detail & Related papers (2023-12-13T09:32:50Z)
- Unsupervised Domain Adaptation for Semantic Segmentation with Pseudo Label Self-Refinement [9.69089112870202]
We propose an auxiliary pseudo-label refinement network (PRN) for online refining of the pseudo labels and also localizing the pixels whose predicted labels are likely to be noisy.
We evaluate our approach on benchmark datasets with three different domain shifts, and our approach consistently performs significantly better than the previous state-of-the-art methods.
arXiv Detail & Related papers (2023-10-25T20:31:07Z)
- All Points Matter: Entropy-Regularized Distribution Alignment for Weakly-supervised 3D Segmentation [67.30502812804271]
Pseudo-labels are widely employed in weakly supervised 3D segmentation tasks where only sparse ground-truth labels are available for learning.
We propose a novel learning strategy to regularize the generated pseudo-labels and effectively narrow the gaps between pseudo-labels and model predictions.
arXiv Detail & Related papers (2023-05-25T08:19:31Z)
- Constraining Pseudo-label in Self-training Unsupervised Domain Adaptation with Energy-based Model [26.074500538428364]
Unsupervised domain adaptation (UDA) transfers knowledge from the labeled source domain to the unlabeled target domain.
Recently, deep self-training has emerged as a powerful means for UDA, involving an iterative process of predicting pseudo-labels on the target domain.
We resort to the energy-based model and constrain the training of the unlabeled target sample with an energy function minimization objective.
arXiv Detail & Related papers (2022-08-26T22:50:23Z)
- Complementing Semi-Supervised Learning with Uncertainty Quantification [6.612035830987296]
We propose a novel unsupervised uncertainty-aware objective that relies on aleatoric and epistemic uncertainty quantification.
Our results outperform the state-of-the-art results on complex datasets such as CIFAR-100 and Mini-ImageNet.
arXiv Detail & Related papers (2022-07-22T00:15:02Z)
- Unsupervised Domain Adaptive Salient Object Detection Through Uncertainty-Aware Pseudo-Label Learning [104.00026716576546]
We propose to learn saliency from synthetic but clean labels, which naturally has higher pixel-labeling quality without the effort of manual annotations.
We show that our proposed method outperforms the existing state-of-the-art deep unsupervised SOD methods on several benchmark datasets.
arXiv Detail & Related papers (2022-02-26T16:03:55Z)
- In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning [53.1047775185362]
Pseudo-labeling (PL) is a general SSL approach that avoids such constraints but performs relatively poorly in its original formulation.
We argue that PL underperforms due to the erroneous high confidence predictions from poorly calibrated models.
We propose an uncertainty-aware pseudo-label selection (UPS) framework which improves pseudo labeling accuracy by drastically reducing the amount of noise encountered in the training process.
arXiv Detail & Related papers (2021-01-15T23:29:57Z)
- PseudoSeg: Designing Pseudo Labels for Semantic Segmentation [78.35515004654553]
We present a re-design of pseudo-labeling to generate structured pseudo labels for training with unlabeled or weakly-labeled data.
We demonstrate the effectiveness of the proposed pseudo-labeling strategy in both low-data and high-data regimes.
arXiv Detail & Related papers (2020-10-19T17:59:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.