Using Unreliable Pseudo-Labels for Label-Efficient Semantic Segmentation
- URL: http://arxiv.org/abs/2306.02314v2
- Date: Tue, 20 Aug 2024 14:30:35 GMT
- Title: Using Unreliable Pseudo-Labels for Label-Efficient Semantic Segmentation
- Authors: Haochen Wang, Yuchao Wang, Yujun Shen, Junsong Fan, Yuxi Wang, Zhaoxiang Zhang
- Abstract summary: We argue that every pixel matters to the model training, even those unreliable and ambiguous pixels.
We separate reliable and unreliable pixels via the entropy of predictions and push each unreliable pixel into a category-wise queue of negative keys.
Considering the training evolution, we adaptively adjust the threshold for the reliable-unreliable partition.
- Score: 78.56076985502291
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The crux of label-efficient semantic segmentation is to produce high-quality pseudo-labels so as to leverage a large amount of unlabeled or weakly labeled data. A common practice is to select the highly confident predictions as the pseudo-ground-truths for each pixel, but this leaves most pixels unused because of their unreliability. However, we argue that every pixel matters to the model training, even those unreliable and ambiguous pixels. Intuitively, an unreliable prediction may get confused among the top classes; however, it should be confident that the pixel does not belong to the remaining classes. Hence, such a pixel can be convincingly treated as a negative key for those most unlikely categories. We therefore develop an effective pipeline to make sufficient use of unlabeled data. Concretely, we separate reliable and unreliable pixels via the entropy of predictions, push each unreliable pixel into a category-wise queue of negative keys, and manage to train the model with all candidate pixels. Considering the training evolution, we adaptively adjust the threshold for the reliable-unreliable partition. Experimental results on various benchmarks and training settings demonstrate the superiority of our approach over state-of-the-art alternatives.
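To make the described pipeline concrete, below is a minimal PyTorch-style sketch, not the authors' released implementation: the function names, tensor shapes, the `percent_unreliable` schedule (supplied by the caller), and the plain-list queues are illustrative assumptions.

```python
import torch

def partition_by_entropy(logits, percent_unreliable):
    """Split pixels into reliable / unreliable sets by prediction entropy.

    logits: (B, C, H, W) predictions on unlabeled images (hypothetical shape).
    percent_unreliable: fraction of pixels treated as unreliable; the paper
    adapts this threshold as training evolves, modeled here by the caller.
    """
    probs = torch.softmax(logits, dim=1)                              # (B, C, H, W)
    entropy = -torch.sum(probs * torch.log(probs + 1e-10), dim=1)     # (B, H, W)

    # Adaptive partition: the top `percent_unreliable` most-entropic pixels
    # are deemed unreliable; the rest receive ordinary pseudo-labels.
    thresh = torch.quantile(entropy.flatten(), 1.0 - percent_unreliable)
    unreliable_mask = entropy > thresh
    reliable_mask = ~unreliable_mask

    pseudo_labels = probs.argmax(dim=1)                               # (B, H, W)
    return pseudo_labels, reliable_mask, unreliable_mask, probs

def push_negative_keys(probs, unreliable_mask, features, queues, low_rank=3):
    """Treat each unreliable pixel as a negative key for its unlikely classes.

    features: (B, D, H, W) pixel embeddings; queues: dict {class_id: [keys]}
    (both illustrative). Classes ranked below `low_rank` in a pixel's
    predicted distribution are its "most unlikely" categories.
    """
    rank = probs.argsort(dim=1, descending=True)                      # (B, C, H, W)
    feats = features.permute(0, 2, 3, 1)[unreliable_mask]             # (N, D)
    pixel_rank = rank.permute(0, 2, 3, 1)[unreliable_mask]            # (N, C)
    for feat, order in zip(feats, pixel_rank):
        for cls in order[low_rank:].tolist():                         # unlikely classes
            queues[cls].append(feat.detach())
    return queues
```

In a full training loop, `percent_unreliable` would be annealed downward as predictions become trustworthy, and the category-wise queues would feed a contrastive term that pushes anchor features away from these negative keys while the reliable pixels are supervised with their pseudo-labels.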
Related papers
- Weighting Pseudo-Labels via High-Activation Feature Index Similarity and Object Detection for Semi-Supervised Segmentation [33.384621509857524]
Semi-supervised semantic segmentation methods leverage unlabeled data by pseudo-labeling them.
Existing methods mostly choose high-confidence pixels in an effort to avoid erroneous pseudo-labels.
We propose a novel approach to reliably learn from pseudo-labels.
arXiv Detail & Related papers (2024-07-17T14:58:04Z)
- Semi-supervised Counting via Pixel-by-pixel Density Distribution Modelling [135.66138766927716]
This paper focuses on semi-supervised crowd counting, where only a small portion of the training data are labeled.
We formulate the pixel-wise density value to regress as a probability distribution, instead of a single deterministic value.
Our method clearly outperforms the competitors by a large margin under various labeled ratio settings.
arXiv Detail & Related papers (2024-02-23T12:48:02Z)
- SePiCo: Semantic-Guided Pixel Contrast for Domain Adaptive Semantic Segmentation [52.62441404064957]
Domain adaptive semantic segmentation attempts to make satisfactory dense predictions on an unlabeled target domain by utilizing the model trained on a labeled source domain.
Many methods try to alleviate noisy pseudo-labels; however, they ignore intrinsic connections among cross-domain pixels with similar semantic concepts.
We propose Semantic-Guided Pixel Contrast (SePiCo), a novel one-stage adaptation framework that highlights the semantic concepts of individual pixels.
arXiv Detail & Related papers (2022-04-19T11:16:29Z)
- Semi-Supervised Semantic Segmentation Using Unreliable Pseudo-Labels [29.32275289325213]
We argue that every pixel matters to the model training, even if its prediction is ambiguous.
We separate reliable and unreliable pixels via the entropy of predictions, push each unreliable pixel to a category-wise queue that consists of negative samples, and manage to train the model with all candidate pixels.
arXiv Detail & Related papers (2022-03-08T07:16:23Z)
- Learning with Proper Partial Labels [87.65718705642819]
Partial-label learning is a kind of weakly-supervised learning with inexact labels.
We show that this proper partial-label learning framework includes many previous partial-label learning settings.
We then derive a unified unbiased estimator of the classification risk.
arXiv Detail & Related papers (2021-12-23T01:37:03Z)
- Pixel-by-Pixel Cross-Domain Alignment for Few-Shot Semantic Segmentation [16.950853152484203]
We consider the task of semantic segmentation in autonomous driving applications.
In this context, aligning the domains is made more challenging by the pixel-wise class imbalance.
We propose a novel framework called Pixel-By-Pixel Cross-Domain Alignment (PixDA).
arXiv Detail & Related papers (2021-10-22T08:27:17Z)
- Mixed Supervision Learning for Whole Slide Image Classification [88.31842052998319]
We propose a mixed supervision learning framework for super high-resolution images.
During the patch training stage, this framework can make use of coarse image-level labels to refine self-supervised learning.
A comprehensive strategy is proposed to suppress pixel-level false positives and false negatives.
arXiv Detail & Related papers (2021-07-02T09:46:06Z)
- KRADA: Known-region-aware Domain Alignment for Open World Semantic Segmentation [64.03817806316903]
In semantic segmentation, we aim to train a pixel-level classifier to assign category labels to all pixels in an image.
In an open world, the unlabeled test images may contain unknown categories and follow different distributions from the labeled images.
We propose an end-to-end learning framework, known-region-aware domain alignment (KRADA), to distinguish unknown classes while aligning distributions of known classes in labeled and unlabeled open-world images.
arXiv Detail & Related papers (2021-06-11T08:43:59Z)