ST++: Make Self-training Work Better for Semi-supervised Semantic
Segmentation
- URL: http://arxiv.org/abs/2106.05095v1
- Date: Wed, 9 Jun 2021 14:18:32 GMT
- Authors: Lihe Yang, Wei Zhuo, Lei Qi, Yinghuan Shi, Yang Gao
- Abstract summary: We investigate whether self-training -- a simple but popular framework -- can be made to work better for semi-supervised segmentation.
We propose an advanced self-training framework (namely ST++) that performs selective re-training by selecting and prioritizing the more reliable unlabeled images.
As a result, the proposed ST++ significantly boosts the performance of the semi-supervised model and surpasses existing methods by a large margin on the Pascal VOC 2012 and Cityscapes benchmarks.
- Score: 23.207191521477654
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we investigate whether self-training -- a simple but
popular framework -- can be made to work better for semi-supervised
segmentation. Since the core issue in the semi-supervised setting lies in the
effective and efficient utilization of unlabeled data, we observe that
increasing the diversity and hardness of unlabeled data is crucial to
performance improvement. Motivated by this observation, we propose to adopt the
plainest self-training scheme coupled with appropriate strong data
augmentations on unlabeled data (namely ST) for this task, which surprisingly
outperforms previous methods under various settings without any bells and
whistles. Moreover, to alleviate the negative impact of wrongly pseudo-labeled
images, we further propose an advanced self-training framework (namely ST++)
that performs selective re-training by selecting and prioritizing the more
reliable unlabeled images. As a result, the proposed ST++ significantly boosts
the performance of the semi-supervised model and surpasses existing methods by
a large margin on the Pascal VOC 2012 and Cityscapes benchmarks. Overall, we
hope this straightforward and simple framework will serve as a strong baseline
or competitor for future work. Code is available at
https://github.com/LiheYoung/ST-PlusPlus.
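The selective re-training idea in ST++ can be sketched in a few lines. The framework ranks unlabeled images by how stable their pseudo-masks are across training checkpoints, re-training first on the most stable ones. The NumPy sketch below is an illustration of that ranking step under stated assumptions, not the released implementation: the function names `mask_iou` and `rank_by_stability` are made up here, and the score (mean IoU between each earlier checkpoint's pseudo-mask and the final checkpoint's mask) is a simplified reading of the paper's stability measure.

```python
import numpy as np

def mask_iou(a, b, num_classes):
    """Mean IoU between two integer label masks of the same shape."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(a == c, b == c).sum()
        union = np.logical_or(a == c, b == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else 1.0

def rank_by_stability(checkpoint_masks, num_classes):
    """checkpoint_masks: {image_id: [mask_ckpt1, ..., mask_final]}.
    Score each image by the mean IoU between earlier checkpoints'
    pseudo-masks and the final checkpoint's mask, then return image
    ids sorted most-reliable first."""
    scores = {}
    for img_id, masks in checkpoint_masks.items():
        final = masks[-1]
        scores[img_id] = np.mean(
            [mask_iou(m, final, num_classes) for m in masks[:-1]]
        )
    return sorted(scores, key=scores.get, reverse=True)
```

A high stability score suggests the model's predictions for that image have already converged, so its pseudo-labels are more trustworthy for the first re-training stage.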
Related papers
- Incremental Self-training for Semi-supervised Learning [56.57057576885672]
IST is simple yet effective and integrates readily with existing self-training-based semi-supervised learning methods.
We verify the proposed IST on five datasets and two backbone types, effectively improving recognition accuracy and learning speed.
arXiv Detail & Related papers (2024-04-14T05:02:00Z)
- Improving Semi-Supervised Semantic Segmentation with Dual-Level Siamese Structure Network [7.438140196173472]
We propose a dual-level Siamese structure network (DSSN) for pixel-wise contrastive learning.
We introduce a novel class-aware pseudo-label selection strategy for weak-to-strong supervision.
Our proposed method achieves state-of-the-art results on two datasets.
arXiv Detail & Related papers (2023-07-26T03:30:28Z)
- SEPT: Towards Scalable and Efficient Visual Pre-Training [11.345844145289524]
Self-supervised pre-training has shown great potential in leveraging large-scale unlabeled data to improve downstream task performance.
We build a task-specific self-supervised pre-training framework based on a simple hypothesis that pre-training on the unlabeled samples with similar distribution to the target task can bring substantial performance gains.
arXiv Detail & Related papers (2022-12-11T11:02:11Z)
- LESS: Label-Efficient Semantic Segmentation for LiDAR Point Clouds [62.49198183539889]
We propose a label-efficient semantic segmentation pipeline for outdoor scenes with LiDAR point clouds.
Our method co-designs an efficient labeling process with semi/weakly supervised learning.
Our proposed method is even highly competitive compared to the fully supervised counterpart with 100% labels.
arXiv Detail & Related papers (2022-10-14T19:13:36Z)
- Novel Class Discovery in Semantic Segmentation [104.30729847367104]
We introduce a new setting of Novel Class Discovery in Semantic Segmentation (NCDSS).
It aims at segmenting unlabeled images containing new classes given prior knowledge from a labeled set of disjoint classes.
In NCDSS, we need to distinguish the objects and background, and to handle the existence of multiple classes within an image.
We propose the Entropy-based Uncertainty Modeling and Self-training (EUMS) framework to overcome noisy pseudo-labels.
arXiv Detail & Related papers (2021-12-03T13:31:59Z)
- A Simple Baseline for Semi-supervised Semantic Segmentation with Strong Data Augmentation [74.8791451327354]
We propose a simple yet effective semi-supervised learning framework for semantic segmentation.
A set of simple design and training techniques can collectively improve the performance of semi-supervised semantic segmentation significantly.
Our method achieves state-of-the-art results in the semi-supervised settings on the Cityscapes and Pascal VOC datasets.
arXiv Detail & Related papers (2021-04-15T06:01:39Z)
- Prior Guided Feature Enrichment Network for Few-Shot Segmentation [64.91560451900125]
State-of-the-art semantic segmentation methods require sufficient labeled data to achieve good results.
Few-shot segmentation is proposed to tackle this problem by learning a model that quickly adapts to new classes with a few labeled support samples.
These frameworks still face the challenge of reduced generalization to unseen classes due to inappropriate use of high-level semantic information.
arXiv Detail & Related papers (2020-08-04T10:41:32Z)
- Improving Semantic Segmentation via Self-Training [75.07114899941095]
We show that we can obtain state-of-the-art results using a semi-supervised approach, specifically a self-training paradigm.
We first train a teacher model on labeled data, and then generate pseudo labels on a large set of unlabeled data.
Our robust training framework can digest human-annotated and pseudo labels jointly and achieve top performances on Cityscapes, CamVid and KITTI datasets.
arXiv Detail & Related papers (2020-04-30T17:09:17Z)
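The teacher-to-pseudo-label step described in the entry above (train a teacher on labeled data, then label a large unlabeled set) can be sketched as follows. This is a hedged illustration rather than the authors' code: the confidence threshold and the ignore index 255 are common conventions in segmentation pipelines (e.g. for masking pixels out of the loss), assumed here for clarity.

```python
import numpy as np

def pseudo_label(teacher_probs, threshold=0.9, ignore_index=255):
    """Turn a teacher's per-pixel class probabilities (H, W, C) into
    hard pseudo-labels, masking low-confidence pixels with an ignore
    index so a student's loss can skip them during joint training."""
    conf = teacher_probs.max(axis=-1)        # per-pixel confidence
    labels = teacher_probs.argmax(axis=-1)   # per-pixel class id
    labels[conf < threshold] = ignore_index  # drop uncertain pixels
    return labels
```

The resulting label maps can be mixed with human-annotated masks in the same training loop, with the loss configured to ignore the masked index.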
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.