A Simple Baseline for Semi-supervised Semantic Segmentation with Strong
Data Augmentation
- URL: http://arxiv.org/abs/2104.07256v2
- Date: Mon, 19 Apr 2021 08:11:16 GMT
- Title: A Simple Baseline for Semi-supervised Semantic Segmentation with Strong
Data Augmentation
- Authors: Jianlong Yuan, Yifan Liu, Chunhua Shen, Zhibin Wang, Hao Li
- Abstract summary: We propose a simple yet effective semi-supervised learning framework for semantic segmentation.
A set of simple design and training techniques can collectively improve the performance of semi-supervised semantic segmentation significantly.
Our method achieves state-of-the-art results in the semi-supervised settings on the Cityscapes and Pascal VOC datasets.
- Score: 74.8791451327354
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recently, significant progress has been made on semantic segmentation.
However, the success of supervised semantic segmentation typically relies on a
large amount of labelled data, which is time-consuming and costly to obtain.
Inspired by the success of semi-supervised learning methods in image
classification, here we propose a simple yet effective semi-supervised learning
framework for semantic segmentation. We demonstrate that the devil is in the
details: a set of simple design and training techniques can collectively
improve the performance of semi-supervised semantic segmentation significantly.
Previous works [3, 27] fail to employ strong augmentation in pseudo label
learning efficiently, as the large distribution change caused by strong
augmentation harms the batch normalisation statistics. We design a new batch
normalisation, namely distribution-specific batch normalisation (DSBN) to
address this problem and demonstrate the importance of strong augmentation for
semantic segmentation. Moreover, we design a self correction loss which is
effective in noise resistance. We conduct a series of ablation studies to show
the effectiveness of each component. Our method achieves state-of-the-art
results in the semi-supervised settings on the Cityscapes and Pascal VOC
datasets.
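The core idea behind DSBN, as the abstract describes it, is to keep separate batch-normalisation statistics for differently distributed inputs, so that the large distribution shift from strong augmentation cannot corrupt the statistics used for weakly augmented data. Below is a minimal, framework-free sketch of that idea for 1-D batches; the class name, momentum value, and per-distribution indexing are illustrative assumptions, not taken from the paper's implementation.

```python
class DistributionSpecificBN:
    """Sketch of distribution-specific batch normalisation (DSBN):
    one independent set of running statistics per input distribution,
    e.g. index 0 for weakly augmented batches and index 1 for strongly
    augmented ones. Updating one distribution never touches the other's
    statistics, which is the property the paper's abstract motivates."""

    def __init__(self, num_distributions=2, momentum=0.1, eps=1e-5):
        self.momentum = momentum
        self.eps = eps
        # Independent running statistics per distribution.
        self.running_mean = [0.0] * num_distributions
        self.running_var = [1.0] * num_distributions

    def __call__(self, batch, dist_id):
        # Normalise with the current batch statistics.
        n = len(batch)
        mean = sum(batch) / n
        var = sum((x - mean) ** 2 for x in batch) / n
        # Update only the selected distribution's running statistics.
        m = self.momentum
        self.running_mean[dist_id] = (1 - m) * self.running_mean[dist_id] + m * mean
        self.running_var[dist_id] = (1 - m) * self.running_var[dist_id] + m * var
        return [(x - mean) / (var + self.eps) ** 0.5 for x in batch]
```

In a real network each DSBN layer would wrap per-channel 2-D statistics (e.g. one `BatchNorm2d` per distribution in PyTorch), with the distribution index chosen by whether the forward pass sees weakly or strongly augmented inputs.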
Related papers
- Dense FixMatch: a simple semi-supervised learning method for pixel-wise prediction tasks [68.36996813591425]
We propose Dense FixMatch, a simple method for online semi-supervised learning of dense and structured prediction tasks.
We enable the application of FixMatch in semi-supervised learning problems beyond image classification by adding a matching operation on the pseudo-labels.
Dense FixMatch significantly improves results compared to supervised learning using only labeled data, approaching its performance with 1/4 of the labeled samples.
arXiv Detail & Related papers (2022-10-18T15:02:51Z)
- A Contrastive Distillation Approach for Incremental Semantic Segmentation in Aerial Images [15.75291664088815]
A major issue concerning current deep neural architectures is known as catastrophic forgetting.
We propose a contrastive regularization, where any given input is compared with its augmented version.
We show the effectiveness of our solution on the Potsdam dataset, outperforming the incremental baseline in every test.
arXiv Detail & Related papers (2021-12-07T16:44:45Z)
- Adaptive Affinity Loss and Erroneous Pseudo-Label Refinement for Weakly Supervised Semantic Segmentation [48.294903659573585]
In this paper, we propose to embed affinity learning of multi-stage approaches in a single-stage model.
A deep neural network is used to deliver comprehensive semantic information in the training phase.
Experiments are conducted on the PASCAL VOC 2012 dataset to evaluate the effectiveness of our proposed approach.
arXiv Detail & Related papers (2021-08-03T07:48:33Z)
- Flip Learning: Erase to Segment [65.84901344260277]
Weakly-supervised segmentation (WSS) can help reduce time-consuming and cumbersome manual annotation.
We propose a novel and general WSS framework called Flip Learning, which only needs the box annotation.
Our proposed approach achieves competitive performance and shows great potential to narrow the gap between fully-supervised and weakly-supervised learning.
arXiv Detail & Related papers (2021-08-02T09:56:10Z)
- ST++: Make Self-training Work Better for Semi-supervised Semantic Segmentation [23.207191521477654]
We investigate whether self-training, a simple but popular framework, can be made to work better for semi-supervised segmentation.
We propose an advanced self-training framework (namely ST++) that performs selective re-training via selecting and prioritizing the more reliable unlabeled images.
As a result, the proposed ST++ boosts the performance of the semi-supervised model significantly and surpasses existing methods by a large margin on the Pascal VOC 2012 and Cityscapes benchmarks.
arXiv Detail & Related papers (2021-06-09T14:18:32Z)
- A Simple but Tough-to-Beat Data Augmentation Approach for Natural Language Understanding and Generation [53.8171136907856]
We introduce a set of simple yet effective data augmentation strategies dubbed cutoff.
Cutoff relies on sampling consistency and thus adds little computational overhead.
cutoff consistently outperforms adversarial training and achieves state-of-the-art results on the IWSLT2014 German-English dataset.
arXiv Detail & Related papers (2020-09-29T07:08:35Z) - Deep Semi-supervised Knowledge Distillation for Overlapping Cervical
Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data for instance segmentation with improved accuracy by knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method improves the performance significantly compared with the supervised method learned from labeled data only.
arXiv Detail & Related papers (2020-07-21T13:27:09Z) - ClassMix: Segmentation-Based Data Augmentation for Semi-Supervised
Learning [4.205692673448206]
We propose a novel data augmentation mechanism called ClassMix, which generates augmentations by mixing unlabelled samples.
We evaluate this augmentation technique on two common semi-supervised semantic segmentation benchmarks, showing that it attains state-of-the-art results.
arXiv Detail & Related papers (2020-07-15T18:21:17Z) - Improving Semantic Segmentation via Self-Training [75.07114899941095]
We show that we can obtain state-of-the-art results using a semi-supervised approach, specifically a self-training paradigm.
We first train a teacher model on labeled data, and then generate pseudo labels on a large set of unlabeled data.
Our robust training framework can digest human-annotated and pseudo labels jointly and achieve top performances on Cityscapes, CamVid and KITTI datasets.
arXiv Detail & Related papers (2020-04-30T17:09:17Z)
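The self-training recipe summarised above (train a teacher on labelled data, then label the unlabelled set) hinges on turning teacher predictions into usable pseudo labels. A minimal sketch of that step for per-pixel class probabilities is shown below; the confidence threshold and the `-1` ignore-index are illustrative assumptions, not values taken from the paper.

```python
def generate_pseudo_labels(teacher_probs, threshold=0.9):
    """Sketch of the pseudo-labelling step in self-training:
    each pixel's class-probability vector from the teacher becomes a
    hard label, and pixels the teacher is unsure about are marked with
    -1 so a loss function can ignore them during joint training."""
    pseudo = []
    for probs in teacher_probs:      # probs: class probabilities for one pixel
        conf = max(probs)
        label = probs.index(conf)    # argmax class
        pseudo.append(label if conf >= threshold else -1)
    return pseudo
```

The student is then trained jointly on human-annotated labels and these pseudo labels, with the ignore-index excluding low-confidence pixels from the loss.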
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.