Semi-Supervised Semantic Segmentation via Gentle Teaching Assistant
- URL: http://arxiv.org/abs/2301.07340v1
- Date: Wed, 18 Jan 2023 07:11:24 GMT
- Title: Semi-Supervised Semantic Segmentation via Gentle Teaching Assistant
- Authors: Ying Jin, Jiaqi Wang, and Dahua Lin
- Abstract summary: We argue that the unlabeled data with pseudo labels can facilitate the learning of representative features in the feature extractor.
Motivated by this consideration, we propose a novel framework, Gentle Teaching Assistant (GTA-Seg) to disentangle the effects of pseudo labels on feature extractor and mask predictor.
- Score: 72.4512562104361
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semi-Supervised Semantic Segmentation aims at training the segmentation model
with limited labeled data and a large amount of unlabeled data. To effectively
leverage the unlabeled data, pseudo labeling, along with the teacher-student
framework, is widely adopted in semi-supervised semantic segmentation. Though
proven effective, this paradigm suffers from incorrect pseudo labels,
which inevitably exist and are taken as auxiliary training data. To alleviate
the negative impact of incorrect pseudo labels, we delve into the current
Semi-Supervised Semantic Segmentation frameworks. We argue that the unlabeled
data with pseudo labels can facilitate the learning of representative features
in the feature extractor, but it is unreliable to supervise the mask predictor.
Motivated by this consideration, we propose a novel framework, Gentle Teaching
Assistant (GTA-Seg) to disentangle the effects of pseudo labels on feature
extractor and mask predictor of the student model. Specifically, in addition to
the original teacher-student framework, our method introduces a teaching
assistant network which directly learns from pseudo labels generated by the
teacher network. The teaching assistant (GTA) is dubbed gentle since it
only transfers the beneficial feature representation knowledge in the feature
extractor to the student model in an Exponential Moving Average (EMA) manner,
protecting the student model from the negative influences caused by unreliable
pseudo labels in the mask predictor. The student model is also supervised by
reliable labeled data to train an accurate mask predictor, further facilitating
feature representation. Extensive experiment results on benchmark datasets
validate that our method shows competitive performance against previous
methods. Code is available at https://github.com/Jin-Ying/GTA-Seg.
Related papers
- Label Matching Semi-Supervised Object Detection [85.99282969977541]
Semi-supervised object detection has made significant progress with the development of mean teacher driven self-training.
Label mismatch problem is not yet fully explored in the previous works, leading to severe confirmation bias during self-training.
We propose a simple yet effective LabelMatch framework from two different yet complementary perspectives.
arXiv Detail & Related papers (2022-06-14T05:59:41Z)
- GuidedMix-Net: Semi-supervised Semantic Segmentation by Using Labeled Images as Reference [90.5402652758316]
We propose a novel method for semi-supervised semantic segmentation named GuidedMix-Net.
It uses labeled information to guide the learning of unlabeled instances.
It achieves competitive segmentation accuracy and significantly improves the mIoU by +7% compared to previous approaches.
arXiv Detail & Related papers (2021-12-28T06:48:03Z)
- GuidedMix-Net: Learning to Improve Pseudo Masks Using Labeled Images as Reference [153.354332374204]
We propose a novel method for semi-supervised semantic segmentation named GuidedMix-Net.
We first introduce a feature alignment objective between labeled and unlabeled data to capture potentially similar image pairs.
MITrans is shown to be a powerful knowledge module for further progressively refining the features of unlabeled data.
Along with supervised learning for labeled data, the prediction of unlabeled data is jointly learned with the generated pseudo masks.
arXiv Detail & Related papers (2021-06-29T02:48:45Z)
- SLADE: A Self-Training Framework For Distance Metric Learning [75.54078592084217]
We present a self-training framework, SLADE, to improve retrieval performance by leveraging additional unlabeled data.
We first train a teacher model on the labeled data and use it to generate pseudo labels for the unlabeled data.
We then train a student model on both the labels and the pseudo labels to generate final feature embeddings.
arXiv Detail & Related papers (2020-11-20T08:26:10Z)
- Deep Semi-supervised Knowledge Distillation for Overlapping Cervical Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data for instance segmentation with improved accuracy by knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method significantly improves performance compared with the supervised method trained on labeled data only.
arXiv Detail & Related papers (2020-07-21T13:27:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.