Seminar Learning for Click-Level Weakly Supervised Semantic Segmentation
- URL: http://arxiv.org/abs/2108.13393v1
- Date: Mon, 30 Aug 2021 17:27:43 GMT
- Title: Seminar Learning for Click-Level Weakly Supervised Semantic Segmentation
- Authors: Hongjun Chen, Jinbao Wang, Hong Cai Chen, Xiantong Zhen, Feng Zheng,
Rongrong Ji, Ling Shao
- Abstract summary: We propose seminar learning, a new learning paradigm for semantic segmentation with click-level supervision.
The rationale of seminar learning is to leverage the knowledge from different networks to compensate for insufficient information provided in click-level annotations.
Experimental results demonstrate the effectiveness of seminar learning, which achieves new state-of-the-art performance of 72.51% mIoU on Pascal VOC 2012.
- Score: 149.9226057885554
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Annotation burden has become one of the biggest barriers to semantic
segmentation. Approaches based on click-level annotations have therefore
attracted increasing attention due to their superior trade-off between
supervision and annotation cost. In this paper, we propose seminar learning, a
new learning paradigm for semantic segmentation with click-level supervision.
The fundamental rationale of seminar learning is to leverage the knowledge from
different networks to compensate for insufficient information provided in
click-level annotations. Mimicking a seminar, our seminar learning involves a
teacher-student and a student-student module, where a student can learn from
both skillful teachers and other students. The teacher-student module uses a
teacher network based on the exponential moving average to guide the training
of the student network. In the student-student module, heterogeneous
pseudo-labels are proposed to bridge the transfer of knowledge among students
to enhance each other's performance. Experimental results demonstrate the
effectiveness of seminar learning, which achieves the new state-of-the-art
performance of 72.51% (mIoU), surpassing previous methods by a large margin of
up to 16.88% on the Pascal VOC 2012 dataset.
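The abstract names two concrete mechanisms: an exponential-moving-average (EMA) teacher guiding each student, and peer students exchanging pseudo-labels. Below is a minimal PyTorch-style sketch of how one such training step could look. All names (`ema_update`, `seminar_step`), the loss weighting, and the peer pairing are illustrative assumptions, and the paper's "heterogeneous pseudo-labels" are simplified here to plain argmax predictions; this is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, decay=0.999):
    # Teacher weights track an exponential moving average of the student's.
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(decay).add_(s, alpha=1.0 - decay)

def seminar_step(students, teachers, images, click_mask, click_labels):
    """One hypothetical step for a cohort of peer students.

    click_mask: bool tensor [B, H, W] marking the few clicked pixels;
    click_labels: their class ids. All other pixels are unlabeled.
    """
    logits = [s(images) for s in students]              # each [B, C, H, W]
    losses = []
    for i, out in enumerate(logits):
        # 1) Sparse supervision from the click annotations only.
        loss = F.cross_entropy(out.permute(0, 2, 3, 1)[click_mask],
                               click_labels)
        # 2) Teacher-student: the EMA teacher guides its own student.
        with torch.no_grad():
            t_pseudo = teachers[i](images).argmax(1)    # [B, H, W]
        loss = loss + F.cross_entropy(out, t_pseudo)
        # 3) Student-student: pseudo-labels from a peer network
        #    (a simplification of the heterogeneous pseudo-labels).
        j = (i + 1) % len(students)
        loss = loss + F.cross_entropy(out, logits[j].detach().argmax(1))
        losses.append(loss)
    return losses
```

In practice such methods typically also weight the unsupervised terms and filter low-confidence pseudo-labeled pixels; those details are omitted here.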
Related papers
- Exploiting Minority Pseudo-Labels for Semi-Supervised Semantic Segmentation in Autonomous Driving [2.638145329894673]
We propose a professional training module to enhance minority class learning and a general training module to learn more comprehensive semantic information.
In experiments, our framework demonstrates superior performance compared to state-of-the-art methods on benchmark datasets.
arXiv Detail & Related papers (2024-09-19T11:47:25Z)
- I2CKD: Intra- and Inter-Class Knowledge Distillation for Semantic Segmentation [1.433758865948252]
This paper proposes a new knowledge distillation method tailored for image semantic segmentation, termed Intra- and Inter-Class Knowledge Distillation (I2CKD).
The method focuses on capturing and transferring knowledge between the intermediate layers of the teacher (cumbersome model) and the student (compact model).
arXiv Detail & Related papers (2024-03-27T12:05:22Z)
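The summary above names intermediate-layer transfer but gives no formula. As a generic illustration of intermediate-layer distillation (not I2CKD's specific intra-/inter-class losses), one can align projected student feature maps to the teacher's; the projection and loss choice here are assumptions:

```python
import torch
import torch.nn.functional as F

def intermediate_kd_loss(student_feats, teacher_feats, projections):
    """Generic intermediate-layer distillation (illustrative, not I2CKD itself).

    student_feats / teacher_feats: lists of [B, C_s, H, W] / [B, C_t, H, W];
    projections: 1x1 convs mapping student channels to teacher channels.
    """
    loss = 0.0
    for s, t, proj in zip(student_feats, teacher_feats, projections):
        s = proj(s)                                    # match channel count
        if s.shape[-2:] != t.shape[-2:]:               # match spatial size
            s = F.interpolate(s, size=t.shape[-2:], mode="bilinear",
                              align_corners=False)
        loss = loss + F.mse_loss(s, t.detach())        # teacher is frozen
    return loss
```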
- Competitive Ensembling Teacher-Student Framework for Semi-Supervised Left Atrium MRI Segmentation [8.338801567668233]
Semi-supervised learning has greatly advanced medical image segmentation, since it effectively alleviates the need to acquire abundant annotations from experts.
In this paper, we present a simple yet efficient competitive ensembling teacher-student framework for semi-supervised left atrium segmentation from 3D MR images.
arXiv Detail & Related papers (2023-10-21T09:23:34Z)
- Semi-Supervised Semantic Segmentation via Gentle Teaching Assistant [72.4512562104361]
We argue that the unlabeled data with pseudo labels can facilitate the learning of representative features in the feature extractor.
Motivated by this consideration, we propose a novel framework, Gentle Teaching Assistant (GTA-Seg), to disentangle the effects of pseudo labels on the feature extractor and the mask predictor.
arXiv Detail & Related papers (2023-01-18T07:11:24Z)
- Semi-Supervised Semantic Segmentation with Cross Teacher Training [14.015346488847902]
This work proposes a cross-teacher training framework with three modules that significantly improves traditional semi-supervised learning approaches.
The core is a cross-teacher module, which could simultaneously reduce the coupling among peer networks and the error accumulation between teacher and student networks.
The high-level module can transfer high-quality knowledge from labeled data to unlabeled ones and promote separation between classes in feature space.
The low-level module encourages low-quality features to learn from the high-quality features among peer networks.
arXiv Detail & Related papers (2022-09-03T05:02:03Z)
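Taking the module description at face value, the cross-teacher idea can be sketched as each student learning from the EMA teacher of the *other* branch rather than its own; the pseudo-label form and pairing below are assumptions, not the paper's code:

```python
import torch
import torch.nn.functional as F

def cross_teacher_loss(student_a, student_b, teacher_a, teacher_b, unlabeled):
    """Cross-teacher consistency on unlabeled images (a sketch of the stated idea).

    Supervising each student with the peer branch's teacher reduces the
    coupling between a student and its own teacher.
    """
    with torch.no_grad():
        pseudo_for_a = teacher_b(unlabeled).argmax(1)   # peer teacher labels A
        pseudo_for_b = teacher_a(unlabeled).argmax(1)   # peer teacher labels B
    loss_a = F.cross_entropy(student_a(unlabeled), pseudo_for_a)
    loss_b = F.cross_entropy(student_b(unlabeled), pseudo_for_b)
    return loss_a + loss_b
```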
- Self-Supervised Learning for Speech Recognition with Intermediate Layer Supervision [52.93758711230248]
We propose Intermediate Layer Supervision for Self-Supervised Learning (ILS-SSL).
ILS-SSL forces the model to concentrate on content information as much as possible by adding an additional SSL loss on the intermediate layers.
Experiments on LibriSpeech test-other set show that our method outperforms HuBERT significantly.
arXiv Detail & Related papers (2021-12-16T10:45:05Z)
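A schematic reading of the stated idea, applying the same SSL objective at an intermediate layer in addition to the top layer, might look as follows; the layer index, the shared prediction head, and the weighting are all assumptions:

```python
import torch.nn.functional as F

def ils_ssl_loss(hidden_states, head, targets, mid_layer=6, alpha=1.0):
    """Schematic ILS-SSL objective (illustrative, not the paper's exact setup).

    hidden_states: list of per-layer features, assumed already gathered at the
    masked positions, each of shape [num_masked, dim]; `head` maps features to
    class logits and is assumed to be shared across layers.
    """
    loss_top = F.cross_entropy(head(hidden_states[-1]), targets)
    loss_mid = F.cross_entropy(head(hidden_states[mid_layer]), targets)
    return loss_top + alpha * loss_mid   # extra SSL loss on the middle layer
```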
- Augmenting Knowledge Distillation With Peer-To-Peer Mutual Learning For Model Compression [2.538209532048867]
Mutual Learning (ML) provides an alternative strategy where multiple simple student networks benefit from sharing knowledge.
We propose a single-teacher, multi-student framework that leverages both KD and ML to achieve better performance.
arXiv Detail & Related papers (2021-10-21T09:59:31Z)
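The stated combination of KD and ML can be sketched as one distillation term from the single teacher plus pairwise KL terms among the students; the temperature and equal weighting below are assumptions:

```python
import torch
import torch.nn.functional as F

def kd_ml_loss(student_logits, teacher_logits, labels, T=4.0):
    """Single-teacher, multi-student KD + mutual learning (a sketch).

    student_logits: list of [B, C] logits from the student cohort;
    teacher_logits: [B, C] logits from the (frozen) teacher.
    """
    t_soft = F.softmax(teacher_logits / T, dim=1)
    losses = []
    for i, s in enumerate(student_logits):
        loss = F.cross_entropy(s, labels)                       # ground truth
        loss += F.kl_div(F.log_softmax(s / T, dim=1), t_soft,   # KD term
                         reduction="batchmean") * T * T
        for j, peer in enumerate(student_logits):               # ML terms
            if j != i:
                loss += F.kl_div(F.log_softmax(s / T, dim=1),
                                 F.softmax(peer.detach() / T, dim=1),
                                 reduction="batchmean") * T * T
        losses.append(loss)
    return losses
```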
- Motivating Learners in Multi-Orchestrator Mobile Edge Learning: A Stackelberg Game Approach [54.28419430315478]
Mobile Edge Learning (MEL) enables distributed training of Machine Learning models over heterogeneous edge devices.
In MEL, the training performance deteriorates without the availability of sufficient training data or computing resources.
We propose an incentive mechanism, where we formulate the orchestrators-learners interactions as a 2-round Stackelberg game.
arXiv Detail & Related papers (2021-09-25T17:27:48Z)
- Deep Semi-supervised Knowledge Distillation for Overlapping Cervical Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data for instance segmentation with improved accuracy by knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method improves the performance significantly compared with the supervised method learned from labeled data only.
arXiv Detail & Related papers (2020-07-21T13:27:09Z)
- Interactive Knowledge Distillation [79.12866404907506]
We propose an InterActive Knowledge Distillation scheme to leverage the interactive teaching strategy for efficient knowledge distillation.
In the distillation process, the interaction between teacher and student networks is implemented by a swapping-in operation.
Experiments with typical settings of teacher-student networks demonstrate that the student networks trained by our IAKD achieve better performance than those trained by conventional knowledge distillation methods.
arXiv Detail & Related papers (2020-07-03T03:22:04Z)
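The "swapping-in operation" above is described but not specified. One plausible reading, randomly routing activations through the teacher's block instead of the student's during training, is sketched below; the class name, the block pairing, and the swap probability are all illustrative assumptions:

```python
import random
import torch.nn as nn

class SwapInNet(nn.Module):
    """Schematic swapping-in operation (an illustration, not the paper's code).

    Assumes student and teacher blocks at each stage have compatible
    input/output shapes. Teacher blocks are frozen and deliberately kept in a
    plain list so they are excluded from this module's parameters.
    """
    def __init__(self, student_blocks, teacher_blocks, swap_prob=0.3):
        super().__init__()
        self.student_blocks = nn.ModuleList(student_blocks)
        self.teacher_blocks = list(teacher_blocks)
        self.swap_prob = swap_prob

    def forward(self, x):
        for s_block, t_block in zip(self.student_blocks, self.teacher_blocks):
            if self.training and random.random() < self.swap_prob:
                x = t_block(x)   # teacher block "swapped in" for this stage
            else:
                x = s_block(x)
        return x
```

At evaluation time the network runs entirely through the student blocks, so the teacher only shapes the student's intermediate representations during training.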
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.