Semi-supervised Semantic Segmentation with Mutual Knowledge Distillation
- URL: http://arxiv.org/abs/2208.11499v3
- Date: Wed, 30 Aug 2023 06:57:57 GMT
- Title: Semi-supervised Semantic Segmentation with Mutual Knowledge Distillation
- Authors: Jianlong Yuan, Jinchao Ge, Zhibin Wang, Yifan Liu
- Abstract summary: We propose a new consistency regularization framework, termed mutual knowledge distillation (MKD)
We use the pseudo-labels generated by a mean teacher to supervise the student network to achieve a mutual knowledge distillation between the two branches.
Our framework outperforms previous state-of-the-art (SOTA) methods under various semi-supervised settings.
- Score: 20.741353967123366
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Consistency regularization has been widely studied in recent semi-supervised
semantic segmentation methods, and promising performance has been achieved. In
this work, we propose a new consistency regularization framework, termed mutual
knowledge distillation (MKD), combined with data and feature augmentation. We
introduce two auxiliary mean-teacher models based on consistency
regularization. More specifically, we use the pseudo-labels generated by a mean
teacher to supervise the student network to achieve a mutual knowledge
distillation between the two branches. In addition to using image-level strong
and weak augmentation, we also discuss feature augmentation. This involves
considering various sources of knowledge to distill the student network. Thus,
we can significantly increase the diversity of the training samples.
Experiments on public benchmarks show that our framework outperforms previous
state-of-the-art (SOTA) methods under various semi-supervised settings. Code is
available at semi-mmseg.
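The abstract describes the training scheme only at a high level, so the following is a minimal, hedged sketch of how cross-branch mean-teacher supervision of this kind could be wired up in PyTorch. The toy network, confidence threshold, EMA decay, and augmentation handling are illustrative assumptions, not the authors' released semi-mmseg implementation.

# Illustrative sketch of mutual knowledge distillation between two mean-teacher
# branches; all hyperparameters and the tiny network are assumptions.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_segmenter(num_classes=21):
    # Placeholder segmentation head; a real setup would use a full backbone.
    return nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(32, num_classes, 1))

# Two student branches, each paired with a frozen EMA mean teacher.
students = [make_segmenter(), make_segmenter()]
teachers = [copy.deepcopy(s) for s in students]
for t in teachers:
    for p in t.parameters():
        p.requires_grad_(False)

opt = torch.optim.SGD([p for s in students for p in s.parameters()], lr=1e-3)

@torch.no_grad()
def ema_update(teacher, student, decay=0.99):
    # Mean-teacher update: exponential moving average of the student weights.
    for tp, sp in zip(teacher.parameters(), student.parameters()):
        tp.mul_(decay).add_(sp, alpha=1 - decay)

def train_step(x_weak, x_strong, conf_thresh=0.95):
    # x_weak / x_strong: weakly and strongly augmented views of the same
    # unlabeled batch (the augmentation choices here are assumptions).
    opt.zero_grad()
    loss = 0.0
    for i, student in enumerate(students):
        other_teacher = teachers[1 - i]          # cross-branch supervision
        with torch.no_grad():
            probs = other_teacher(x_weak).softmax(dim=1)
            conf, pseudo = probs.max(dim=1)      # pseudo-labels from the other mean teacher
        logits = student(x_strong)
        ce = F.cross_entropy(logits, pseudo, reduction="none")
        loss = loss + (ce * (conf > conf_thresh)).mean()  # mask low-confidence pixels
    loss.backward()
    opt.step()
    for t, s in zip(teachers, students):
        ema_update(t, s)
    return loss.item()

# Example: one step on random data (the supervised loss on labeled data is omitted).
x = torch.randn(2, 3, 64, 64)
print(train_step(x_weak=x, x_strong=x + 0.1 * torch.randn_like(x)))

A full semi-supervised setup would add the ordinary supervised cross-entropy term on the labeled subset and the feature-level augmentation discussed in the abstract.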
Related papers
- On Distilling the Displacement Knowledge for Few-Shot Class-Incremental Learning [17.819582979803286]
Few-shot Class-Incremental Learning (FSCIL) addresses the challenges of evolving data distributions and the difficulty of data acquisition in real-world scenarios.
To counteract the catastrophic forgetting typically encountered in FSCIL, knowledge distillation is employed to maintain knowledge of the previously learned data distribution.
arXiv Detail & Related papers (2024-12-15T02:10:18Z)
- PMT: Progressive Mean Teacher via Exploring Temporal Consistency for Semi-Supervised Medical Image Segmentation [51.509573838103854]
We propose a semi-supervised learning framework, termed Progressive Mean Teachers (PMT), for medical image segmentation.
Our PMT generates high-fidelity pseudo labels by learning robust and diverse features in the training process.
Experimental results on two datasets with different modalities, i.e., CT and MRI, demonstrate that our method outperforms the state-of-the-art medical image segmentation approaches.
arXiv Detail & Related papers (2024-09-08T15:02:25Z)
- Knowledge Distillation Meets Open-Set Semi-Supervised Learning [69.21139647218456]
We propose a novel method dedicated to distilling representational knowledge semantically from a pretrained teacher to a target student.
At the problem level, this establishes an interesting connection between knowledge distillation and open-set semi-supervised learning (SSL).
Our method significantly outperforms previous state-of-the-art knowledge distillation methods on both coarse object classification and fine face recognition tasks.
arXiv Detail & Related papers (2022-05-13T15:15:27Z)
- Weakly Supervised Semantic Segmentation via Alternative Self-Dual Teaching [82.71578668091914]
This paper establishes a compact learning framework that embeds the classification and mask-refinement components into a unified deep model.
We propose a novel alternative self-dual teaching (ASDT) mechanism to encourage high-quality knowledge interaction.
arXiv Detail & Related papers (2021-12-17T11:56:56Z)
- Deep Semi-supervised Knowledge Distillation for Overlapping Cervical Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data for instance segmentation with improved accuracy by knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method improves the performance significantly compared with the supervised method learned from labeled data only.
arXiv Detail & Related papers (2020-07-21T13:27:09Z)
- ClassMix: Segmentation-Based Data Augmentation for Semi-Supervised Learning [4.205692673448206]
We propose a novel data augmentation mechanism called ClassMix, which generates augmentations by mixing unlabelled samples.
We evaluate this augmentation technique on two common semi-supervised semantic segmentation benchmarks, showing that it attains state-of-the-art results.
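The summary above only names the mixing idea. As a rough, hedged illustration, ClassMix-style augmentation is commonly described as pasting roughly half of the classes predicted in one unlabelled image onto another unlabelled image; the helper below is an assumption-laden sketch of that step, not the paper's code.

# Hedged sketch of ClassMix-style mixing (shapes and helper name are assumptions).
import torch

def classmix(img_a, img_b, pred_a):
    # img_a, img_b: (3, H, W) unlabelled images; pred_a: (H, W) argmax prediction for img_a.
    classes = pred_a.unique()
    # Randomly pick roughly half of the classes present in img_a's prediction.
    keep = classes[torch.randperm(len(classes))[: max(1, len(classes) // 2)]]
    mask = torch.isin(pred_a, keep).unsqueeze(0)   # (1, H, W) binary paste mask
    mixed_img = torch.where(mask, img_a, img_b)    # paste selected classes onto img_b
    return mixed_img, mask.squeeze(0)

# Example usage with random data:
img_a, img_b = torch.rand(3, 64, 64), torch.rand(3, 64, 64)
mixed, mask = classmix(img_a, img_b, pred_a=torch.randint(0, 21, (64, 64)))

The corresponding pseudo-label map would be mixed with the same mask, so that pasted pixels keep the labels predicted for img_a.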
arXiv Detail & Related papers (2020-07-15T18:21:17Z)
- Creating Something from Nothing: Unsupervised Knowledge Distillation for Cross-Modal Hashing [132.22315429623575]
Cross-modal hashing (CMH) can map contents from different modalities, especially in vision and language, into the same space.
There are two main frameworks for CMH, differing from each other in whether semantic supervision is required.
In this paper, we propose a novel approach that enables guiding a supervised method using outputs produced by an unsupervised method.
arXiv Detail & Related papers (2020-04-01T08:32:15Z)
- Learning From Multiple Experts: Self-paced Knowledge Distillation for Long-tailed Classification [106.08067870620218]
We propose a self-paced knowledge distillation framework, termed Learning From Multiple Experts (LFME).
We refer to these models as 'Experts', and the proposed LFME framework aggregates the knowledge from multiple 'Experts' to learn a unified student model.
We conduct extensive experiments and demonstrate that our method achieves superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-01-06T12:57:36Z)