On the Efficiency of Subclass Knowledge Distillation in Classification Tasks
- URL: http://arxiv.org/abs/2109.05587v1
- Date: Sun, 12 Sep 2021 19:04:44 GMT
- Title: On the Efficiency of Subclass Knowledge Distillation in Classification Tasks
- Authors: Ahmad Sajedi and Konstantinos N. Plataniotis
- Abstract summary: The Subclass Knowledge Distillation (SKD) framework is a process of transferring the subclasses' prediction knowledge from a large teacher model into a smaller student one.
The framework is evaluated on a clinical application, namely colorectal polyp binary classification.
A lightweight, low-complexity student trained with the proposed framework achieves an F1-score of 85.05%, a gain of 2.14% and 1.49% over the same student trained without distillation and with conventional knowledge distillation, respectively.
- Score: 33.1278647424578
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work introduces a novel knowledge distillation framework for
classification tasks where information on existing subclasses is available and
taken into consideration. In classification tasks with a small number of
classes or binary detection (two classes), the amount of information transferred
from the teacher to the student network is restricted, thus limiting the
utility of knowledge distillation. Performance can be improved by leveraging
information about possible subclasses within the available classes in the
classification task. To that end, we propose the so-called Subclass Knowledge
Distillation (SKD) framework, which is the process of transferring the
subclasses' prediction knowledge from a large teacher model into a smaller
student one. Through SKD, additional meaningful information which is not in the
teacher's class logits but exists in subclasses (e.g., similarities inside
classes) will be conveyed to the student and boost its performance.
Mathematically, we measure how many extra bits of information the teacher can
provide to the student via the SKD framework. The developed framework is evaluated
on a clinical application, namely colorectal polyp binary classification. In this
application, clinician-provided annotations are used to define subclasses based
on the annotation label's variability in a curriculum style of learning. A
lightweight, low-complexity student trained with the proposed framework
achieves an F1-score of 85.05%, a gain of 2.14% and 1.49% over the same student
trained without distillation and with conventional knowledge distillation,
respectively. These results show that the extra subclass knowledge (i.e.,
0.4656 label bits per training sample in our experiment) provides more
information about the teacher's generalization, and that SKD can therefore
exploit this additional information to improve the student's performance.
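The abstract does not spell out the training objective, so the sketch below is only one plausible realization of subclass-level distillation: the student gets a subclass head, its softened subclass predictions are matched to the teacher's, and class probabilities are recovered by summing subclass probabilities within each class. The function names, temperature, loss weight, and the conditional-entropy estimate of the extra label bits are illustrative assumptions, not details taken from the paper.

    # Hypothetical sketch of a Subclass Knowledge Distillation (SKD) objective in PyTorch.
    # The head layout, temperature T, weight alpha, and the entropy-based estimate of the
    # "extra label bits" are assumptions for illustration, not the paper's exact method.
    import math
    from collections import Counter

    import torch
    import torch.nn.functional as F

    def skd_loss(student_sub_logits, teacher_sub_logits, class_labels, sub_to_class,
                 T=4.0, alpha=0.5):
        """Soft-target matching on subclass logits plus cross-entropy on class labels.

        student_sub_logits, teacher_sub_logits: (batch, num_subclasses)
        class_labels: (batch,) ground-truth class indices
        sub_to_class: (num_subclasses,) tensor mapping each subclass to its parent class
        """
        # Standard temperature-scaled KD, applied to subclass predictions.
        kd = F.kl_div(
            F.log_softmax(student_sub_logits / T, dim=1),
            F.softmax(teacher_sub_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)

        # Class probabilities are recovered by summing subclass probabilities per class,
        # so the class-level supervision stays unchanged.
        sub_probs = F.softmax(student_sub_logits, dim=1)
        num_classes = int(sub_to_class.max()) + 1
        class_probs = torch.zeros(sub_probs.size(0), num_classes, device=sub_probs.device)
        class_probs.index_add_(1, sub_to_class, sub_probs)
        ce = F.nll_loss(torch.log(class_probs + 1e-12), class_labels)

        return alpha * kd + (1.0 - alpha) * ce

    def extra_label_bits(class_labels, subclass_labels):
        """Conditional entropy H(subclass | class) in bits per training sample --
        one plausible way to quantify the extra information in subclass labels."""
        n = len(class_labels)
        pair_counts = Counter(zip(class_labels, subclass_labels))
        class_counts = Counter(class_labels)
        h = 0.0
        for (c, s), n_cs in pair_counts.items():
            h -= (n_cs / n) * math.log2(n_cs / class_counts[c])
        return h

When every class has exactly one subclass, the KD term reduces to conventional class-level distillation and extra_label_bits returns 0, which is consistent with the paper's point that the subclasses are what carry the additional information (0.4656 bits per sample in their experiment).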
Related papers
- Linear Projections of Teacher Embeddings for Few-Class Distillation [14.99228980898161]
Knowledge Distillation (KD) has emerged as a promising approach for transferring knowledge from a larger, more complex teacher model to a smaller student model.
We introduce a novel method for distilling knowledge from the teacher's model representations, which we term Learning Embedding Linear Projections (LELP)
Our experimental evaluation on large-scale NLP benchmarks like Amazon Reviews and Sentiment140 demonstrates that LELP is consistently competitive with, and typically superior to, existing state-of-the-art distillation algorithms for binary and few-class problems.
arXiv Detail & Related papers (2024-09-30T16:07:34Z) - I2CKD: Intra- and Inter-Class Knowledge Distillation for Semantic Segmentation [1.433758865948252]
This paper proposes a new knowledge distillation method tailored for image semantic segmentation, termed Intra- and Inter-Class Knowledge Distillation (I2CKD)
The focus of this method is on capturing and transferring knowledge between the intermediate layers of teacher (cumbersome model) and student (compact model)
arXiv Detail & Related papers (2024-03-27T12:05:22Z) - Enhancing Visual Continual Learning with Language-Guided Supervision [76.38481740848434]
Continual learning aims to empower models to learn new tasks without forgetting previously acquired knowledge.
We argue that the scarce semantic information conveyed by the one-hot labels hampers the effective knowledge transfer across tasks.
Specifically, we use PLMs to generate semantic targets for each class, which are frozen and serve as supervision signals.
arXiv Detail & Related papers (2024-03-24T12:41:58Z) - Cyclic-Bootstrap Labeling for Weakly Supervised Object Detection [134.05510658882278]
Cyclic-Bootstrap Labeling (CBL) is a novel weakly supervised object detection pipeline.
It uses a weighted exponential moving average strategy to take advantage of various refinement modules.
A novel class-specific ranking distillation algorithm is proposed to leverage the output of the weighted ensembled teacher network.
arXiv Detail & Related papers (2023-08-11T07:57:17Z) - Low-complexity deep learning frameworks for acoustic scene classification using teacher-student scheme and multiple spectrograms [59.86658316440461]
The proposed system comprises two main phases: (Phase I) Training a teacher network; and (Phase II) training a student network using distilled knowledge from the teacher.
Our experiments, conducted on the DCASE 2023 Task 1 Development dataset, fulfill the low-complexity requirement and achieve a best classification accuracy of 57.4%.
arXiv Detail & Related papers (2023-05-16T14:21:45Z) - Active Teacher for Semi-Supervised Object Detection [80.10937030195228]
We propose a novel algorithm called Active Teacher for semi-supervised object detection (SSOD)
Active Teacher extends the teacher-student framework to an iterative version, where the label set is partially and gradually augmented by evaluating three key factors of unlabeled examples.
With this design, Active Teacher can maximize the effect of limited label information while improving the quality of pseudo-labels.
arXiv Detail & Related papers (2023-03-15T03:59:27Z) - Subclass Knowledge Distillation with Known Subclass Labels [28.182027210008656]
Subclass Knowledge Distillation (SKD) is a process of transferring the knowledge of predicted subclasses from a teacher to a smaller student.
A lightweight, low-complexity student trained with the SKD framework achieves an F1-score of 85.05%, a gain of 1.47% and 2.10% over the student trained with and without conventional knowledge distillation, respectively.
arXiv Detail & Related papers (2022-07-17T03:14:05Z) - Knowledge Distillation Meets Open-Set Semi-Supervised Learning [69.21139647218456]
We propose a novel method dedicated to distilling representational knowledge semantically from a pretrained teacher to a target student.
At the problem level, this establishes an interesting connection between knowledge distillation and open-set semi-supervised learning (SSL).
Our method significantly outperforms previous state-of-the-art knowledge distillation methods on both coarse object classification and fine face recognition tasks.
arXiv Detail & Related papers (2022-05-13T15:15:27Z) - Multi-Teacher Knowledge Distillation for Incremental Implicitly-Refined Classification [37.14755431285735]
We propose a novel Multi-Teacher Knowledge Distillation (MTKD) strategy for incremental learning.
To preserve the superclass knowledge, we use the initial model as a superclass teacher to distill the superclass knowledge for the student model.
We propose a post-processing mechanism, called Top-k prediction restriction, to reduce redundant predictions.
arXiv Detail & Related papers (2022-02-23T09:51:40Z) - Subclass Distillation [94.18870689772544]
We show that it is possible to transfer most of the generalization ability of a teacher to a student.
For datasets where there are known, natural subclasses we demonstrate that the teacher learns similar subclasses.
For clickthrough datasets where the subclasses are unknown we demonstrate that subclass distillation allows the student to learn faster and better.
arXiv Detail & Related papers (2020-02-10T16:45:30Z)