Subclass Knowledge Distillation with Known Subclass Labels
- URL: http://arxiv.org/abs/2207.08063v1
- Date: Sun, 17 Jul 2022 03:14:05 GMT
- Title: Subclass Knowledge Distillation with Known Subclass Labels
- Authors: Ahmad Sajedi, Yuri A. Lawryshyn, Konstantinos N. Plataniotis
- Abstract summary: Subclass Knowledge Distillation (SKD) is a process of transferring the knowledge of predicted subclasses from a teacher to a smaller student.
A lightweight, low-complexity student trained with the SKD framework achieves an F1-score of 85.05%, an improvement of 1.47% and 2.10% over the student trained with and without conventional knowledge distillation, respectively.
- Score: 28.182027210008656
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work introduces a novel knowledge distillation framework for
classification tasks where information on existing subclasses is available and
taken into consideration. In classification tasks with a small number of
classes or binary detection, the amount of information transferred from the
teacher to the student is restricted, thus limiting the utility of knowledge
distillation. Performance can be improved by leveraging information about possible
subclasses within the classes. To that end, we propose the so-called Subclass
Knowledge Distillation (SKD), a process of transferring the knowledge of
predicted subclasses from a teacher to a smaller student. Meaningful
information that is not in the teacher's class logits but exists in subclass
logits (e.g., similarities within classes) will be conveyed to the student
through the SKD, which will then boost the student's performance. Analytically,
we measure how much extra information the teacher can provide the student via
the SKD to demonstrate the efficacy of our work. The developed framework is
evaluated in a clinical application, namely colorectal polyp binary
classification. It is a practical problem with two classes and a number of
subclasses per class. In this application, clinician-provided annotations are
used to define subclasses based on the annotation label's variability in a
curriculum style of learning. A lightweight, low-complexity student trained
with the SKD framework achieves an F1-score of 85.05%, an improvement of 1.47%
and 2.10% over the student trained with and without conventional
knowledge distillation, respectively. The 2.10% F1-score gap between students
trained with and without the SKD can be explained by the extra subclass
knowledge, i.e., the extra 0.4656 label bits per sample that the teacher can
transfer in our experiment.
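To make the transfer mechanism concrete, the following is a minimal PyTorch sketch of a subclass-level distillation loss together with a conditional-entropy estimate of the extra subclass information. It is written under stated assumptions rather than as the paper's exact formulation: the function names (skd_loss, extra_label_bits), the subclass_to_class mapping tensor, and the temperature/alpha defaults are illustrative and not taken from the paper.

```python
import torch
import torch.nn.functional as F


def skd_loss(student_subclass_logits, teacher_subclass_logits,
             class_labels, subclass_to_class, temperature=4.0, alpha=0.5):
    """Combine a hard class-label loss with soft subclass distillation.

    subclass_to_class: LongTensor of shape (num_subclasses,) giving the parent
    class index of each known subclass (an assumption of this sketch).
    """
    # Soft subclass targets from the teacher, temperature-scaled as in
    # standard knowledge distillation.
    t_soft = F.softmax(teacher_subclass_logits / temperature, dim=1)
    s_log_soft = F.log_softmax(student_subclass_logits / temperature, dim=1)
    distill = F.kl_div(s_log_soft, t_soft, reduction="batchmean") * temperature ** 2

    # Sum the student's subclass probabilities into class probabilities so the
    # available hard class labels still supervise the student.
    s_sub_prob = F.softmax(student_subclass_logits, dim=1)
    num_classes = int(subclass_to_class.max().item()) + 1
    s_class_prob = torch.zeros(
        s_sub_prob.size(0), num_classes, device=s_sub_prob.device
    ).index_add_(1, subclass_to_class, s_sub_prob)
    hard = F.nll_loss(torch.log(s_class_prob + 1e-12), class_labels)

    return alpha * distill + (1.0 - alpha) * hard


def extra_label_bits(class_labels, subclass_labels):
    """Conditional entropy H(subclass | class) in bits per sample, used here
    as an illustrative proxy for the extra label bits quoted in the abstract."""
    bits = torch.tensor(0.0)
    for c in class_labels.unique():
        mask = class_labels == c
        p_c = mask.float().mean()
        counts = torch.bincount(subclass_labels[mask]).float()
        probs = counts[counts > 0] / counts.sum()
        bits = bits + p_c * -(probs * probs.log2()).sum()
    return bits.item()
```

The sketch assumes each known subclass belongs to exactly one class, which is what allows the student's subclass probabilities to be aggregated into class probabilities for the hard-label term; the paper's actual loss weighting and its measure of extra label bits may differ.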
Related papers
- Linear Projections of Teacher Embeddings for Few-Class Distillation [14.99228980898161]
Knowledge Distillation (KD) has emerged as a promising approach for transferring knowledge from a larger, more complex teacher model to a smaller student model.
We introduce a novel method for distilling knowledge from the teacher's model representations, which we term Learning Embedding Linear Projections (LELP).
Our experimental evaluation on large-scale NLP benchmarks like Amazon Reviews and Sentiment140 demonstrates that LELP is consistently competitive with, and typically superior to, existing state-of-the-art distillation algorithms for binary and few-class problems.
arXiv Detail & Related papers (2024-09-30T16:07:34Z)
- Enhancing Visual Continual Learning with Language-Guided Supervision [76.38481740848434]
Continual learning aims to empower models to learn new tasks without forgetting previously acquired knowledge.
We argue that the scarce semantic information conveyed by the one-hot labels hampers the effective knowledge transfer across tasks.
Specifically, we use PLMs to generate semantic targets for each class, which are frozen and serve as supervision signals.
arXiv Detail & Related papers (2024-03-24T12:41:58Z)
- Knowledge Distillation Layer that Lets the Student Decide [6.689381216751284]
We propose a learnable KD layer for the student which improves KD with two distinct abilities:
i) learning how to leverage the teacher's knowledge, enabling it to discard nuisance information, and ii) feeding the transferred knowledge deeper into the network.
arXiv Detail & Related papers (2023-09-06T09:05:03Z)
- Multi-Label Knowledge Distillation [86.03990467785312]
We propose a novel multi-label knowledge distillation method.
On one hand, it exploits the informative semantic knowledge from the logits by dividing the multi-label learning problem into a set of binary classification problems.
On the other hand, it enhances the distinctiveness of the learned feature representations by leveraging the structural information of label-wise embeddings.
arXiv Detail & Related papers (2023-08-12T03:19:08Z)
- Active Teacher for Semi-Supervised Object Detection [80.10937030195228]
We propose a novel algorithm called Active Teacher for semi-supervised object detection (SSOD).
Active Teacher extends the teacher-student framework to an iterative version, where the label set is partially and gradually augmented by evaluating three key factors of unlabeled examples.
With this design, Active Teacher can maximize the effect of limited label information while improving the quality of pseudo-labels.
arXiv Detail & Related papers (2023-03-15T03:59:27Z)
- Knowledge Distillation Meets Open-Set Semi-Supervised Learning [69.21139647218456]
We propose a novel method dedicated to distilling representational knowledge semantically from a pretrained teacher to a target student.
At the problem level, this establishes an interesting connection between knowledge distillation and open-set semi-supervised learning (SSL).
Our method significantly outperforms previous state-of-the-art knowledge distillation methods on both coarse object classification and fine face recognition tasks.
arXiv Detail & Related papers (2022-05-13T15:15:27Z)
- Generalized Knowledge Distillation via Relationship Matching [53.69235109551099]
Knowledge of a well-trained deep neural network (a.k.a. the "teacher") is valuable for learning similar tasks.
Knowledge distillation extracts knowledge from the teacher and integrates it with the target model.
Instead of forcing the teacher to work on the same task as the student, we borrow the knowledge from a teacher trained on a general label space.
arXiv Detail & Related papers (2022-05-04T06:49:47Z)
- Multi-Teacher Knowledge Distillation for Incremental Implicitly-Refined Classification [37.14755431285735]
We propose a novel Multi-Teacher Knowledge Distillation (MTKD) strategy for incremental learning.
To preserve the superclass knowledge, we use the initial model as a superclass teacher to distill the superclass knowledge for the student model.
We propose a post-processing mechanism, called Top-k prediction restriction, to reduce redundant predictions.
arXiv Detail & Related papers (2022-02-23T09:51:40Z)
- On the Efficiency of Subclass Knowledge Distillation in Classification Tasks [33.1278647424578]
The Subclass Knowledge Distillation (SKD) framework is a process of transferring subclass prediction knowledge from a large teacher model to a smaller student.
The framework is evaluated in a clinical application, namely colorectal polyp binary classification.
A lightweight, low-complexity student trained with the proposed framework achieves an F1-score of 85.05%, an improvement of 2.14% and 1.49% over the student trained without and with conventional knowledge distillation, respectively.
arXiv Detail & Related papers (2021-09-12T19:04:44Z)
- Subclass Distillation [94.18870689772544]
We show that it is possible to transfer most of the generalization ability of a teacher to a student.
For datasets where there are known, natural subclasses, we demonstrate that the teacher learns similar subclasses.
For clickthrough datasets where the subclasses are unknown, we demonstrate that subclass distillation allows the student to learn faster and better.
arXiv Detail & Related papers (2020-02-10T16:45:30Z)