Learning from a Lightweight Teacher for Efficient Knowledge Distillation
- URL: http://arxiv.org/abs/2005.09163v1
- Date: Tue, 19 May 2020 01:54:15 GMT
- Title: Learning from a Lightweight Teacher for Efficient Knowledge Distillation
- Authors: Yuang Liu, Wei Zhang, Jun Wang
- Abstract summary: This paper proposes LW-KD, short for lightweight knowledge distillation.
It first trains a lightweight teacher network on a synthesized simple dataset whose number of classes is adjustable to match that of the target dataset.
The teacher then generates soft targets, from which an enhanced KD loss guides student learning; this loss combines the standard KD loss with an adversarial loss that makes the student's output indistinguishable from the teacher's.
- Score: 14.865673786025525
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge Distillation (KD) is an effective framework for compressing deep
learning models, realized by a student-teacher paradigm that requires small student
networks to mimic the soft targets generated by well-trained teachers. However,
the teachers are commonly assumed to be complex and need to be trained on the
same datasets as the students, which leads to a time-consuming training process. A
recent study shows that vanilla KD plays a role similar to label smoothing and
develops teacher-free KD, which is efficient and mitigates the issue of learning
from heavy teachers. But because teacher-free KD relies on manually crafted
output distributions that are kept the same for all data instances belonging to the
same class, its flexibility and performance are relatively limited. To address the
above issues, this paper proposes an efficient knowledge distillation learning
framework, LW-KD, short for lightweight knowledge distillation. It first
trains a lightweight teacher network on a synthesized simple dataset, with an
adjustable class number equal to that of a target dataset. The teacher then
generates soft targets, from which an enhanced KD loss guides student learning;
the loss is a combination of the KD loss and an adversarial loss that makes the
student output indistinguishable from that of the teacher. Experiments on several
public datasets with different modalities demonstrate that LW-KD is effective and
efficient, showing the rationality of its main design principles.
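The abstract describes an enhanced KD loss that pairs the usual temperature-scaled KD term with an adversarial term pushing the student's output distribution to look like the teacher's. The sketch below illustrates that idea in PyTorch-style code; it is not the authors' implementation, and all names and hyperparameters (Discriminator, temperature, lambda_adv) are illustrative assumptions.

```python
# Hedged sketch of a KD-plus-adversarial loss, assuming a simple MLP
# discriminator over class-probability vectors. Not the paper's exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    """Scores whether a class-probability vector came from the teacher."""
    def __init__(self, num_classes, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, probs):
        return self.net(probs)  # raw logit: "teacher-like" score

def enhanced_kd_loss(student_logits, teacher_logits, discriminator,
                     temperature=4.0, lambda_adv=0.1):
    # Standard KD term: KL divergence between temperature-softened outputs.
    t = temperature
    kd = F.kl_div(
        F.log_softmax(student_logits / t, dim=1),
        F.softmax(teacher_logits / t, dim=1),
        reduction="batchmean",
    ) * (t * t)

    # Adversarial term: the student tries to fool the discriminator into
    # labelling its output distribution as coming from the teacher.
    student_probs = F.softmax(student_logits, dim=1)
    adv = F.binary_cross_entropy_with_logits(
        discriminator(student_probs),
        torch.ones(student_probs.size(0), 1, device=student_probs.device),
    )
    return kd + lambda_adv * adv
```

In a full training loop the discriminator would itself be updated in alternation to separate teacher from student outputs, as in standard adversarial training; the weighting lambda_adv is a placeholder, not a value reported by the paper.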