BD-KD: Balancing the Divergences for Online Knowledge Distillation
- URL: http://arxiv.org/abs/2212.12965v1
- Date: Sun, 25 Dec 2022 22:27:32 GMT
- Title: BD-KD: Balancing the Divergences for Online Knowledge Distillation
- Authors: Ibtihel Amara, Nazanin Sepahvand, Brett H. Meyer, Warren J. Gross and
James J. Clark
- Abstract summary: We propose BD-KD: Balancing of Divergences for online Knowledge Distillation.
We show that adaptively balancing between the reverse and forward divergences shifts the focus of the training strategy to the compact student network.
We demonstrate that, by performing this balancing design at the level of the student distillation loss, we improve upon both performance accuracy and calibration of the compact student network.
- Score: 12.27903419909491
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowledge distillation (KD) has gained a lot of attention in the field of
model compression for edge devices thanks to its effectiveness in compressing
large powerful networks into smaller lower-capacity models. Online
distillation, in which both the teacher and the student are learning
collaboratively, has also gained much interest due to its ability to improve on
the performance of the networks involved. The Kullback-Leibler (KL) divergence
ensures the proper knowledge transfer between the teacher and student. However,
most online KD techniques present some bottlenecks under the network capacity
gap. When the models are trained cooperatively and simultaneously, the KL
distance becomes incapable of properly minimizing the discrepancy between the
teacher's and student's distributions. Alongside accuracy, critical edge device
applications are in
need of well-calibrated compact networks. Confidence calibration provides a
sensible way of getting trustworthy predictions. We propose BD-KD: Balancing of
Divergences for online Knowledge Distillation. We show that adaptively
balancing between the reverse and forward divergences shifts the focus of the
training strategy to the compact student network without limiting the teacher
network's learning process. We demonstrate that, by performing this balancing
design at the level of the student distillation loss, we improve upon both
performance accuracy and calibration of the compact student network. We
conducted extensive experiments using a variety of network architectures and
show improvements on multiple datasets including CIFAR-10, CIFAR-100,
Tiny-ImageNet, and ImageNet. We illustrate the effectiveness of our approach
through comprehensive comparisons and ablations with current state-of-the-art
online and offline KD techniques.
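The abstract describes a student distillation loss that balances the forward and reverse KL divergences between teacher and student. Below is a minimal PyTorch sketch of that idea; the fixed weight `beta`, the temperature, and the simple linear mixing are illustrative assumptions, not the paper's exact (adaptive) balancing scheme.
```python
import torch
import torch.nn.functional as F

def balanced_kd_loss(student_logits, teacher_logits, beta=0.5, temperature=4.0):
    """Sketch of a balanced forward/reverse KL distillation loss.

    `beta` weights the forward KL(teacher || student) against the reverse
    KL(student || teacher). The weighting here is a static placeholder; BD-KD
    balances the two terms adaptively.
    """
    t = temperature
    log_p_s = F.log_softmax(student_logits / t, dim=-1)  # log student distribution
    log_p_t = F.log_softmax(teacher_logits / t, dim=-1)  # log teacher distribution

    # Forward KL: teacher distribution is the target.
    forward_kl = F.kl_div(log_p_s, log_p_t, log_target=True, reduction="batchmean")
    # Reverse KL: student distribution is the target.
    reverse_kl = F.kl_div(log_p_t, log_p_s, log_target=True, reduction="batchmean")

    # Temperature-squared scaling follows the usual KD convention.
    return (t * t) * (beta * forward_kl + (1.0 - beta) * reverse_kl)
```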