BD-KD: Balancing the Divergences for Online Knowledge Distillation
- URL: http://arxiv.org/abs/2212.12965v1
- Date: Sun, 25 Dec 2022 22:27:32 GMT
- Title: BD-KD: Balancing the Divergences for Online Knowledge Distillation
- Authors: Ibtihel Amara, Nazanin Sepahvand, Brett H. Meyer, Warren J. Gross and
James J. Clark
- Abstract summary: We propose BD-KD: Balancing of Divergences for online Knowledge Distillation.
We show that adaptively balancing between the reverse and forward divergences shifts the focus of the training strategy to the compact student network.
We demonstrate that, by performing this balancing design at the level of the student distillation loss, we improve upon both performance accuracy and calibration of the compact student network.
- Score: 12.27903419909491
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowledge distillation (KD) has gained a lot of attention in the field of
model compression for edge devices thanks to its effectiveness in compressing
large powerful networks into smaller lower-capacity models. Online
distillation, in which both the teacher and the student are learning
collaboratively, has also gained much interest due to its ability to improve on
the performance of the networks involved. The Kullback-Leibler (KL) divergence
ensures the proper knowledge transfer between the teacher and student. However,
most online KD techniques present some bottlenecks under the network capacity
gap. When the models are trained cooperatively and simultaneously, the KL
distance becomes incapable of properly minimizing the discrepancy between the
teacher's and student's distributions. Alongside accuracy, critical edge
device applications are in
need of well-calibrated compact networks. Confidence calibration provides a
sensible way of getting trustworthy predictions. We propose BD-KD: Balancing of
Divergences for online Knowledge Distillation. We show that adaptively
balancing between the reverse and forward divergences shifts the focus of the
training strategy to the compact student network without limiting the teacher
network's learning process. We demonstrate that, by performing this balancing
design at the level of the student distillation loss, we improve upon both
performance accuracy and calibration of the compact student network. We
conducted extensive experiments using a variety of network architectures and
show improvements on multiple datasets including CIFAR-10, CIFAR-100,
Tiny-ImageNet, and ImageNet. We illustrate the effectiveness of our approach
through comprehensive comparisons and ablations with current state-of-the-art
online and offline KD techniques.
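The core idea described in the abstract, weighting forward and reverse KL terms in the student's distillation loss, can be illustrated with a short sketch. The PyTorch snippet below is a minimal illustration under stated assumptions, not the paper's exact formulation: `beta` is a hypothetical fixed balancing coefficient and `T` a temperature, whereas BD-KD balances the two divergences adaptively as defined in the paper.
```python
# Minimal sketch of a balanced forward/reverse KL distillation loss in PyTorch.
# Illustration only: the adaptive balancing rule used by BD-KD is defined in
# the paper; `beta` and `T` here are hypothetical fixed values.
import torch
import torch.nn.functional as F

def balanced_kd_loss(student_logits, teacher_logits, beta=0.5, T=4.0):
    """Weighted sum of forward KL(teacher || student) and reverse
    KL(student || teacher) on temperature-softened distributions."""
    log_p_s = F.log_softmax(student_logits / T, dim=-1)  # student log-probs
    log_p_t = F.log_softmax(teacher_logits / T, dim=-1)  # teacher log-probs

    # Forward KL: the teacher distribution is the target (mode-covering).
    fwd = F.kl_div(log_p_s, log_p_t, reduction="batchmean", log_target=True)
    # Reverse KL: the student distribution is the target (mode-seeking).
    rev = F.kl_div(log_p_t, log_p_s, reduction="batchmean", log_target=True)

    # T**2 rescaling keeps gradients comparable to the hard-label loss,
    # as in standard temperature-scaled distillation.
    return (T ** 2) * (beta * fwd + (1.0 - beta) * rev)

# Example: a batch of 8 samples with 100 classes (e.g., CIFAR-100).
s = torch.randn(8, 100)
t = torch.randn(8, 100)
loss = balanced_kd_loss(s, t, beta=0.7)
```
Forward KL pushes the student to cover all of the teacher's probability mass, while reverse KL concentrates it on the teacher's dominant modes; the abstract's claim is that adapting this trade-off inside the student's loss focuses training on the compact student without constraining the teacher's own learning.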
Related papers
- Exploring and Enhancing the Transfer of Distribution in Knowledge Distillation for Autoregressive Language Models [62.5501109475725]
Knowledge distillation (KD) is a technique that compresses large teacher models by training smaller student models to mimic them.
This paper introduces Online Knowledge Distillation (OKD), where the teacher network integrates small online modules to concurrently train with the student model.
OKD achieves or exceeds the performance of leading methods in various model architectures and sizes, reducing training time by up to fourfold.
arXiv Detail & Related papers (2024-09-19T07:05:26Z)
- Invariant Causal Knowledge Distillation in Neural Networks [6.24302896438145]
In this paper, we introduce Invariant Consistency Distillation (ICD), a novel methodology designed to enhance knowledge distillation.
ICD ensures that the student model's representations are both discriminative and invariant with respect to the teacher's outputs.
Our results on CIFAR-100 and ImageNet ILSVRC-2012 show that ICD outperforms traditional KD techniques and surpasses state-of-the-art methods.
arXiv Detail & Related papers (2024-07-16T14:53:35Z)
- Adaptive Teaching with Shared Classifier for Knowledge Distillation [6.03477652126575]
Knowledge distillation (KD) is a technique used to transfer knowledge from a teacher network to a student network.
We propose adaptive teaching with a shared classifier (ATSC).
Our approach achieves state-of-the-art results on the CIFAR-100 and ImageNet datasets in both single-teacher and multi-teacher scenarios.
arXiv Detail & Related papers (2024-06-12T08:51:08Z)
- Robustness-Reinforced Knowledge Distillation with Correlation Distance and Network Pruning [3.1423836318272773]
Knowledge distillation (KD) improves the performance of efficient and lightweight models.
Most existing KD techniques rely on Kullback-Leibler (KL) divergence.
We propose a Robustness-Reinforced Knowledge Distillation (R2KD) that leverages correlation distance and network pruning.
arXiv Detail & Related papers (2023-11-23T11:34:48Z)
- Feature-domain Adaptive Contrastive Distillation for Efficient Single Image Super-Resolution [3.2453621806729234]
CNN-based SISR methods require numerous parameters and high computational cost to achieve better performance.
Knowledge Distillation (KD) transfers the teacher's useful knowledge to the student.
We propose a feature-domain adaptive contrastive distillation (FACD) method for efficiently training lightweight student SISR networks.
arXiv Detail & Related papers (2022-11-29T06:24:14Z)
- CES-KD: Curriculum-based Expert Selection for Guided Knowledge Distillation [4.182345120164705]
This paper proposes a new technique called Curriculum Expert Selection for Knowledge Distillation (CES-KD).
CES-KD is built upon the hypothesis that a student network should be guided gradually using a stratified teaching curriculum.
Specifically, our method is a gradual TA-based KD technique that selects a single teacher per input image based on a curriculum driven by the difficulty in classifying the image.
arXiv Detail & Related papers (2022-09-15T21:02:57Z)
- Parameter-Efficient and Student-Friendly Knowledge Distillation [83.56365548607863]
We present a parameter-efficient and student-friendly knowledge distillation method, namely PESF-KD, to achieve efficient and sufficient knowledge transfer.
Experiments on a variety of benchmarks show that PESF-KD can significantly reduce the training cost while obtaining competitive results compared to advanced online distillation methods.
arXiv Detail & Related papers (2022-05-28T16:11:49Z)
- How and When Adversarial Robustness Transfers in Knowledge Distillation? [137.11016173468457]
This paper studies how and when adversarial robustness can be transferred from a teacher model to a student model in knowledge distillation (KD).
We show that standard KD training fails to preserve adversarial robustness, and we propose KD with input gradient alignment (KDIGA) as a remedy.
Under certain assumptions, we prove that the student model using our proposed KDIGA can achieve at least the same certified robustness as the teacher model.
arXiv Detail & Related papers (2021-10-22T21:30:53Z)
- MixKD: Towards Efficient Distillation of Large-scale Language Models [129.73786264834894]
We propose MixKD, a data-agnostic distillation framework, to endow the resulting model with stronger generalization ability.
We prove from a theoretical perspective that under reasonable conditions MixKD gives rise to a smaller gap between the generalization error and the empirical error.
Experiments under a limited-data setting and ablation studies further demonstrate the advantages of the proposed approach.
arXiv Detail & Related papers (2020-11-01T18:47:51Z)
- Heterogeneous Knowledge Distillation using Information Flow Modeling [82.83891707250926]
We propose a novel KD method that works by modeling the information flow through the various layers of the teacher model.
The proposed method is capable of overcoming the aforementioned limitations by using an appropriate supervision scheme during the different phases of the training process.
arXiv Detail & Related papers (2020-05-02T06:56:56Z)
- Efficient Crowd Counting via Structured Knowledge Transfer [122.30417437707759]
Crowd counting is an application-oriented task and its inference efficiency is crucial for real-world applications.
We propose a novel Structured Knowledge Transfer framework to generate a lightweight but still highly effective student network.
Our models obtain at least 6.5x speed-up on an Nvidia 1080 GPU and even achieve state-of-the-art performance.
arXiv Detail & Related papers (2020-03-23T08:05:41Z)