Pea-KD: Parameter-efficient and Accurate Knowledge Distillation on BERT
- URL: http://arxiv.org/abs/2009.14822v2
- Date: Fri, 11 Dec 2020 09:43:34 GMT
- Title: Pea-KD: Parameter-efficient and Accurate Knowledge Distillation on BERT
- Authors: Ikhyun Cho, U Kang
- Abstract summary: Knowledge Distillation (KD) is one of the widely known methods for model compression.
Pea-KD consists of two main parts: Shuffled Parameter Sharing (SPS) and Pretraining with Teacher's Predictions (PTP).
- Score: 20.732095457775138
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: How can we efficiently compress a model while maintaining its performance?
Knowledge Distillation (KD) is one of the widely known methods for model
compression. In essence, KD trains a smaller student model based on a larger
teacher model and tries to retain the teacher model's level of performance as
much as possible. However, existing KD methods suffer from the following
limitations. First, since the student model is smaller in absolute size, it
inherently lacks model capacity. Second, the absence of an initial guide for
the student model makes it difficult for the student to imitate the teacher
model to its fullest. Conventional KD methods yield low performance due to
these limitations. In this paper, we propose Pea-KD (Parameter-efficient and
accurate Knowledge Distillation), a novel approach to KD. Pea-KD consists of
two main parts: Shuffled Parameter Sharing (SPS) and Pretraining with Teacher's
Predictions (PTP). Using this combination, we are able to alleviate
KD's limitations. SPS is a new parameter sharing method that increases the
student model capacity. PTP is a KD-specialized initialization method, which
can act as a good initial guide for the student. When combined, these methods
yield a significant increase in the student model's performance. Experiments
conducted on BERT with different datasets and tasks show that the proposed
approach improves the student model's performance by 4.4% on average across four
GLUE tasks, outperforming existing KD baselines by significant margins.
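To make the first component more concrete, here is a loose PyTorch-style sketch of the parameter-sharing idea behind SPS. The exact sharing and shuffling pattern is defined in the Pea-KD paper; the sketch only assumes a simple scheme in which a small pool of physical layers is reused at several depth positions (a second pass in shuffled order), so the student gains depth without extra parameters. The class name `SPSStudentEncoder` and the reuse pattern are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class SPSStudentEncoder(nn.Module):
    """Illustrative shuffled-parameter-sharing encoder (not the paper's exact scheme).

    A small pool of physical transformer layers is reused at several depth
    positions, so the forward pass is deeper than the parameter count suggests.
    """

    def __init__(self, layer_pool, reuse_order=None):
        super().__init__()
        self.layer_pool = nn.ModuleList(layer_pool)
        # Assumed reuse pattern: one pass in original order, one in shuffled
        # (here simply reversed) order; the paper specifies its own pattern.
        self.reuse_order = reuse_order or (
            list(range(len(layer_pool))) + list(reversed(range(len(layer_pool))))
        )

    def forward(self, hidden_states):
        for idx in self.reuse_order:
            hidden_states = self.layer_pool[idx](hidden_states)
        return hidden_states

# Usage sketch: 3 stored transformer layers executed as a 6-layer student encoder.
pool = [nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True) for _ in range(3)]
student_encoder = SPSStudentEncoder(pool)
out = student_encoder(torch.randn(2, 16, 768))  # (batch, seq_len, hidden)
```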
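PTP is described as a KD-specialized initialization in which the student is first trained on labels derived from the teacher's predictions. How those PTP labels are constructed is specified in the paper; the sketch below assumes a simplified warm-up in which the student merely matches the teacher's temperature-softened output distribution (the standard soft-target KD loss) before regular distillation and fine-tuning. The function names, the temperature value, and the assumption that both models return raw logits are illustrative.

```python
import torch
import torch.nn.functional as F

def soft_target_loss(student_logits, teacher_logits, temperature=4.0):
    """Temperature-scaled soft-target KD loss (Hinton-style), used here as a
    stand-in warm-up objective; the paper's PTP labels may be built differently."""
    t = temperature
    log_p_student = F.log_softmax(student_logits / t, dim=-1)
    p_teacher = F.softmax(teacher_logits / t, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)

def ptp_warmup(student, teacher, loader, optimizer, temperature=4.0):
    """Assumed PTP-style warm-up: the student only imitates the teacher's
    predictions before the usual fine-tuning / distillation stage."""
    teacher.eval()
    student.train()
    for inputs in loader:                     # inputs: whatever both models accept
        with torch.no_grad():
            teacher_logits = teacher(inputs)  # assumes the model returns raw logits
        student_logits = student(inputs)
        loss = soft_target_loss(student_logits, teacher_logits, temperature)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```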
Related papers
- Speculative Knowledge Distillation: Bridging the Teacher-Student Gap Through Interleaved Sampling [81.00825302340984]
We introduce Speculative Knowledge Distillation (SKD) to generate high-quality training data on-the-fly.
In SKD, the student proposes tokens, and the teacher replaces poorly ranked ones based on its own distribution.
We evaluate SKD on various text generation tasks, including translation, summarization, math, and instruction following.
arXiv Detail & Related papers (2024-10-15T06:51:25Z)
- Exploring and Enhancing the Transfer of Distribution in Knowledge Distillation for Autoregressive Language Models [62.5501109475725]
Knowledge distillation (KD) is a technique that compresses large teacher models by training smaller student models to mimic them.
This paper introduces Online Knowledge Distillation (OKD), where the teacher network integrates small online modules to concurrently train with the student model.
OKD achieves or exceeds the performance of leading methods in various model architectures and sizes, reducing training time by up to fourfold.
arXiv Detail & Related papers (2024-09-19T07:05:26Z)
- Robust Knowledge Distillation Based on Feature Variance Against Backdoored Teacher Model [13.367731896112861]
Knowledge distillation (KD) is one of the widely used compression techniques for edge deployment.
This paper proposes RobustKD, a robust KD that compresses the model while mitigating backdoor based on feature variance.
arXiv Detail & Related papers (2024-06-01T11:25:03Z)
- Revisiting Knowledge Distillation for Autoregressive Language Models [88.80146574509195]
We propose a simple yet effective adaptive teaching approach (ATKD) to improve knowledge distillation (KD).
The core of ATKD is to reduce rote learning and make teaching more diverse and flexible.
Experiments on 8 LM tasks show that, with the help of ATKD, various baseline KD methods can achieve consistent and significant performance gains.
arXiv Detail & Related papers (2024-02-19T07:01:10Z)
- DistiLLM: Towards Streamlined Distillation for Large Language Models [53.46759297929675]
DistiLLM is a more effective and efficient KD framework for auto-regressive language models.
DistiLLM comprises two components: (1) a novel skew Kullback-Leibler divergence loss, where we unveil and leverage its theoretical properties, and (2) an adaptive off-policy approach designed to enhance the efficiency in utilizing student-generated outputs (see the sketch after this list).
arXiv Detail & Related papers (2024-02-06T11:10:35Z)
- Comparative Knowledge Distillation [102.35425896967791]
Traditional Knowledge Distillation (KD) assumes readily available access to teacher models for frequent inference.
We propose Comparative Knowledge Distillation (CKD), which encourages student models to understand the nuanced differences in a teacher model's interpretations of samples.
CKD consistently outperforms state of the art data augmentation and KD techniques.
arXiv Detail & Related papers (2023-11-03T21:55:33Z)
- Knowledge Distillation with Representative Teacher Keys Based on Attention Mechanism for Image Classification Model Compression [1.503974529275767]
Knowledge distillation (KD) has been recognized as one of the effective methods of model compression for reducing model parameters.
Inspired by the attention mechanism, we propose a novel KD method called representative teacher key (RTK).
Our proposed RTK can effectively improve the classification accuracy of the state-of-the-art attention-based KD method.
arXiv Detail & Related papers (2022-06-26T05:08:50Z)
- DisCo: Effective Knowledge Distillation For Contrastive Learning of Sentence Embeddings [36.37939188680754]
We propose an enhanced knowledge distillation framework termed Distill-Contrast (DisCo)
DisCo transfers the capability of a large sentence embedding model to a small student model on large unlabelled data.
We also propose Contrastive Knowledge Distillation (CKD) to enhance the consistencies among teacher model training, KD, and student model finetuning.
arXiv Detail & Related papers (2021-12-10T16:11:23Z)
- Undistillable: Making A Nasty Teacher That CANNOT teach students [84.6111281091602]
This paper introduces and investigates a concept called Nasty Teacher: a specially trained teacher network that yields nearly the same performance as a normal one, but significantly degrades the performance of any student model that learns from it.
We propose a simple yet effective algorithm to build the nasty teacher, called self-undermining knowledge distillation.
arXiv Detail & Related papers (2021-05-16T08:41:30Z)
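For the DistiLLM entry above, the skew Kullback-Leibler loss mentioned in its summary can be sketched as a KL divergence computed against a mixture of the two distributions. The α-skew form below, KL(p || αp + (1-α)q), is a common formulation offered only as an illustration; the exact definition used in DistiLLM (and its skew reverse-KL counterpart) should be taken from that paper.

```python
import torch
import torch.nn.functional as F

def skew_kl(p_logits, q_logits, alpha=0.1, eps=1e-12):
    """Alpha-skew KL divergence: KL(p || alpha * p + (1 - alpha) * q).

    Illustrative only; DistiLLM's exact loss (and its reverse variant) may
    differ in how the mixture and the direction of the divergence are defined.
    """
    p = F.softmax(p_logits, dim=-1)
    q = F.softmax(q_logits, dim=-1)
    mix = alpha * p + (1.0 - alpha) * q
    kl = torch.sum(p * (torch.log(p + eps) - torch.log(mix + eps)), dim=-1)
    return kl.mean()

# Usage sketch with random logits for a batch of 4 examples over a 10-way output space.
teacher_logits = torch.randn(4, 10)
student_logits = torch.randn(4, 10)
loss = skew_kl(teacher_logits, student_logits, alpha=0.1)
```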