SKDBERT: Compressing BERT via Stochastic Knowledge Distillation
- URL: http://arxiv.org/abs/2211.14466v2
- Date: Tue, 29 Nov 2022 04:12:02 GMT
- Title: SKDBERT: Compressing BERT via Stochastic Knowledge Distillation
- Authors: Zixiang Ding, Guoqing Jiang, Shuai Zhang, Lin Guo, Wei Lin
- Abstract summary: We propose Stochastic Knowledge Distillation (SKD) to obtain a compact BERT-style language model dubbed SKDBERT.
In each iteration, SKD samples a teacher model from a pre-defined teacher ensemble, which consists of multiple teacher models with multi-level capacities, to transfer knowledge into the student model in a one-to-one manner.
Experimental results on the GLUE benchmark show that SKDBERT reduces the size of a BERT$_{\rm BASE}$ model by 40% while retaining 99.5% of its language-understanding performance and being 100% faster.
- Score: 17.589678394344475
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose Stochastic Knowledge Distillation (SKD) to obtain
a compact BERT-style language model dubbed SKDBERT. In each iteration, SKD
samples a teacher model from a pre-defined teacher ensemble, which consists of
multiple teacher models with multi-level capacities, to transfer knowledge into
the student model in a one-to-one manner. The sampling distribution plays an
important role in SKD. We heuristically present three types of sampling
distributions to assign appropriate probabilities to multi-level teacher models.
SKD has two advantages: 1) it can preserve the diversity of multi-level teacher
models by stochastically sampling a single teacher model in each iteration, and
2) it can also improve the efficacy of knowledge distillation via multi-level
teacher models when a large capacity gap exists between the teacher model and
the student model. Experimental results on the GLUE benchmark show that SKDBERT
reduces the size of a BERT$_{\rm BASE}$ model by 40% while retaining 99.5% of
its language-understanding performance and being 100% faster.
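
For intuition, below is a minimal sketch of the per-iteration SKD procedure described in the abstract: a single teacher is drawn from a categorical sampling distribution over a multi-level teacher ensemble, and its soft labels are distilled into the student one-to-one. The PyTorch-style loss weighting, temperature, and example sampling distributions are illustrative assumptions rather than the paper's exact configuration.

```python
# Minimal sketch of Stochastic Knowledge Distillation (SKD), assuming a
# PyTorch classification setting. The teacher/student models, loss weights,
# temperature, and sampling distributions below are illustrative only.
import torch
import torch.nn.functional as F

def skd_step(student, teachers, probs, batch, optimizer,
             temperature=4.0, alpha=0.7):
    """One SKD iteration: sample a single teacher, then distill one-to-one."""
    inputs, labels = batch

    # Stochastically pick one teacher from the multi-level ensemble.
    idx = torch.multinomial(probs, num_samples=1).item()
    teacher = teachers[idx]

    with torch.no_grad():
        t_logits = teacher(inputs)
    s_logits = student(inputs)

    # Soft-label distillation loss (KL divergence at temperature T).
    kd_loss = F.kl_div(
        F.log_softmax(s_logits / temperature, dim=-1),
        F.softmax(t_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # Hard-label cross-entropy on the task labels.
    ce_loss = F.cross_entropy(s_logits, labels)

    loss = alpha * kd_loss + (1.0 - alpha) * ce_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example sampling distributions over teachers ordered from small to large
# capacity: uniform, or weights that favour larger-capacity teachers.
num_teachers = 5
uniform = torch.full((num_teachers,), 1.0 / num_teachers)
capacity_weighted = F.softmax(torch.arange(num_teachers, dtype=torch.float), dim=0)
```

Because only one teacher forward pass runs per iteration, the per-step cost stays close to single-teacher distillation, while over the course of training the student is still exposed to the diversity of the full multi-level ensemble.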
Related papers
- Capturing Nuanced Preferences: Preference-Aligned Distillation for Small Language Models [22.613040767122225]
We propose a Preference-Aligned Distillation (PAD) framework, which models the teacher's preference knowledge as a probability distribution over all potential preferences.
Experiments on four mainstream alignment benchmarks demonstrate that PAD consistently and significantly outperforms existing approaches.
arXiv Detail & Related papers (2025-02-20T05:18:23Z)
- Dual-Teacher Ensemble Models with Double-Copy-Paste for 3D Semi-Supervised Medical Image Segmentation [31.460549289419923]
Semi-supervised learning (SSL) techniques address the high labeling costs in 3D medical image segmentation.
We introduce the Staged Selective Ensemble (SSE) module, which selects different ensemble methods based on the characteristics of the samples.
Experimental results demonstrate the effectiveness of our proposed method in 3D medical image segmentation tasks.
arXiv Detail & Related papers (2024-10-15T11:23:15Z)
- Speculative Knowledge Distillation: Bridging the Teacher-Student Gap Through Interleaved Sampling [81.00825302340984]
We introduce Speculative Knowledge Distillation (SKD) to generate high-quality training data on-the-fly.
In SKD, the student proposes tokens, and the teacher replaces poorly ranked ones based on its own distribution.
We evaluate SKD on various text generation tasks, including translation, summarization, math, and instruction following.
arXiv Detail & Related papers (2024-10-15T06:51:25Z)
- Enhancing Knowledge Distillation of Large Language Models through Efficient Multi-Modal Distribution Alignment [10.104085497265004]
We propose Ranking Loss based Knowledge Distillation (RLKD), which encourages consistency of peak predictions between the teacher and student models.
Our method enables the student model to better learn the multi-modal distributions of the teacher model, leading to a significant performance improvement in various downstream tasks.
arXiv Detail & Related papers (2024-09-19T08:06:42Z)
- Exploring and Enhancing the Transfer of Distribution in Knowledge Distillation for Autoregressive Language Models [62.5501109475725]
Knowledge distillation (KD) is a technique that compresses large teacher models by training smaller student models to mimic them.
This paper introduces Online Knowledge Distillation (OKD), where the teacher network integrates small online modules to concurrently train with the student model.
OKD achieves or exceeds the performance of leading methods in various model architectures and sizes, reducing training time by up to fourfold.
arXiv Detail & Related papers (2024-09-19T07:05:26Z)
- Multi Teacher Privileged Knowledge Distillation for Multimodal Expression Recognition [58.41784639847413]
Human emotion is a complex phenomenon conveyed and perceived through facial expressions, vocal tones, body language, and physiological signals.
In this paper, a multi-teacher PKD (MT-PKDOT) method with self-distillation is introduced to align diverse teacher representations before distilling them to the student.
Results indicate that our proposed method can outperform SOTA PKD methods.
arXiv Detail & Related papers (2024-08-16T22:11:01Z)
- Lightweight Self-Knowledge Distillation with Multi-source Information Fusion [3.107478665474057]
Knowledge Distillation (KD) is a powerful technique for transferring knowledge between neural network models.
We propose a lightweight SKD framework that utilizes multi-source information to construct a more informative teacher.
We validate the performance of the proposed DRG, DSR, and their combination through comprehensive experiments on various datasets and models.
arXiv Detail & Related papers (2023-05-16T05:46:31Z)
- One Teacher is Enough? Pre-trained Language Model Distillation from Multiple Teachers [54.146208195806636]
We propose a multi-teacher knowledge distillation framework named MT-BERT for pre-trained language model compression.
We show that MT-BERT can train a high-quality student model from multiple teacher PLMs.
Experiments on three benchmark datasets validate the effectiveness of MT-BERT in compressing PLMs.
arXiv Detail & Related papers (2021-06-02T08:42:33Z)
- Reinforced Multi-Teacher Selection for Knowledge Distillation [54.72886763796232]
Knowledge distillation is a popular method for model compression.
Current methods assign each teacher model a fixed weight for the whole distillation process, and most existing methods allocate an equal weight to every teacher model.
In this paper, we observe that, due to the complexity of training examples and the differences in student model capability, learning differentially from teacher models can lead to better performance of the distilled student models.
arXiv Detail & Related papers (2020-12-11T08:56:39Z)
- Structure-Level Knowledge Distillation For Multilingual Sequence Labeling [73.40368222437912]
We propose to reduce the gap between monolingual models and the unified multilingual model by distilling the structural knowledge of several monolingual models into the unified multilingual model (student).
Our experiments on 4 multilingual tasks with 25 datasets show that our approaches outperform several strong baselines and have stronger zero-shot generalizability than both the baseline model and teacher models.
arXiv Detail & Related papers (2020-04-08T07:14:01Z)