Why Can You Lay Off Heads? Investigating How BERT Heads Transfer
- URL: http://arxiv.org/abs/2106.07137v1
- Date: Mon, 14 Jun 2021 02:27:47 GMT
- Title: Why Can You Lay Off Heads? Investigating How BERT Heads Transfer
- Authors: Ting-Rui Chiang, Yun-Nung Chen
- Abstract summary: The main goal of distillation is to create a task-agnostic pre-trained model that can be fine-tuned on downstream tasks without fine-tuning its full-sized version.
Despite this progress, to what degree and for what reason a task-agnostic model can be created through distillation has not been well studied.
This work analyzes the acceptable performance reduction incurred by distillation, in order to guide future distillation procedures.
- Score: 37.9520341259181
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The huge size of the widely used BERT family models has motivated recent efforts
on model distillation. The main goal of distillation is to create a task-agnostic pre-trained
model that can be fine-tuned on downstream tasks without fine-tuning its full-sized version.
Despite the progress of distillation, to what degree and for what reason a task-agnostic model
can be created through distillation has not been well studied, and the mechanisms behind
transfer learning in these BERT models are not well investigated either. Therefore, this work
focuses on analyzing the acceptable performance reduction incurred by distillation, in order to
guide future distillation procedures. Specifically, we first inspect the prunability of the
Transformer heads in RoBERTa and ALBERT using the head importance estimation proposed by
Michel et al. (2019), and then check whether the important heads are consistent between the
pre-training task and downstream tasks. From these results, the acceptable reduction of
performance on the pre-training task when distilling a model can be derived, and we further
compare the behavior of the pruned model before and after fine-tuning. Our studies provide
guidance for future directions in BERT family model distillation.
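Since the analysis hinges on the gradient-based head importance proxy of Michel et al. (2019), a minimal sketch of how such scores are typically computed may help. It assumes a Hugging Face sequence-classification checkpoint and a dataloader yielding batches that include labels; both are illustrative choices, not details specified by the paper.
```python
# Sketch of the gradient-based head importance proxy from Michel et al. (2019).
# Each head gets a mask entry fixed at 1; its importance is the accumulated
# |d loss / d mask|. Model name and dataloader are illustrative assumptions.
import torch
from transformers import AutoModelForSequenceClassification

def head_importance(model, dataloader, device="cpu"):
    model.to(device).eval()
    cfg = model.config
    scores = torch.zeros(cfg.num_hidden_layers, cfg.num_attention_heads, device=device)
    for batch in dataloader:  # each batch: input_ids, attention_mask, labels
        batch = {k: v.to(device) for k, v in batch.items()}
        head_mask = torch.ones(cfg.num_hidden_layers, cfg.num_attention_heads,
                               device=device, requires_grad=True)
        loss = model(**batch, head_mask=head_mask).loss
        loss.backward()
        scores += head_mask.grad.abs().detach()
        model.zero_grad()
    return scores / scores.sum()  # Michel et al. normalize per layer; simplified here

model = AutoModelForSequenceClassification.from_pretrained("roberta-base")
# scores = head_importance(model, dataloader)
# Low-scoring heads are pruning candidates, e.g. model.prune_heads({0: [2, 5]}).
```
Ranking heads this way on both the pre-training objective and a downstream task is what allows the paper to check whether the same heads matter in both settings.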
Related papers
- Confidence Preservation Property in Knowledge Distillation Abstractions [2.9370710299422598]
Social media platforms prevent malicious activities by detecting harmful content in posts and comments.
They employ large-scale deep neural network language models for sentiment analysis and content understanding.
Some models, like BERT, are complex and have numerous parameters, which makes them expensive to operate and maintain.
Industry experts employ a knowledge distillation compression technique, in which a distilled model is trained to reproduce the classification behavior of the original model.
arXiv Detail & Related papers (2024-01-21T01:37:25Z)
- A Study on Knowledge Distillation from Weak Teacher for Scaling Up Pre-trained Language Models [104.64899255277443]
Distillation from Weak Teacher (DWT) is a method of transferring knowledge from a smaller, weaker teacher model to a larger student model to improve its performance.
This study examines three key factors to optimize DWT, distinct from those used in the vision domain or traditional knowledge distillation.
arXiv Detail & Related papers (2023-05-26T13:24:49Z)
- Generic-to-Specific Distillation of Masked Autoencoders [119.21281960831651]
We propose generic-to-specific distillation (G2SD) to tap the potential of small ViT models under the supervision of large models pre-trained by masked autoencoders.
With G2SD, the vanilla ViT-Small model achieves 98.7%, 98.1%, and 99.3% of its teacher's performance on image classification, object detection, and semantic segmentation, respectively.
arXiv Detail & Related papers (2023-02-28T17:13:14Z)
- HomoDistil: Homotopic Task-Agnostic Distillation of Pre-trained Transformers [49.79405257763856]
This paper focuses on task-agnostic distillation.
It produces a compact pre-trained model that can be easily fine-tuned on various tasks with small computational costs and memory footprints.
We propose Homotopic Distillation (HomoDistil), a novel task-agnostic distillation approach equipped with iterative pruning.
arXiv Detail & Related papers (2023-02-19T17:37:24Z)
- DETRDistill: A Universal Knowledge Distillation Framework for DETR-families [11.9748352746424]
Transformer-based detectors (DETRs) have attracted great attention due to their sparse training paradigm and the removal of post-processing operations.
Knowledge distillation (KD) can be employed to compress the huge model by constructing a universal teacher-student learning framework.
arXiv Detail & Related papers (2022-11-17T13:35:11Z)
- Pre-trained Summarization Distillation [121.14806854092672]
Recent work on distilling BERT for classification and regression tasks shows strong performance using direct knowledge distillation.
Alternatively, machine translation practitioners distill using pseudo-labeling, where a small model is trained on the translations of a larger model.
A third, simpler approach is to 'shrink and fine-tune' (SFT), which avoids any explicit distillation by copying parameters to a smaller student model and then fine-tuning it (a minimal sketch of this layer-copying step appears after this list).
arXiv Detail & Related papers (2020-10-24T23:15:43Z)
- TernaryBERT: Distillation-aware Ultra-low Bit BERT [53.06741585060951]
We propose TernaryBERT, which ternarizes the weights in a fine-tuned BERT model.
Experiments on the GLUE benchmark and SQuAD show that our proposed TernaryBERT outperforms the other BERT quantization methods.
arXiv Detail & Related papers (2020-09-27T10:17:28Z)
- Towards Non-task-specific Distillation of BERT via Sentence Representation Approximation [17.62309851473892]
We propose a sentence-representation-approximation oriented distillation framework that can distill pre-trained BERT into a simple LSTM-based model.
Our model is able to perform transfer learning via fine-tuning to adapt to any sentence-level downstream task.
The experimental results on multiple NLP tasks from the GLUE benchmark show that our approach outperforms other task-specific distillation methods.
arXiv Detail & Related papers (2020-04-07T03:03:00Z)
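As a companion to the shrink-and-fine-tune (SFT) entry above, here is a minimal sketch of the "shrink" step, assuming a BERT-style encoder from the Hugging Face transformers library; which layers to keep is an illustrative assumption, not a recipe from the cited paper.
```python
# Sketch of the "shrink" step in shrink-and-fine-tune (SFT): build a shallower
# student by copying a subset of teacher layers, then fine-tune it directly
# with no explicit distillation loss. The layer selection below is an assumption.
import copy
import torch.nn as nn
from transformers import BertConfig, BertModel

teacher = BertModel.from_pretrained("bert-base-uncased")  # 12 encoder layers
keep = [0, 2, 4, 6, 8, 10]                                # illustrative choice

student_config = BertConfig.from_pretrained("bert-base-uncased",
                                            num_hidden_layers=len(keep))
student = BertModel(student_config)

# Copy embeddings and pooler wholesale, encoder layers selectively.
student.embeddings.load_state_dict(teacher.embeddings.state_dict())
student.pooler.load_state_dict(teacher.pooler.state_dict())
student.encoder.layer = nn.ModuleList(
    [copy.deepcopy(teacher.encoder.layer[i]) for i in keep]
)
# The student is then fine-tuned on the downstream task like any other
# pre-trained checkpoint, which is what makes SFT the "simpler" alternative.
```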