Layer Reduction: Accelerating Conformer-Based Self-Supervised Model via
Layer Consistency
- URL: http://arxiv.org/abs/2105.00812v1
- Date: Thu, 8 Apr 2021 08:21:59 GMT
- Title: Layer Reduction: Accelerating Conformer-Based Self-Supervised Model via
Layer Consistency
- Authors: Jinchuan Tian, Rongzhi Gu, Helin Wang, Yuexian Zou
- Abstract summary: Transformer-based self-supervised models are trained as feature extractors and have empowered many downstream speech tasks to achieve state-of-the-art performance.
We experimentally achieve 7.8X parameter reduction, 41.9% training speedup and 37.7% inference speedup while maintaining comparable performance with conventional BERT-like self-supervised methods.
- Score: 31.572652956170252
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Transformer-based self-supervised models are trained as feature extractors
and have empowered many downstream speech tasks to achieve state-of-the-art
performance. However, both the training and inference of these models
may incur a prohibitively high computational cost and a large parameter budget.
Although Parameter Sharing Strategy (PSS) proposed in ALBERT paves the way for
parameter reduction, the computation required remains the same. Interestingly,
we found in experiments that the distributions of feature embeddings from different
Transformer layers are similar when PSS is integrated, a property termed
Layer Consistency (LC) in this paper. Given this similarity of feature
distributions, we assume that feature embeddings from different layers have
similar representational power. In this work, Layer Consistency enables us to
adopt Transformer-based models in a more efficient manner: the number of
Conformer layers in each training iteration can be uniformly sampled, and
Shallow Layer Inference (SLI) can be applied to reduce the number of layers
at the inference stage. In experiments, our models are trained on the LibriSpeech
dataset and then evaluated on both phone classification and Speech Recognition
tasks. We experimentally achieve 7.8X parameter reduction, 41.9% training
speedup and 37.7% inference speedup while maintaining comparable performance
with conventional BERT-like self-supervised methods.
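A minimal sketch of the two ingredients described in the abstract: ALBERT-style parameter sharing, where a single block is reused for every layer; uniform sampling of the layer count in each training iteration; and Shallow Layer Inference with a reduced layer count at test time. This is not the authors' implementation: the dimensions and sampling range are illustrative assumptions, and a standard Transformer encoder layer stands in for a Conformer block.

```python
# Illustrative sketch only: a shared block applied a variable number of times.
import random
import torch
import torch.nn as nn

class SharedLayerEncoder(nn.Module):
    def __init__(self, d_model=256, n_heads=4, max_layers=12):
        super().__init__()
        self.max_layers = max_layers
        # One set of parameters reused for every "layer" (Parameter Sharing Strategy).
        # A Conformer block would be used in the paper; a Transformer layer stands in here.
        self.shared_block = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)

    def forward(self, x, num_layers=None):
        # Training: sample the depth uniformly so every shallow stack is trained.
        # Inference: the caller can request a shallow depth (Shallow Layer Inference).
        if num_layers is None:
            num_layers = (random.randint(1, self.max_layers)
                          if self.training else self.max_layers)
        for _ in range(num_layers):
            x = self.shared_block(x)
        return x

if __name__ == "__main__":
    model = SharedLayerEncoder()
    feats = torch.randn(2, 50, 256)        # (batch, frames, feature dim)
    model.train()
    out = model(feats)                     # depth sampled uniformly this iteration
    model.eval()
    shallow = model(feats, num_layers=3)   # Shallow Layer Inference
    print(out.shape, shallow.shape)
```

Because every depth up to the maximum is exercised during training, a shallow stack at inference has already been optimized directly, which is the property the paper exploits for its reported training and inference speedups.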
Related papers
- Representation Similarity: A Better Guidance of DNN Layer Sharing for Edge Computing without Training [3.792729116385123]
We propose a new model merging scheme by sharing representations at the edge, guided by representation similarity S.
We show that S is more strongly correlated with the merged model's accuracy than other metrics, with a Pearson correlation coefficient |r| > 0.94 (a toy similarity check of this kind is sketched after this list).
arXiv Detail & Related papers (2024-10-15T03:35:54Z) - LayerShuffle: Enhancing Robustness in Vision Transformers by Randomizing Layer Execution Order [10.362659730151591]
We show that vision transformers can adapt to arbitrary layer execution orders at test time.
We also find that our trained models can be randomly merged with each other resulting in functional "Frankenstein" models.
arXiv Detail & Related papers (2024-07-05T13:54:15Z) - Learning on Transformers is Provable Low-Rank and Sparse: A One-layer Analysis [63.66763657191476]
We show that efficient numerical training and inference algorithms such as low-rank computation perform impressively for learning Transformer-based adaptation.
We analyze how magnitude-based pruning affects generalization while improving adaptation.
We conclude that proper magnitude-based pruning has only a slight effect on testing performance.
arXiv Detail & Related papers (2024-06-24T23:00:58Z) - On Layer-wise Representation Similarity: Application for Multi-Exit Models with a Single Classifier [20.17288970927518]
We study the similarity of representations between the hidden layers of individual transformers.
We propose an aligned training approach to enhance the similarity between internal representations.
arXiv Detail & Related papers (2024-06-20T16:41:09Z) - Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching [56.286064975443026]
We make an interesting and somewhat surprising observation: the computation of a large proportion of layers in the diffusion transformer can, through a caching mechanism, be readily removed even without updating the model parameters.
We introduce a novel scheme, named Learning-to-Cache (L2C), that learns to conduct caching in a dynamic manner for diffusion transformers.
Experimental results show that L2C largely outperforms samplers such as DDIM and DPM-Solver, alongside prior cache-based methods, at the same inference speed.
arXiv Detail & Related papers (2024-06-03T18:49:57Z) - Accelerating Inference in Large Language Models with a Unified Layer Skipping Strategy [67.45518210171024]
Dynamic computation methods have shown notable acceleration for Large Language Models (LLMs) by skipping several layers of computations.
We propose a Unified Layer Skipping strategy, which selects the number of layers to skip computation based solely on the target speedup ratio.
Experimental results on two common tasks, i.e., machine translation and text summarization, indicate that given a target speedup ratio, the Unified Layer Skipping strategy significantly enhances both the inference performance and the actual model throughput.
arXiv Detail & Related papers (2024-04-10T12:12:07Z) - Dynamic Layer Tying for Parameter-Efficient Transformers [65.268245109828]
We employ Reinforcement Learning to select layers during training and tie them together.
This facilitates weight sharing, reduces the number of trainable parameters, and also serves as an effective regularization technique.
In particular, the memory consumption during training is up to one order of magnitude less than the conventional training method.
arXiv Detail & Related papers (2024-01-23T14:53:20Z) - Layer Pruning on Demand with Intermediate CTC [50.509073206630994]
We present a training and pruning method for ASR based on the connectionist temporal classification (CTC).
We show that a Transformer-CTC model can be pruned in various depth on demand, improving real-time factor from 0.005 to 0.002 on GPU.
arXiv Detail & Related papers (2021-06-17T02:40:18Z) - IOT: Instance-wise Layer Reordering for Transformer Structures [173.39918590438245]
We break the assumption of the fixed layer order in the Transformer and introduce instance-wise layer reordering into the model structure.
Our method can also be applied to other architectures beyond Transformer.
arXiv Detail & Related papers (2021-03-05T03:44:42Z) - On the Effect of Dropping Layers of Pre-trained Transformer Models [35.25025837133909]
We explore strategies to drop layers in pre-trained models, and observe the effect of pruning on downstream GLUE tasks.
We were able to prune BERT, RoBERTa and XLNet models up to 40%, while maintaining up to 98% of their original performance.
Our experiments yield interesting observations such as: (i) the lower layers are most critical to maintaining downstream task performance, (ii) some tasks, such as paraphrase detection and sentence similarity, are more robust to the dropping of layers, and (iii) models trained using a different objective function exhibit different learning patterns with respect to layer dropping.
arXiv Detail & Related papers (2020-04-08T07:09:59Z)
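The Layer Consistency observation in the abstract, and the representation-similarity score S used in the first related paper above, both come down to measuring how alike the feature embeddings of different layers are. Below is a toy sketch of one such check, assuming a simple Pearson correlation over flattened layer outputs; the function name and the example tensors are illustrative and are not taken from either paper.

```python
# Toy sketch: Pearson correlation between two layers' feature embeddings.
import torch

def layer_pearson(feat_a: torch.Tensor, feat_b: torch.Tensor) -> float:
    """Pearson correlation between two layers' outputs of identical shape."""
    a = feat_a.flatten().float()
    b = feat_b.flatten().float()
    a = a - a.mean()               # center both signals
    b = b - b.mean()
    r = (a * b).sum() / (a.norm() * b.norm() + 1e-8)
    return r.item()

if __name__ == "__main__":
    # Fake embeddings standing in for two layers of a shared-parameter model.
    layer3 = torch.randn(2, 50, 256)
    layer6 = layer3 + 0.1 * torch.randn_like(layer3)  # nearly consistent layers
    print(f"|r| = {abs(layer_pearson(layer3, layer6)):.3f}")
```

In practice such a score would be computed over real activations from a trained model; a value of |r| close to 1 indicates that the layers' embeddings could stand in for one another, which is the premise behind Shallow Layer Inference.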