E-LANG: Energy-Based Joint Inferencing of Super and Swift Language
Models
- URL: http://arxiv.org/abs/2203.00748v1
- Date: Tue, 1 Mar 2022 21:21:27 GMT
- Title: E-LANG: Energy-Based Joint Inferencing of Super and Swift Language
Models
- Authors: Mohammad Akbari, Amin Banitalebi-Dehkordi, Yong Zhang
- Abstract summary: This paper proposes an effective dynamic inference approach, called E-LANG, which distributes the inference between large accurate Super-models and light-weight Swift models.
E-LANG is easily adoptable and architecture agnostic.
Unlike existing methods that are only applicable to encoder-only backbones and classification tasks, our method also works for encoder-decoder structures and sequence-to-sequence tasks such as translation.
- Score: 9.36591003178585
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Building huge and highly capable language models has been a trend in the past
years. Despite their great performance, they incur high computational cost. A
common solution is to apply model compression or choose light-weight
architectures, which often need a separate fixed-size model for each desirable
computational budget, and may lose performance in case of heavy compression.
This paper proposes an effective dynamic inference approach, called E-LANG,
which distributes the inference between large accurate Super-models and
light-weight Swift models. To this end, a decision making module routes the
inputs to Super or Swift models based on the energy characteristics of the
representations in the latent space. This method is easily adoptable and
architecture agnostic. As such, it can be applied to black-box pre-trained
models without a need for architectural manipulations, reassembling of modules,
or re-training. Unlike existing methods that are only applicable to
encoder-only backbones and classification tasks, our method also works for
encoder-decoder structures and sequence-to-sequence tasks such as translation.
The E-LANG performance is verified through a set of experiments with T5 and
BERT backbones on GLUE, SuperGLUE, and WMT. In particular, we outperform T5-11B
with an average computations speed-up of 3.3$\times$ on GLUE and 2.9$\times$ on
SuperGLUE. We also achieve BERT-based SOTA on GLUE with 3.2$\times$ less
computations. Code and demo are available in the supplementary materials.
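As a concrete illustration of the routing idea described in the abstract, below is a minimal sketch, not the authors' released code: it assumes the routing signal is a free-energy score computed from the Swift model's logits, $E(x) = -T \log \sum_i e^{z_i/T}$, and that a single scalar threshold decides when an input is escalated to the Super model. The names `free_energy` and `route_and_predict`, the threshold value, and the use of Swift logits as the latent representation are all illustrative assumptions.
```python
# Hedged sketch of energy-based Super/Swift routing (illustrative, not the paper's code).
import torch

def free_energy(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    # Free-energy score per input: E(x) = -T * logsumexp(logits / T).
    # Lower energy roughly corresponds to a more confident Swift prediction.
    return -temperature * torch.logsumexp(logits / temperature, dim=-1)

@torch.no_grad()
def route_and_predict(x, swift_model, super_model, threshold: float = -8.0):
    # Cheap pass through the Swift model for every input.
    swift_logits = swift_model(x)                # (batch, num_classes), assumed shape
    energy = free_energy(swift_logits)           # (batch,)
    use_super = energy > threshold               # high energy -> uncertain -> escalate
    preds = swift_logits.argmax(dim=-1)
    if use_super.any():
        # Expensive Super-model pass only for the routed subset.
        super_logits = super_model(x[use_super])
        preds[use_super] = super_logits.argmax(dim=-1)
    return preds, use_super
```
In this sketch the accuracy/compute trade-off is controlled entirely by the threshold: a lower threshold routes more inputs to the Super model, while a higher one keeps more of them on the Swift model.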
Related papers
- XMoE: Sparse Models with Fine-grained and Adaptive Expert Selection [30.687511115573038]
XMoE is a novel MoE designed to enhance both the efficacy and efficiency of sparse MoE models.
XMoE can decrease the computation load at MoE layers by over 50% without sacrificing performance.
arXiv Detail & Related papers (2024-02-27T08:18:02Z) - PanGu-$\pi$: Enhancing Language Model Architectures via Nonlinearity
Compensation [97.78045712375047]
We present a new efficient model architecture for large language models (LLMs)
We show that PanGu-$\pi$-7B can achieve performance comparable to that of benchmark models with about a 10% inference speed-up.
In addition, we have deployed PanGu-$pi$-7B in the high-value domains of finance and law, developing an LLM named YunShan for practical application.
arXiv Detail & Related papers (2023-12-27T11:49:24Z) - Sorted LLaMA: Unlocking the Potential of Intermediate Layers of Large
Language Models for Dynamic Inference [32.62084449979531]
We extend SortedNet to generative NLP tasks by replacing Standard Fine-Tuning (SFT) with Sorted Fine-Tuning (SoFT)
Our approach boosts model efficiency, eliminating the need for multiple models for various scenarios during inference.
Our results show the superior performance of sub-models in comparison to Standard Fine-Tuning and SFT+ICT (Early-Exit)
arXiv Detail & Related papers (2023-09-16T11:58:34Z) - NAIL: Lexical Retrieval Indices with Efficient Non-Autoregressive
Decoders [9.400555345874988]
We present a method of capturing up to 86% of the gains of a Transformer cross-attention model with a lexicalized scoring function.
We introduce NAIL as a model architecture that is compatible with recent encoder-decoder and decoder-only large language models.
arXiv Detail & Related papers (2023-05-23T20:09:52Z) - Cheaply Evaluating Inference Efficiency Metrics for Autoregressive
Transformer APIs [66.30706841821123]
Large language models (LLMs) power many state-of-the-art systems in natural language processing.
LLMs are extremely computationally expensive, even at inference time.
We propose a new metric for comparing inference efficiency across models.
arXiv Detail & Related papers (2023-05-03T21:51:42Z) - eP-ALM: Efficient Perceptual Augmentation of Language Models [70.47962271121389]
We propose to direct effort toward efficient adaptation of existing models, augmenting Language Models with perception.
Existing approaches for adapting pretrained models for vision-language tasks still rely on several key components that hinder their efficiency.
We show that by freezing more than 99% of total parameters, training only one linear projection layer, and prepending only one trainable token, our approach (dubbed eP-ALM) significantly outperforms other baselines on VQA and Captioning.
arXiv Detail & Related papers (2023-03-20T19:20:34Z) - GLaM: Efficient Scaling of Language Models with Mixture-of-Experts [84.33607245023049]
We propose and develop a family of language models named GLaM (Generalist Language Model)
GLaM uses a sparsely activated mixture-of-experts architecture to scale the model capacity while also incurring substantially less training cost compared to dense variants.
It consumes only 1/3 of the energy used to train GPT-3 and requires half of the flops for inference, while still achieving better overall zero-shot and one-shot performance across 29 NLP tasks.
arXiv Detail & Related papers (2021-12-13T18:58:19Z) - DSEE: Dually Sparsity-embedded Efficient Tuning of Pre-trained Language
Models [152.29364079385635]
As pre-trained models grow bigger, the fine-tuning process can be time-consuming and computationally expensive.
We propose a framework for resource- and parameter-efficient fine-tuning by leveraging the sparsity prior in both weight updates and the final model weights.
Our proposed framework, dubbed Dually Sparsity-Embedded Efficient Tuning (DSEE), aims to achieve two key objectives: (i) parameter efficient fine-tuning and (ii) resource-efficient inference.
arXiv Detail & Related papers (2021-10-30T03:29:47Z) - Tiny Neural Models for Seq2Seq [0.0]
We propose a projection based encoder-decoder model referred to as pQRNN-MAtt.
The resulting quantized models are less than 3.5MB in size and are well suited for on-device latency critical applications.
We show that on MTOP, a challenging multilingual semantic parsing dataset, average model performance surpasses that of an LSTM-based seq2seq model that uses pre-trained embeddings, despite being 85x smaller.
arXiv Detail & Related papers (2021-08-07T00:39:42Z) - Exploring Sparse Expert Models and Beyond [51.90860155810848]
Mixture-of-Experts (MoE) models can achieve promising results with an outrageously large number of parameters but constant computation cost.
We propose a simple method called expert prototyping that splits experts into different prototypes and applies $k$ top-$1$ routing (a minimal sketch of this routing appears after this list).
This strategy improves model quality while maintaining constant computational cost, and our further exploration of extremely large-scale models shows that it is more effective for training larger models.
arXiv Detail & Related papers (2021-05-31T16:12:44Z)
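The expert-prototyping routing mentioned above can be sketched as follows; this is an illustrative assumption-laden sketch, not the authors' implementation. The class name `ExpertPrototypingLayer`, the use of linear layers as experts, the softmax gates, and the hyperparameter names `experts_per_group` and `num_groups` are all assumptions; only the split of experts into groups with top-1 routing per group comes from the summary above.
```python
# Hedged sketch of expert prototyping with k top-1 routing (illustrative only).
import torch
import torch.nn as nn

class ExpertPrototypingLayer(nn.Module):
    # k groups ("prototypes") of experts; each group routes every token to
    # exactly one expert (top-1), and the selected outputs are summed.
    def __init__(self, d_model: int, experts_per_group: int, num_groups: int):
        super().__init__()
        self.groups = nn.ModuleList(
            nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(experts_per_group))
            for _ in range(num_groups)
        )
        self.routers = nn.ModuleList(
            nn.Linear(d_model, experts_per_group) for _ in range(num_groups)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model)
        out = torch.zeros_like(x)
        for experts, router in zip(self.groups, self.routers):
            gate = router(x).softmax(dim=-1)     # (num_tokens, experts_per_group)
            top1 = gate.argmax(dim=-1)           # chosen expert index per token
            for e, expert in enumerate(experts):
                mask = top1 == e
                if mask.any():
                    # Each token activates exactly one expert per group,
                    # so compute per token stays constant as experts grow.
                    out[mask] = out[mask] + gate[mask, e:e + 1] * expert(x[mask])
        return out
```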