BabyBear: Cheap inference triage for expensive language models
- URL: http://arxiv.org/abs/2205.11747v1
- Date: Tue, 24 May 2022 03:21:07 GMT
- Title: BabyBear: Cheap inference triage for expensive language models
- Authors: Leila Khalili, Yao You, John Bohannon
- Abstract summary: We introduce BabyBear, a framework for cascading models for natural language processing (NLP) tasks.
We find that for common NLP tasks a high proportion of the inference load can be accomplished with cheap, fast models that have learned by observing a deep learning model.
This allows us to reduce the compute cost of large-scale classification jobs by more than 50% while retaining overall accuracy.
- Score: 9.023847175654602
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Transformer language models provide superior accuracy over previous models
but they are computationally and environmentally expensive. Borrowing the
concept of model cascading from computer vision, we introduce BabyBear, a
framework for cascading models for natural language processing (NLP) tasks to
minimize cost. The core strategy is inference triage, exiting early when the
least expensive model in the cascade achieves a sufficiently high-confidence
prediction. We test BabyBear on several open source data sets related to
document classification and entity recognition. We find that for common NLP
tasks a high proportion of the inference load can be accomplished with cheap,
fast models that have learned by observing a deep learning model. This allows
us to reduce the compute cost of large-scale classification jobs by more than
50% while retaining overall accuracy. For named entity recognition, we save 33%
of the deep learning compute while maintaining an F1 score higher than 95% on
the CoNLL benchmark.
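To make the inference-triage idea concrete, here is a minimal Python sketch of a two-model cascade for text classification. It is a sketch of the general strategy, not the authors' implementation: `deep_predict` is a hypothetical wrapper around the expensive transformer, and the cheap triage model is assumed to be a TF-IDF + logistic-regression pipeline trained on the deep model's own predictions (the "learning by observing a deep learning model" step).
```python
# Minimal sketch of confidence-threshold inference triage (not the authors' code).
# Assumed pieces: `deep_predict` is a hypothetical wrapper around the expensive
# transformer classifier; the cheap "triage" model is a TF-IDF + logistic
# regression pipeline trained on the deep model's own predictions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline


def deep_predict(texts):
    """Hypothetical call into the expensive transformer model."""
    raise NotImplementedError


def train_triage_model(sample_texts):
    # "Learning by observing": label a sample of traffic with the deep model,
    # then fit the cheap model on those silver labels.
    silver_labels = deep_predict(sample_texts)
    cheap = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    cheap.fit(sample_texts, silver_labels)
    return cheap


def triage_predict(texts, cheap, threshold=0.9):
    """Answer with the cheap model when it is confident enough; otherwise escalate."""
    probs = cheap.predict_proba(texts)
    results, escalate = {}, []
    for i, p in enumerate(probs):
        if p.max() >= threshold:
            results[i] = cheap.classes_[p.argmax()]   # early exit: cheap prediction
        else:
            escalate.append(i)                        # defer to the deep model
    if escalate:
        deep_labels = deep_predict([texts[i] for i in escalate])
        results.update(zip(escalate, deep_labels))
    return [results[i] for i in range(len(texts))]
```
In this setup the confidence threshold would be tuned on held-out data so that as much traffic as possible exits early while overall accuracy stays at the chosen target.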
Related papers
- Model Cascading for Code: Reducing Inference Costs with Model Cascading for LLM Based Code Generation [20.445496441396028]
We propose letting each model generate and execute a set of test cases for its solutions, and using the test results as the cascading threshold.
We show that our model cascading strategy reduces computational costs while increasing accuracy compared to generating the output with a single model.
arXiv Detail & Related papers (2024-05-24T16:20:04Z) - Language models scale reliably with over-training and on downstream tasks [121.69867718185125]
- Language models scale reliably with over-training and on downstream tasks [121.69867718185125]
Scaling laws are useful guides for derisking expensive training runs.
However, there remain gaps between current studies and how language models are trained.
For instance, scaling laws mostly predict loss, but models are usually compared on downstream task performance.
arXiv Detail & Related papers (2024-03-13T13:54:00Z) - $C^3$: Confidence Calibration Model Cascade for Inference-Efficient
Cross-Lingual Natural Language Understanding [28.853593305486832]
Cross-lingual natural language understanding (NLU) is a critical task in natural language processing (NLP).
Recent advancements have seen multilingual pre-trained language models (mPLMs) significantly enhance the performance of these tasks.
Existing model cascade methods seek to enhance inference efficiency by greedily selecting, from a set of candidate models, the lightest one capable of processing the current input.
arXiv Detail & Related papers (2024-02-25T05:07:56Z) - MiniSUPERB: Lightweight Benchmark for Self-supervised Speech Models [90.99663022952498]
- MiniSUPERB: Lightweight Benchmark for Self-supervised Speech Models [90.99663022952498]
SUPERB was proposed to evaluate the generalizability of self-supervised learning (SSL) speech models across various tasks.
However, SUPERB incurs high computational costs due to its large datasets and diverse tasks.
We introduce MiniSUPERB, a lightweight benchmark that efficiently evaluates SSL speech models, achieving results comparable to SUPERB at significantly lower computational cost.
arXiv Detail & Related papers (2023-05-30T13:07:33Z) - Part-Based Models Improve Adversarial Robustness [57.699029966800644]
We show that combining human prior knowledge with end-to-end learning can improve the robustness of deep neural networks.
Our model combines a part segmentation model with a tiny classifier and is trained end-to-end to simultaneously segment objects into parts and classify them.
Our experiments indicate that these models also reduce texture bias and yield better robustness against common corruptions and spurious correlations.
arXiv Detail & Related papers (2022-09-15T15:41:47Z) - bert2BERT: Towards Reusable Pretrained Language Models [51.078081486422896]
We propose bert2BERT, which can effectively transfer the knowledge of an existing smaller pre-trained model to a large model.
bert2BERT saves about 45% and 47% of the computational cost of pre-training BERT_BASE and GPT_BASE, respectively, by reusing models of roughly half their size.
arXiv Detail & Related papers (2021-10-14T04:05:25Z) - Cold-start Active Learning through Self-supervised Language Modeling [15.551710499866239]
Active learning aims to reduce annotation costs by choosing the most critical examples to label.
With BERT, we develop a simple strategy based on the masked language modeling loss.
Compared to other baselines, our approach reaches higher accuracy within fewer sampling iterations and less time.
arXiv Detail & Related papers (2020-10-19T14:09:17Z) - The Right Tool for the Job: Matching Model and Instance Complexities [62.95183777679024]
- The Right Tool for the Job: Matching Model and Instance Complexities [62.95183777679024]
As NLP models become larger, executing a trained model requires significant computational resources, incurring monetary and environmental costs.
We propose a modification to contextual representation fine-tuning which, during inference, allows for an early (and fast) "exit" from neural network calculations for simple instances.
We test our proposed modification on five different datasets in two tasks: three text classification datasets and two natural language inference benchmarks.
arXiv Detail & Related papers (2020-04-16T04:28:08Z) - Parameter Space Factorization for Zero-Shot Learning across Tasks and
Languages [112.65994041398481]
We propose a Bayesian generative model for the space of neural parameters.
We infer the posteriors over such latent variables based on data from seen task-language combinations.
Our model yields results comparable to or better than state-of-the-art zero-shot cross-lingual transfer methods.
arXiv Detail & Related papers (2020-01-30T16:58:56Z)