The Cascade Transformer: an Application for Efficient Answer Sentence Selection
- URL: http://arxiv.org/abs/2005.02534v2
- Date: Thu, 7 May 2020 15:07:38 GMT
- Title: The Cascade Transformer: an Application for Efficient Answer Sentence Selection
- Authors: Luca Soldaini and Alessandro Moschitti
- Abstract summary: We introduce the Cascade Transformer, a technique to adapt transformer-based models into a cascade of rankers.
When compared to a state-of-the-art transformer model, our approach reduces computation by 37% with almost no impact on accuracy.
- Score: 116.09532365093659
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large transformer-based language models have been shown to be very effective
in many classification tasks. However, their computational complexity prevents
their use in applications requiring the classification of a large set of
candidates. While previous works have investigated approaches to reduce model
size, relatively little attention has been paid to techniques to improve batch
throughput during inference. In this paper, we introduce the Cascade
Transformer, a simple yet effective technique to adapt transformer-based models
into a cascade of rankers. Each ranker is used to prune a subset of candidates
in a batch, thus dramatically increasing throughput at inference time. Partial
encodings from the transformer model are shared among rerankers, providing
further speed-up. When compared to a state-of-the-art transformer model, our
approach reduces computation by 37% with almost no impact on accuracy, as
measured on two English Question Answering datasets.
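To make the mechanism concrete, below is a minimal, hypothetical PyTorch sketch of the cascading idea: a single encoder stack whose intermediate layers feed lightweight scoring heads, each of which drops the lowest-scoring candidates in the batch before the remaining, more expensive layers run. The class and parameter names (CascadeRanker, cascade_points, keep_ratio) are illustrative and not from the paper, which builds its cascade on a pre-trained transformer rather than the toy encoder used here.

```python
import torch
import torch.nn as nn

class CascadeRanker(nn.Module):
    """Toy cascade of rankers that share the partial encodings of one transformer stack."""

    def __init__(self, hidden=256, n_heads=4, n_layers=12,
                 cascade_points=(4, 8, 12), keep_ratio=0.7):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model=hidden, nhead=n_heads, batch_first=True)
             for _ in range(n_layers)]
        )
        # One lightweight scoring head per cascade point, applied to the first token.
        self.heads = nn.ModuleDict({str(p): nn.Linear(hidden, 1) for p in cascade_points})
        self.cascade_points = set(cascade_points)
        self.keep_ratio = keep_ratio

    def forward(self, x):
        # x: (num_candidates, seq_len, hidden) -- one question paired with many candidates.
        keep = torch.arange(x.size(0))   # indices of candidates still in the batch
        scores = None
        for i, layer in enumerate(self.layers, start=1):
            x = layer(x)                 # partial encodings are shared by all rankers
            if i in self.cascade_points:
                scores = self.heads[str(i)](x[:, 0]).squeeze(-1)
                k = max(1, int(self.keep_ratio * x.size(0)))
                top = scores.topk(k).indices
                x, keep, scores = x[top], keep[top], scores[top]  # prune low scorers
        return keep, scores              # surviving candidate indices and their final scores
```

Because each cascade point discards a fraction of the batch, the deeper layers process fewer candidate sequences, which is where the throughput gain described in the abstract comes from; the actual pruning rates and training procedure are those described in the paper.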
Related papers
- How Redundant Is the Transformer Stack in Speech Representation Models? [1.3873323883842132]
Self-supervised speech representation models have demonstrated remarkable performance across various tasks such as speech recognition, speaker identification, and emotion detection.
Recent studies on transformer models revealed a high redundancy between layers and the potential for significant pruning.
We demonstrate the effectiveness of pruning transformer-based speech representation models without the need for post-training.
arXiv Detail & Related papers (2024-09-10T11:00:24Z)
- Masked Mixers for Language Generation and Retrieval [0.0]
We observe poor input representation accuracy in transformers, but find more accurate representation in masked mixers.
Applied to TinyStories, the masked mixer learns causal language tasks more efficiently than early transformer implementations.
We introduce an efficient training approach for retrieval models based on existing generative model embeddings.
arXiv Detail & Related papers (2024-09-02T22:17:18Z)
- Paragraph-based Transformer Pre-training for Multi-Sentence Inference [99.59693674455582]
We show that popular pre-trained transformers perform poorly when fine-tuned on multi-candidate inference tasks.
We then propose a new pre-training objective that models the paragraph-level semantics across multiple input sentences.
arXiv Detail & Related papers (2022-05-02T21:41:14Z)
- A Fast Post-Training Pruning Framework for Transformers [74.59556951906468]
Pruning is an effective way to reduce the huge inference cost of large Transformer models.
Prior work on model pruning requires retraining the model.
We propose a fast post-training pruning framework for Transformers that does not require any retraining.
arXiv Detail & Related papers (2022-03-29T07:41:11Z)
- Sparse is Enough in Scaling Transformers [12.561317511514469]
Large Transformer models yield impressive results on many tasks, but are expensive to train, or even fine-tune, and so slow at decoding that their use and study become out of reach.
We propose Scaling Transformers, a family of next generation Transformer models that use sparse layers to scale efficiently and perform unbatched decoding much faster than the standard Transformer.
arXiv Detail & Related papers (2021-11-24T19:53:46Z)
- DoT: An efficient Double Transformer for NLP tasks with tables [3.0079490585515343]
DoT is a double transformer model that decomposes the problem into two sub-tasks.
We show that, for a small drop in accuracy, DoT improves training and inference time by at least 50%.
arXiv Detail & Related papers (2021-06-01T13:33:53Z)
- Finetuning Pretrained Transformers into RNNs [81.72974646901136]
Transformers have outperformed recurrent neural networks (RNNs) in natural language generation.
A linear-complexity recurrent variant has proven well suited for autoregressive generation.
This work aims to convert a pretrained transformer into its efficient recurrent counterpart.
arXiv Detail & Related papers (2021-03-24T10:50:43Z)
- Shortformer: Better Language Modeling using Shorter Inputs [62.51758040848735]
We show that initially training the model on short subsequences, before moving on to longer ones, reduces overall training time.
We then show how to improve the efficiency of recurrence methods in transformers.
arXiv Detail & Related papers (2020-12-31T18:52:59Z)
- Length-Adaptive Transformer: Train Once with Length Drop, Use Anytime with Search [84.94597821711808]
We extend PoWER-BERT (Goyal et al., 2020) and propose the Length-Adaptive Transformer, which can be used for various inference scenarios after one-shot training.
We conduct a multi-objective evolutionary search to find a length configuration that maximizes the accuracy and minimizes the efficiency metric under any given computational budget.
We empirically verify the utility of the proposed approach by demonstrating the superior accuracy-efficiency trade-off under various setups.
arXiv Detail & Related papers (2020-10-14T12:28:08Z)