The EOS Decision and Length Extrapolation
- URL: http://arxiv.org/abs/2010.07174v1
- Date: Wed, 14 Oct 2020 15:46:17 GMT
- Title: The EOS Decision and Length Extrapolation
- Authors: Benjamin Newman, John Hewitt, Percy Liang, Christopher D. Manning
- Abstract summary: Extrapolation to unseen sequence lengths is a challenge for neural generative models of language.
We study an oracle setting to compare the length-extrapolative behavior of networks trained to predict EOS (+EOS) with networks not trained to (-EOS).
We find that -EOS substantially outperforms +EOS, for example extrapolating well to lengths 10 times longer than those seen at training time in a bracket closing task.
- Score: 103.7271774593922
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Extrapolation to unseen sequence lengths is a challenge for neural generative
models of language. In this work, we characterize the effect on length
extrapolation of a modeling decision often overlooked: predicting the end of
the generative process through the use of a special end-of-sequence (EOS)
vocabulary item. We study an oracle setting - forcing models to generate to the
correct sequence length at test time - to compare the length-extrapolative
behavior of networks trained to predict EOS (+EOS) with networks not trained to
(-EOS). We find that -EOS substantially outperforms +EOS, for example
extrapolating well to lengths 10 times longer than those seen at training time
in a bracket closing task, as well as achieving a 40% improvement over +EOS in
the difficult SCAN dataset length generalization task. By comparing the hidden
states and dynamics of -EOS and +EOS models, we observe that +EOS models fail
to generalize because they (1) unnecessarily stratify their hidden states by
their linear position in a sequence (structures we call length manifolds) or
(2) get stuck in clusters (which we refer to as length attractors) once the EOS
token is the highest-probability prediction.
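As a concrete illustration of the oracle setting described above (a minimal sketch under assumed interfaces, not the authors' released code): decoding is forced to run for exactly the gold sequence length, so +EOS and -EOS models are compared on what they generate rather than on when they stop. The toy vocabulary and the `next_token_logits` stub below are hypothetical stand-ins for a trained language model.

```python
# Minimal sketch of oracle length-forced decoding for +EOS vs -EOS models.
# `next_token_logits` is a hypothetical stub standing in for a trained LM.
import numpy as np

VOCAB = ["(", ")", "a", "b", "<eos>"]   # toy vocabulary; <eos> only exists for +EOS models
EOS_ID = VOCAB.index("<eos>")

rng = np.random.default_rng(0)

def next_token_logits(prefix, use_eos):
    """Hypothetical LM stub: returns one logit per vocabulary item."""
    size = len(VOCAB) if use_eos else len(VOCAB) - 1   # -EOS model has no <eos> entry
    return rng.normal(size=size)

def oracle_decode(target_len, use_eos):
    """Greedy decoding forced to produce exactly `target_len` tokens (the oracle setting)."""
    prefix = []
    for step in range(target_len):
        logits = next_token_logits(prefix, use_eos)
        if use_eos:
            logits[EOS_ID] = -np.inf   # oracle: forbid stopping before the target length
        prefix.append(VOCAB[int(np.argmax(logits))])
    return prefix

print("+EOS oracle sample:", oracle_decode(8, use_eos=True))
print("-EOS oracle sample:", oracle_decode(8, use_eos=False))
```

In this setup the only difference between the two conditions is whether an EOS symbol exists in the output distribution at all; the oracle simply masks it out for +EOS models until the target length is reached.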
Related papers
- Dataset Decomposition: Faster LLM Training with Variable Sequence Length Curriculum [30.46329559544246]
We introduce dataset decomposition, a novel variable sequence length training technique.
We train an 8k context-length 1B model at the same cost as a 2k context-length model trained with the baseline approach.
Experiments on a web-scale corpus demonstrate that our approach significantly enhances performance on standard language evaluations and long-context benchmarks.
arXiv Detail & Related papers (2024-05-21T22:26:01Z) - Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective [55.41815486466186]
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective [55.41815486466186]
Large Multimodal Models (LMMs) often suffer from multimodal hallucinations, wherein they create content that is not present in the visual inputs.
In this paper, we explore a new angle of this issue: overly detailed training data hinders the model's ability to timely terminate generation.
We find that the model assesses the completeness of the entire sequence by comparing the generated text with the image.
arXiv Detail & Related papers (2024-02-22T13:33:13Z) - Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images [63.91986621008751]
Large vision-language models (VLMs) have achieved exceptional performance across various multi-modal tasks.
In this paper, we aim to induce high energy-latency cost during inference of VLMs.
We propose verbose images, with the goal of crafting an imperceptible perturbation to induce VLMs to generate long sentences.
arXiv Detail & Related papers (2024-01-20T08:46:06Z) - DeepSpeed Ulysses: System Optimizations for Enabling Training of Extreme
Long Sequence Transformer Models [34.74093040678323]
We introduce DeepSpeed-Ulysses, a novel, portable and effective methodology for enabling highly efficient and scalable LLM training.
DeepSpeed-Ulysses at its core partitions input data along the sequence dimension and employs an efficient all-to-all collective communication for attention.
Experiments show that DeepSpeed-Ulysses trains 2.5x faster with a 4x longer sequence length than the existing SOTA baseline.
arXiv Detail & Related papers (2023-09-25T20:15:57Z) - LongNet: Scaling Transformers to 1,000,000,000 Tokens [146.4077038371075]
- LongNet: Scaling Transformers to 1,000,000,000 Tokens [146.4077038371075]
LongNet is a Transformer variant that can scale sequence length to more than 1 billion tokens.
Our work opens up new possibilities for modeling very long sequences, e.g., treating a whole corpus or even the entire Internet as a sequence.
arXiv Detail & Related papers (2023-07-05T17:59:38Z) - Sequence Length is a Domain: Length-based Overfitting in Transformer
Models [0.0]
In machine translation, neural systems perform worse on very long sequences than the preceding phrase-based translation approaches.
We show that the observed drop in performance is governed by whether the hypothesis length matches the lengths seen by the model during training, rather than by the length of the input sequence.
arXiv Detail & Related papers (2021-09-15T13:25:19Z) - Agile Earth observation satellite scheduling over 20 years:
formulations, methods and future directions [69.47531199609593]
Agile satellites with advanced attitude maneuvering capability are the new generation of Earth observation satellites (EOSs).
The continuous improvement in satellite technology and decrease in launch cost have boosted the development of agile EOSs (AEOSs).
arXiv Detail & Related papers (2020-03-13T09:38:40Z) - Lipreading using Temporal Convolutional Networks [57.41253104365274]
The current model for recognition of isolated words in-the-wild consists of a residual network and Bidirectional Gated Recurrent Unit layers.
We address the limitations of this model and propose changes that further improve its performance.
Our proposed model results in absolute improvements of 1.2% and 3.2%, respectively, on the two evaluated datasets.
arXiv Detail & Related papers (2020-01-23T17:49:35Z)