Next-token prediction capacity: general upper bounds and a lower bound for transformers
- URL: http://arxiv.org/abs/2405.13718v2
- Date: Tue, 17 Sep 2024 00:13:09 GMT
- Title: Next-token prediction capacity: general upper bounds and a lower bound for transformers
- Authors: Liam Madden, Curtis Fox, Christos Thrampoulidis
- Abstract summary: We prove upper and lower bounds, matching up to a multiplicative constant, on the number of distinct context sequences for which a decoder-only transformer can interpolate next-token distributions.
We provide numerical evidence that the minimal number of parameters for memorization suffices to train the model to the entropy lower bound.
- Score: 24.31928133575083
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given a sequence of tokens, such as words, the task of next-token prediction is to predict the next-token conditional probability distribution. Decoder-only transformers have become effective models for this task, but their properties are still not fully understood. In particular, the largest number of distinct context sequences that a decoder-only transformer can interpolate next-token distributions for has not been established. To fill this gap, we prove upper and lower bounds on this number, which are equal up to a multiplicative constant. We prove these bounds in the general setting where next-token distributions can be arbitrary as well as the empirical setting where they are calculated from a finite number of document sequences. Our lower bounds are for one-layer multi-head decoder-only transformers and our proofs highlight an important injectivity property satisfied by self-attention. Furthermore, we provide numerical evidence that the minimal number of parameters for memorization is sufficient for being able to train the model to the entropy lower bound.
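To make the abstract's empirical setting concrete, the following minimal sketch (Python, with a toy corpus that is purely illustrative and not from the paper) builds empirical next-token conditional distributions from a finite set of document sequences and computes the entropy of those conditionals, which is the lower bound that the cross-entropy training loss cannot go below.

```python
from collections import Counter, defaultdict
import math

# Toy corpus of document sequences (illustrative assumption, not from the paper).
documents = [
    ["the", "cat", "sat"],
    ["the", "cat", "ran"],
    ["the", "dog", "sat"],
]

# Empirical next-token conditional distributions p(next | context):
# count how often each token follows each distinct context prefix.
counts = defaultdict(Counter)
for doc in documents:
    for t in range(1, len(doc)):
        counts[tuple(doc[:t])][doc[t]] += 1

# Entropy lower bound on the cross-entropy loss: the average, over context
# occurrences, of the entropy of the empirical next-token distribution.
total = sum(sum(c.values()) for c in counts.values())
entropy_bound = 0.0
for context, c in counts.items():
    n = sum(c.values())
    h = -sum((k / n) * math.log(k / n) for k in c.values())
    entropy_bound += (n / total) * h

print(f"{len(counts)} distinct context sequences")
print(f"entropy lower bound on cross-entropy: {entropy_bound:.4f} nats")
```

A model with enough parameters to interpolate these conditionals, i.e., to memorize them, attains this bound exactly; this is the sense in which the numerical evidence mentioned above speaks of training the model to the entropy lower bound.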
Related papers
- Looking Beyond The Top-1: Transformers Determine Top Tokens In Order [13.032106683136394]
We analyze the computation performed by Transformers in the layers after the top-1 prediction has become fixed.
We find that these saturation events happen in order of the corresponding tokens' ranking.
We propose an underlying mechanism of task transition for this sequential saturation.
arXiv Detail & Related papers (2024-10-26T16:00:38Z)
- Transformers are Universal In-context Learners [21.513210412394965]
We show that deep transformers can approximate continuous in-context mappings to arbitrary precision, uniformly over compact token domains.
A key aspect of our results, compared to existing findings, is that for a fixed precision, a single transformer can operate on an arbitrary (even infinite) number of tokens.
arXiv Detail & Related papers (2024-08-02T16:21:48Z)
- Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion [61.03681839276652]
Diffusion Forcing is a new training paradigm where a diffusion model is trained to denoise a set of tokens with independent per-token noise levels.
We apply Diffusion Forcing to sequence generative modeling by training a causal next-token prediction model to generate one or several future tokens; a hedged sketch of the per-token noising step appears after this list.
arXiv Detail & Related papers (2024-07-01T15:43:25Z)
- Parallel Decoding via Hidden Transfer for Lossless Large Language Model Acceleration [54.897493351694195]
We propose a novel parallel decoding approach, namely hidden transfer, which decodes multiple successive tokens simultaneously in a single forward pass.
In terms of acceleration metrics, we outperform all the single-model acceleration techniques, including Medusa and Self-Speculative decoding.
arXiv Detail & Related papers (2024-04-18T09:17:06Z)
- σ-GPTs: A New Approach to Autoregressive Models [19.84252724050016]
We show that by simply adding a positional encoding for the output, this order can be modulated on-the-fly per-sample.
We evaluate our method across various domains, including language modeling, path-solving, and aircraft vertical rate prediction.
arXiv Detail & Related papers (2024-04-15T08:22:47Z)
- Predicting generalization performance with correctness discriminators [64.00420578048855]
We present a novel model that establishes upper and lower bounds on the accuracy, without requiring gold labels for the unseen data.
We show across a variety of tagging, parsing, and semantic parsing tasks that the gold accuracy is reliably between the predicted upper and lower bounds.
arXiv Detail & Related papers (2023-11-15T22:43:42Z)
- An Attribution Method for Siamese Encoders [2.1163800956183776]
This paper derives a local attribution method for Siamese encoders by generalizing the principle of integrated gradients to models with multiple inputs.
A pilot study shows that, in a sentence transformer (ST), a few token pairs can often explain large fractions of predictions, and that the model focuses on nouns and verbs.
arXiv Detail & Related papers (2023-10-09T13:24:44Z)
- Universality and Limitations of Prompt Tuning [65.8354898840308]
We take one of the first steps to understand the role of soft-prompt tuning for transformer-based architectures.
We analyze prompt tuning through the lens of universality and limitations with finite-depth pretrained transformers for continuous-valued functions.
Our result guarantees the existence of a strong transformer with a prompt to approximate any sequence-to-sequence function in the set of Lipschitz functions.
arXiv Detail & Related papers (2023-05-30T06:47:07Z)
- Addressing Token Uniformity in Transformers via Singular Value Transformation [24.039280291845706]
Token uniformity is commonly observed in transformer-based models.
We show that a less skewed singular value distribution can alleviate the 'token uniformity' problem.
arXiv Detail & Related papers (2022-08-24T22:44:09Z)
- Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing [112.2208052057002]
We propose Funnel-Transformer which gradually compresses the sequence of hidden states to a shorter one.
With comparable or fewer FLOPs, Funnel-Transformer outperforms the standard Transformer on a wide variety of sequence-level prediction tasks.
arXiv Detail & Related papers (2020-06-05T05:16:23Z)
- Addressing Some Limitations of Transformers with Feedback Memory [51.94640029417114]
Transformers have been successfully applied to sequential, auto-regressive tasks despite being feedforward networks.
We propose the Feedback Transformer architecture that exposes all previous representations to all future representations.
We demonstrate on a variety of benchmarks in language modeling, machine translation, and reinforcement learning that the increased representation capacity can create small, shallow models with much stronger performance than comparable Transformers.
arXiv Detail & Related papers (2020-02-21T16:37:57Z)
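As noted in the Diffusion Forcing entry above, the defining step of that training paradigm is corrupting each token with an independently sampled noise level. The sketch below illustrates only that per-token noising step; the NumPy shapes, the variance-preserving schedule, and the zero-predictor placeholder are assumptions made for illustration, not details taken from the Diffusion Forcing paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and schedule (assumptions, not values from the paper).
seq_len, dim, num_levels = 8, 16, 1000

# A sequence of continuous token representations (e.g., embeddings).
x0 = rng.standard_normal((seq_len, dim))

# Independent per-token noise levels: each position draws its own timestep
# rather than the whole sequence sharing a single level.
t = rng.integers(0, num_levels, size=seq_len)

# An assumed variance-preserving schedule (a common DDPM-style choice).
betas = np.linspace(1e-4, 2e-2, num_levels)
alpha_bars = np.cumprod(1.0 - betas)

# Corrupt each token according to its own noise level.
eps = rng.standard_normal((seq_len, dim))
x_t = np.sqrt(alpha_bars[t])[:, None] * x0 + np.sqrt(1.0 - alpha_bars[t])[:, None] * eps

# A causal denoiser would be trained to predict eps (or x0) at every position
# from the noisy prefix x_t[: pos + 1] and the noise levels t; a zero predictor
# stands in here just to show the per-token objective.
pred_eps = np.zeros_like(eps)
loss = np.mean((pred_eps - eps) ** 2)
print(f"noisy tokens: {x_t.shape}, toy per-token denoising loss: {loss:.3f}")
```

In an actual implementation, the placeholder prediction would come from a causal sequence model conditioned on the noisy tokens and their per-token noise levels.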