A Logic for Expressing Log-Precision Transformers
- URL: http://arxiv.org/abs/2210.02671v6
- Date: Mon, 6 Nov 2023 21:24:57 GMT
- Title: A Logic for Expressing Log-Precision Transformers
- Authors: William Merrill and Ashish Sabharwal
- Abstract summary: We show that any log-precision transformer can be equivalently expressed as a first-order logic sentence.
This is the tightest known upper bound and first logical characterization of log-precision transformers.
- Score: 35.25166532364007
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One way to interpret the reasoning power of transformer-based language models
is to describe the types of logical rules they can resolve over some input
text. Recently, Chiang et al. (2023) showed that finite-precision transformers
can be equivalently expressed in a generalization of first-order logic.
However, finite-precision transformers are a weak transformer variant because,
as we show, a single head can only attend to a constant number of tokens and,
in particular, cannot represent uniform attention. Since attending broadly is a
core capability for transformers, we ask whether a minimally more expressive
model that can attend universally can also be characterized in logic. To this
end, we analyze transformers whose forward pass is computed in $\log n$
precision on contexts of length $n$. We prove that any log-precision
transformer can be equivalently expressed as a first-order logic sentence that,
in addition to standard universal and existential quantifiers, may also contain
majority-vote quantifiers. This is the tightest known upper bound and first
logical characterization of log-precision transformers.
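  As a brief illustration of the target logic (a sketch using the standard word-model convention, with $Q_\sigma(i)$ read as "position $i$ holds symbol $\sigma$"; the concrete sentence is illustrative, not quoted from the paper): the majority-vote quantifier $\mathsf{M}$ holds when its body is true at more than half of the positions, so the single sentence
  $\mathsf{M}\, i.\; Q_a(i)$
  defines the MAJORITY language ("more than half of the input tokens are $a$"), which ordinary $\forall$/$\exists$ first-order logic over position order cannot express. The paper's result states that every language recognized by a log-precision transformer is definable by some sentence in this first-order-plus-majority logic.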
Related papers
- Extracting Finite State Machines from Transformers [0.3069335774032178]
We investigate the trainability of transformers trained on regular languages from a mechanistic interpretability perspective.
We empirically find tighter lower bounds on the trainability of transformers when a finite number of symbols determine the state.
Our mechanistic insight allows us to characterise the regular languages a one-layer transformer can learn with good length generalisation.
arXiv Detail & Related papers (2024-10-08T13:43:50Z) - Can Transformers Learn $n$-gram Language Models? [77.35809823602307]
We study transformers' ability to learn random $n$-gram LMs of two kinds.
We find that classic estimation techniques for $n$-gram LMs such as add-$\lambda$ smoothing outperform transformers.
arXiv Detail & Related papers (2024-10-03T21:21:02Z) - On the Expressive Power of a Variant of the Looped Transformer [83.30272757948829]
We design a novel transformer block, dubbed AlgoFormer, to empower transformers with algorithmic capabilities.
The proposed AlgoFormer can achieve significantly higher expressiveness in algorithm representation when using the same number of parameters.
Some theoretical and empirical results are presented to show that the designed transformer has the potential to be smarter than human-designed algorithms.
arXiv Detail & Related papers (2024-02-21T07:07:54Z) - The Expressive Power of Transformers with Chain of Thought [29.839710738657203]
In practice, transformers can be improved by allowing them to use a "chain of thought" or "scratchpad", i.e., to generate intermediate tokens before answering, raising the question of whether such intermediate generation fundamentally extends a transformer's computational power.
We show that the answer is yes, but the amount of increase depends crucially on the amount of intermediate generation.
Our results also imply that linear steps keep transformer decoders within context-sensitive languages.
arXiv Detail & Related papers (2023-10-11T22:35:18Z) - Tighter Bounds on the Expressivity of Transformer Encoders [9.974865253097127]
We identify a variant of first-order logic with counting quantifiers that is simultaneously an upper bound for fixed-precision transformer encoders and a lower bound for transformer encoders.
This brings us much closer than before to an exact characterization of the languages that transformer encoders recognize.
arXiv Detail & Related papers (2023-01-25T18:05:55Z) - The Parallelism Tradeoff: Limitations of Log-Precision Transformers [29.716269397142973]
We prove that transformers whose arithmetic precision is logarithmic in the number of input tokens can be simulated by constant-depth logspace-uniform threshold circuits.
This provides insight on the power of transformers using known results in complexity theory.
arXiv Detail & Related papers (2022-07-02T03:49:34Z) - On the Power of Saturated Transformers: A View from Circuit Complexity [87.20342701232869]
We show that saturated transformers transcend the limitations of hard-attention transformers.
The jump from hard to saturated attention can be understood as increasing the transformer's effective circuit depth by a factor of $O(\log n)$.
arXiv Detail & Related papers (2021-06-30T17:09:47Z) - Scalable Transformers for Neural Machine Translation [86.4530299266897]
Transformer has been widely adopted in Neural Machine Translation (NMT) because of its large capacity and parallel training of sequence generation.
We propose novel scalable Transformers, which naturally contain sub-Transformers of different scales and have shared parameters.
A three-stage training scheme is proposed to tackle the difficulty of training the scalable Transformers.
arXiv Detail & Related papers (2021-06-04T04:04:10Z) - Segatron: Segment-Aware Transformer for Language Modeling and Understanding [79.84562707201323]
We propose a segment-aware Transformer (Segatron) to generate better contextual representations from sequential tokens.
We first introduce the segment-aware mechanism to Transformer-XL, which is a popular Transformer-based language model.
We find that our method can further improve the Transformer-XL base model and large model, achieving 17.1 perplexity on the WikiText-103 dataset.
arXiv Detail & Related papers (2020-04-30T17:38:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.