On Limitations of the Transformer Architecture
- URL: http://arxiv.org/abs/2402.08164v2
- Date: Mon, 26 Feb 2024 22:12:37 GMT
- Title: On Limitations of the Transformer Architecture
- Authors: Binghui Peng, Srini Narayanan, Christos Papadimitriou
- Abstract summary: We show that the Transformer layer is incapable of composing functions if the domains of the functions are large enough.
We also point out that several mathematical tasks that are at the core of the so-called compositional tasks thought to be hard for LLMs are unlikely to be solvable by Transformers.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: What are the root causes of hallucinations in large language models (LLMs)?
We use Communication Complexity to prove that the Transformer layer is
incapable of composing functions (e.g., identify a grandparent of a person in a
genealogy) if the domains of the functions are large enough; we show through
examples that this inability is already empirically present when the domains
are quite small. We also point out that several mathematical tasks that are at
the core of the so-called compositional tasks thought to be hard for LLMs are
unlikely to be solvable by Transformers, for large enough instances and
assuming that certain well-accepted conjectures in the field of Computational
Complexity are true.
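
To make the notion of "composing functions" concrete, the following is a minimal sketch (not from the paper; the toy genealogy and helper names are hypothetical) of the two-hop grandparent query the abstract uses as an example: answering grandparent(x) requires applying the parent-of function twice, i.e., computing f(f(x)).

```python
# Minimal sketch of the two-hop composition task from the abstract.
# The data and function names below are illustrative assumptions,
# not code or notation from the paper.

# A toy genealogy: parent_of[child] = parent (one parent per person
# for simplicity).
parent_of = {
    "alice": "bob",
    "bob": "carol",
    "carol": "dave",
}

def parent(person):
    """One application of the relation: f(x)."""
    return parent_of.get(person)

def grandparent(person):
    """The composed query f(f(x)); the paper argues a single Transformer
    layer cannot reliably compute such compositions once the domain of
    the relation is large enough."""
    p = parent(person)
    return parent(p) if p is not None else None

if __name__ == "__main__":
    print(grandparent("alice"))  # -> "carol"
    print(grandparent("carol"))  # -> None (no grandparent recorded)
```

A symbolic lookup like this is trivial in code; the paper's point is that the attention mechanism of one Transformer layer lacks the communication capacity to perform the second hop when the set of possible names grows large.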