In-Context Language Learning: Architectures and Algorithms
- URL: http://arxiv.org/abs/2401.12973v2
- Date: Tue, 30 Jan 2024 18:59:34 GMT
- Title: In-Context Language Learning: Architectures and Algorithms
- Authors: Ekin Akyürek, Bailin Wang, Yoon Kim, Jacob Andreas
- Abstract summary: We study ICL through the lens of a new family of model problems we term in-context language learning (ICLL).
We evaluate a diverse set of neural sequence models on regular ICLL tasks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large-scale neural language models exhibit a remarkable capacity for
in-context learning (ICL): they can infer novel functions from datasets
provided as input. Most of our current understanding of when and how ICL arises
comes from LMs trained on extremely simple learning problems like linear
regression and associative recall. There remains a significant gap between
these model problems and the "real" ICL exhibited by LMs trained on large text
corpora, which involves not just retrieval and function approximation but
free-form generation of language and other structured outputs. In this paper,
we study ICL through the lens of a new family of model problems we term
in-context language learning (ICLL). In ICLL, LMs are presented with a set of
strings from a formal language, and must generate additional strings from the
same language. We focus on in-context learning of regular languages generated
by random finite automata. We evaluate a diverse set of neural sequence models
(including several RNNs, Transformers, and state-space model variants) on
regular ICLL tasks, aiming to answer three questions: (1) Which model classes
are empirically capable of ICLL? (2) What algorithmic solutions do successful
models implement to perform ICLL? (3) What architectural changes can improve
ICLL in less performant models? We first show that Transformers significantly
outperform neural sequence models with recurrent or convolutional
representations on ICLL tasks. Next, we provide evidence that their ability to
do so relies on specialized "n-gram heads" (higher-order variants of induction
heads) that compute input-conditional next-token distributions. Finally, we
show that hard-wiring these heads into neural models improves performance not
just on ICLL but also on natural language modeling, improving the perplexity of
340M-parameter models by up to 1.14 points (6.7%) on the SlimPajama dataset.
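
To make the ICLL setup concrete, the sketch below samples a random DFA, draws example strings from its language to form an in-context prompt, and computes the empirical in-context bigram statistic that an "n-gram head" (as described above) would use to predict the next token. The helper names, delimiter, and sampling details are illustrative assumptions, not the paper's actual experimental code.

```python
import random
from collections import Counter

def random_dfa(n_states=4, alphabet="abc", seed=0):
    """Sample a random deterministic finite automaton (transition table + accepting states)."""
    rng = random.Random(seed)
    delta = {(s, a): rng.randrange(n_states) for s in range(n_states) for a in alphabet}
    accept = {s for s in range(n_states) if rng.random() < 0.5} or {0}
    return delta, accept, alphabet

def sample_string(dfa, rng, max_len=12):
    """Emit symbols by walking the DFA from state 0, stopping early in accepting states."""
    delta, accept, alphabet = dfa
    state, chars = 0, []
    for _ in range(max_len):
        a = rng.choice(alphabet)
        chars.append(a)
        state = delta[(state, a)]
        if state in accept and rng.random() < 0.3:
            break
    return "".join(chars)

def icll_prompt(dfa, n_examples=8, seed=1):
    """One ICLL instance: strings from a single random regular language, joined by a
    delimiter; the model is asked to continue with further strings from that language."""
    rng = random.Random(seed)
    return "|".join(sample_string(dfa, rng) for _ in range(n_examples)) + "|"

def ngram_head_prediction(prompt, n=2):
    """The statistic an 'n-gram head' approximates: the empirical next-token distribution
    conditioned on the last (n-1) tokens, estimated from earlier occurrences of that same
    (n-1)-gram in the prompt (n >= 2)."""
    context = tuple(prompt[-(n - 1):])            # suffix the head matches against
    counts = Counter(
        prompt[i + n - 1]                         # token that followed a matching suffix
        for i in range(len(prompt) - n + 1)       # positions whose next token is known
        if tuple(prompt[i:i + n - 1]) == context
    )
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()} if total else {}

dfa = random_dfa(seed=3)
prompt = icll_prompt(dfa)
print(prompt)                                      # strings from one random regular language
print(ngram_head_prediction(prompt, n=2))          # in-context bigram prediction for the next token
```

For n = 2 this reduces to an induction-head-style lookup on the previous token; larger n conditions on longer suffixes, which is the sense in which these heads are higher-order variants of induction heads.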
Related papers
- Scaling Laws for Linear Complexity Language Models (arXiv, 2024-06-24)
  We present scaling laws for linear complexity language models to establish a foundation for their scalability.
  The study reveals that existing linear complexity language models exhibit scaling capabilities similar to those of conventional Transformer-based models.
- Brain-Like Language Processing via a Shallow Untrained Multihead Attention Network (arXiv, 2024-06-21)
  Large language models (LLMs) have been shown to be effective models of the human language system.
  In this work, we investigate the key architectural components driving the surprising alignment of untrained models.
- CodeGen2: Lessons for Training LLMs on Programming and Natural Languages (arXiv, 2023-05-03)
  We unify encoder- and decoder-based models into a single prefix-LM.
  For learning methods, we explore the claim of a "free lunch" hypothesis.
  For data distributions, we explore the effect of a mixture distribution and of multi-epoch training on programming and natural languages on model performance.
- A Survey of Large Language Models (arXiv, 2023-03-31)
  Language modeling has been widely studied for language understanding and generation over the past two decades.
  Recently, pre-trained language models (PLMs) have been proposed by pre-training Transformer models over large-scale corpora.
  To distinguish models by parameter scale, the research community has coined the term large language models (LLMs) for PLMs of significant size.
- Pre-Training a Graph Recurrent Network for Language Representation (arXiv, 2022-09-08)
  We consider a graph recurrent network for language model pre-training, which builds a graph structure for each sequence with local token-level communications.
  We find that our model can generate more diverse outputs with less contextualized feature redundancy than existing attention-based models.
- Dependency-based Mixture Language Models (arXiv, 2022-03-19)
  We introduce the Dependency-based Mixture Language Models.
  Specifically, we first train neural language models with a novel dependency modeling objective.
  We then formulate the next-token probability by mixing the previous dependency modeling probability distributions with self-attention (see the sketch after this list).
- Pre-Trained Language Models for Interactive Decision-Making (arXiv, 2022-02-03)
  We describe a framework for imitation learning in which goals and observations are represented as a sequence of embeddings.
  We demonstrate that this framework enables effective generalization across different environments.
  For test tasks involving novel goals or novel scenes, initializing policies with language models improves task completion rates by 43.6%.
- Logical Natural Language Generation from Open-Domain Tables (arXiv, 2020-04-22)
  We propose a new task in which a model must generate natural language statements that can be logically entailed by the given facts.
  To facilitate the study of the proposed logical NLG problem, we use the existing TabFact dataset (Chen et al., 2019), which features a wide range of logical/symbolic inferences.
  The new task poses challenges to existing monotonic generation frameworks due to the mismatch between sequence order and logical order.
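
As a side note on the Dependency-based Mixture Language Models entry above, the snippet below is one hedged reading of "mixing probability distributions with self-attention": a set of per-position next-token distributions is averaged under attention-derived weights. Shapes, names, and the random inputs here are illustrative assumptions, not that paper's implementation.

```python
import torch
import torch.nn.functional as F

def mixture_next_token_probs(component_probs, attn_logits):
    """component_probs: (T, V) next-token distribution contributed by each of T context
    positions; attn_logits: (T,) unnormalized attention scores over those positions.
    Returns a single (V,) attention-weighted mixture distribution."""
    weights = F.softmax(attn_logits, dim=-1)   # (T,) mixture weights, sum to 1
    return weights @ component_probs           # (V,) convex combination of distributions

# Toy usage with random placeholders for the component distributions and scores.
T, V = 6, 50                                   # context length and vocabulary size
component_probs = F.softmax(torch.randn(T, V), dim=-1)
attn_logits = torch.randn(T)
p_next = mixture_next_token_probs(component_probs, attn_logits)
assert abs(p_next.sum().item() - 1.0) < 1e-5   # a mixture of distributions is a distribution
```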