Testing the Predictions of Surprisal Theory in 11 Languages
- URL: http://arxiv.org/abs/2307.03667v3
- Date: Tue, 10 Sep 2024 19:29:48 GMT
- Title: Testing the Predictions of Surprisal Theory in 11 Languages
- Authors: Ethan Gotlieb Wilcox, Tiago Pimentel, Clara Meister, Ryan Cotterell, Roger P. Levy
- Abstract summary: We investigate the relationship between surprisal and reading times in eleven different languages.
By focusing on a more diverse set of languages, we argue that these results offer the most robust link to date between information theory and incremental language processing across languages.
- Score: 77.45204595614
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A fundamental result in psycholinguistics is that less predictable words take a longer time to process. One theoretical explanation for this finding is Surprisal Theory (Hale, 2001; Levy, 2008), which quantifies a word's predictability as its surprisal, i.e. its negative log-probability given a context. While evidence supporting the predictions of Surprisal Theory has been replicated widely, most studies have focused on a very narrow slice of data: native English speakers reading English texts. Indeed, no comprehensive multilingual analysis exists. We address this gap in the current literature by investigating the relationship between surprisal and reading times in eleven different languages, distributed across five language families. Deriving estimates from language models trained on monolingual and multilingual corpora, we test three predictions associated with Surprisal Theory: (i) whether surprisal is predictive of reading times; (ii) whether expected surprisal, i.e. contextual entropy, is predictive of reading times; and (iii) whether the linking function between surprisal and reading times is linear. We find that all three predictions are borne out crosslinguistically. By focusing on a more diverse set of languages, we argue that these results offer the most robust link to date between information theory and incremental language processing across languages.
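For readers who want to see how the three quantities under test can be derived in practice, the sketch below shows one minimal way to compute per-token surprisal and contextual entropy from an off-the-shelf causal language model, and notes how the linear linking function can be checked. This is an illustration only, not the authors' pipeline: the paper uses models trained on monolingual and multilingual corpora, whereas this sketch assumes the HuggingFace `transformers` library and the English `gpt2` checkpoint, and the helper name is invented.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed, illustrative model choice; the paper's own models differ.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def token_surprisal_and_entropy(text):
    """Per-token surprisal and contextual entropy, both in bits."""
    ids = tokenizer(text, return_tensors="pt").input_ids        # (1, n)
    with torch.no_grad():
        logits = model(ids).logits                              # (1, n, vocab)
    log_p = torch.log_softmax(logits[0, :-1], dim=-1)           # predictions for tokens 2..n
    ln2 = torch.log(torch.tensor(2.0))
    # (i) surprisal: negative log-probability of the token that actually occurred
    surprisal = -log_p.gather(1, ids[0, 1:, None]).squeeze(1) / ln2
    # (ii) contextual entropy: expected surprisal over the full vocabulary
    entropy = -(log_p.exp() * log_p).sum(-1) / ln2
    tokens = tokenizer.convert_ids_to_tokens(ids[0, 1:].tolist())
    return tokens, surprisal, entropy

tokens, s, h = token_surprisal_and_entropy("The cat sat on the mat.")
for tok, s_i, h_i in zip(tokens, s.tolist(), h.tolist()):
    print(f"{tok:>10s}  surprisal={s_i:5.2f}  entropy={h_i:5.2f}")

# (iii) Linear linking function: with paired reading times, an ordinary
# least-squares fit  RT ~ alpha + beta * surprisal  probes the prediction, e.g.
#   import numpy as np
#   beta, alpha = np.polyfit(s.numpy(), reading_times, deg=1)
```

Note that word-level surprisal is the sum of the surprisals of a word's subword tokens; aggregating tokens into words, and the choice of regression controls, follow the paper's own procedures rather than this sketch.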
Related papers
- On the Proper Treatment of Tokenization in Psycholinguistics [53.960910019072436]
The paper argues that token-level language models should be marginalized into character-level language models before they are used in psycholinguistic studies.
We find various focal areas whose surprisal is a better psychometric predictor than the surprisal of the region of interest itself.
arXiv Detail & Related papers (2024-10-03T17:18:03Z)
- On the Role of Context in Reading Time Prediction [50.87306355705826]
We present a new perspective on how readers integrate context during real-time language comprehension.
Our proposals build on surprisal theory, which posits that the processing effort of a linguistic unit is an affine function of its in-context information content.
arXiv Detail & Related papers (2024-09-12T15:52:22Z)
- SPAWNing Structural Priming Predictions from a Cognitively Motivated Parser [20.83447226847776]
We use SPAWN to generate priming predictions from two theories about the structure of English relative clauses.
We found that while some of the predictions from the Participial-Phase theory aligned with human behavior, none of the predictions from the Whiz-Deletion theory did.
arXiv Detail & Related papers (2024-03-11T22:58:58Z)
- On the Effect of Anticipation on Reading Times [84.27103313675342]
We operationalize anticipation as a word's contextual entropy.
We find substantial evidence for effects of contextual entropy over and above surprisal on a word's reading time.
arXiv Detail & Related papers (2022-11-25T18:58:23Z)
- Syntactic Surprisal From Neural Models Predicts, But Underestimates, Human Processing Difficulty From Syntactic Ambiguities [19.659811811023374]
We propose a method for estimating syntactic predictability from a language model.
We find that treating syntactic predictability independently from lexical predictability indeed results in larger estimates of garden path effects.
Our results support the hypothesis that predictability is not the only factor responsible for the processing cost associated with garden path sentences.
arXiv Detail & Related papers (2022-10-21T18:30:56Z)
- Long-range and hierarchical language predictions in brains and algorithms [82.81964713263483]
We show that while deep language algorithms are optimized to predict adjacent words, the human brain appears to be tuned to make long-range and hierarchical predictions.
This study strengthens predictive coding theory and suggests a critical role of long-range and hierarchical predictions in natural language processing.
arXiv Detail & Related papers (2021-11-28T20:26:07Z)
- Revisiting the Uniform Information Density Hypothesis [44.277066511088634]
We investigate the uniform information density (UID) hypothesis using reading time and acceptability data.
For acceptability judgments, we find clearer evidence that non-uniformity in information density is predictive of lower acceptability.
arXiv Detail & Related papers (2021-09-23T20:41:47Z)
- You Can Do Better! If You Elaborate the Reason When Making Prediction [13.658942796267015]
This paper proposes a novel neural predictive framework coupled with large pre-trained language models to make a prediction and generate its corresponding explanation simultaneously.
We conducted a preliminary empirical study on Chinese medical multiple-choice question answering, English natural language inference and commonsense question answering tasks.
The proposed method also achieves improved prediction accuracy on three datasets, which indicates that prediction can benefit from generating an explanation as part of the decision process.
arXiv Detail & Related papers (2021-03-27T14:55:19Z)
- Multi-timescale Representation Learning in LSTM Language Models [69.98840820213937]
Language models must capture statistical dependencies between words at timescales ranging from very short to very long.
We derived a theory for how the memory gating mechanism in long short-term memory language models can capture power law decay.
Experiments showed that LSTM language models trained on natural English text learn to approximate this theoretical distribution.
arXiv Detail & Related papers (2020-09-27T02:13:38Z)
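The last entry turns on a piece of arithmetic that is easy to verify directly: a cell whose forget gate is held at a constant value f (with no new input) retains its state as f^t, an exponential decay with characteristic timescale T = -1/ln(f), and a population of units whose decay rates are spread over a heavy-tailed range mixes those exponentials into an approximately power-law memory curve. The snippet below is a toy numerical check of that mixture argument, not the authors' code; the gamma-distributed rates and the parameter values are assumptions chosen only because the mixture then has a closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed parameters for the illustration (not from the paper).
alpha, beta = 1.5, 5.0

# Per-unit decay rates r = 1/T drawn from a gamma distribution; the
# corresponding constant forget gates are f = exp(-r), i.e. T = -1/ln(f).
rates = rng.gamma(shape=alpha, scale=1.0 / beta, size=20_000)
forget_gates = np.exp(-rates)

t = np.arange(1, 2001)
# Aggregate memory = average over units of the exponential decays f**t.
aggregate = np.array([np.mean(forget_gates ** step) for step in t])

# In expectation the mixture equals (1 + t/beta)**(-alpha): a power law with
# exponent -alpha once t >> beta, unlike any single exponential unit.
closed_form = (1.0 + t / beta) ** (-alpha)
print("max |aggregate - closed form|:", np.max(np.abs(aggregate - closed_form)))
```

The printed difference reflects sampling error only; on log-log axes the aggregate curve is close to a straight line over this range, whereas each individual unit's decay bends sharply downward.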