On the Effect of Anticipation on Reading Times
- URL: http://arxiv.org/abs/2211.14301v2
- Date: Fri, 14 Jul 2023 09:09:46 GMT
- Title: On the Effect of Anticipation on Reading Times
- Authors: Tiago Pimentel, Clara Meister, Ethan G. Wilcox, Roger Levy, Ryan
Cotterell
- Abstract summary: We operationalize anticipation as a word's contextual entropy.
We find substantial evidence for effects of contextual entropy over surprisal on a word's reading time.
- Score: 84.27103313675342
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over the past two decades, numerous studies have demonstrated how less
predictable (i.e., higher surprisal) words take more time to read. In general,
these studies have implicitly assumed the reading process is purely responsive:
Readers observe a new word and allocate time to process it as required. We
argue that prior results are also compatible with a reading process that is at
least partially anticipatory: Readers could make predictions about a future
word and allocate time to process it based on their expectation. In this work,
we operationalize this anticipation as a word's contextual entropy. We assess
the effect of anticipation on reading by comparing how well surprisal and
contextual entropy predict reading times on four naturalistic reading datasets:
two self-paced and two eye-tracking. Experimentally, across datasets and
analyses, we find substantial evidence for effects of contextual entropy over
surprisal on a word's reading time (RT): in fact, entropy is sometimes better
than surprisal in predicting a word's RT. Spillover effects, however, are
generally not captured by entropy, but only by surprisal. Further, we
hypothesize four cognitive mechanisms through which contextual entropy could
impact RTs -- three of which we design experiments to analyze.
Overall, our results support a view of reading that is not just responsive, but
also anticipatory.
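The two predictors compared in the abstract can be made concrete with a small sketch: per-word surprisal, -log p(w_t | w_<t), and contextual entropy, H(W_t | w_<t), computed from a causal language model. The snippet below is a minimal illustration assuming GPT-2 via the Hugging Face transformers library; it is not the authors' exact pipeline or model choice.

```python
# Minimal sketch (not the authors' released code): per-word surprisal and
# contextual entropy from a causal language model. GPT-2 is an illustrative
# choice of model, used here only as an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

sentence = "The cat sat on the mat"
enc = tokenizer(sentence, return_tensors="pt")
input_ids = enc["input_ids"][0]

with torch.no_grad():
    logits = model(enc["input_ids"]).logits[0]  # (seq_len, vocab_size)

log_probs = torch.log_softmax(logits, dim=-1)
probs = log_probs.exp()

for t in range(1, input_ids.size(0)):
    # Surprisal of the observed token: -log p(w_t | w_<t)
    surprisal = -log_probs[t - 1, input_ids[t]].item()
    # Contextual entropy over the upcoming position: H(W_t | w_<t)
    entropy = -(probs[t - 1] * log_probs[t - 1]).sum().item()
    print(f"{tokenizer.decode(input_ids[t])!r}: "
          f"surprisal={surprisal:.2f} nats, entropy={entropy:.2f} nats")
```

In analyses of this kind, such per-word estimates are then entered as predictors of reading times (typically together with spillover terms from preceding words) in regression models.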
Related papers
- On the Role of Context in Reading Time Prediction [50.87306355705826]
We present a new perspective on how readers integrate context during real-time language comprehension.
Our proposals build on surprisal theory, which posits that the processing effort of a linguistic unit is an affine function of its in-context information content.
arXiv Detail & Related papers (2024-09-12T15:52:22Z)
- Language models emulate certain cognitive profiles: An investigation of how predictability measures interact with individual differences [1.942809872918085]
We revisit the predictive power of surprisal and entropy measures estimated from a range of language models (LMs) on data of human reading times.
We investigate whether modulating surprisal and entropy relative to cognitive scores increases the prediction accuracy of reading times.
Our study finds that in most cases, incorporating cognitive capacities increases predictive power of surprisal and entropy on reading times.
arXiv Detail & Related papers (2024-06-07T14:54:56Z)
- Temperature-scaling surprisal estimates improve fit to human reading times -- but does it do so for the "right reasons"? [15.773775387121097]
We show that calibration of large language models typically improves with model size.
We find that temperature-scaling probabilities lead to a systematically better fit to reading times; a toy sketch of temperature scaling appears after this list.
arXiv Detail & Related papers (2023-11-15T19:34:06Z)
- Performative Time-Series Forecasting [71.18553214204978]
We formalize performative time-series forecasting (PeTS) from a machine-learning perspective.
We propose a novel approach, Feature Performative-Shifting (FPS), which leverages the concept of delayed response to anticipate distribution shifts.
We conduct comprehensive experiments using multiple time-series models on COVID-19 and traffic forecasting tasks.
arXiv Detail & Related papers (2023-10-09T18:34:29Z)
- Testing the Predictions of Surprisal Theory in 11 Languages [77.45204595614]
We investigate the relationship between surprisal and reading times in eleven different languages.
By focusing on a more diverse set of languages, we argue that these results offer the most robust link to date between information theory and incremental language processing across languages.
arXiv Detail & Related papers (2023-07-07T15:37:50Z)
- Analyzing Wrap-Up Effects through an Information-Theoretic Lens [96.02309964375983]
This work examines the relationship between wrap-up effects and information-theoretic quantities.
We find that the distribution of information in prior contexts is often predictive of sentence- and clause-final RTs.
arXiv Detail & Related papers (2022-03-31T17:41:03Z)
- Predicting MOOCs Dropout Using Only Two Easily Obtainable Features from the First Week's Activities [56.1344233010643]
Several features are considered to contribute towards learner attrition or lack of interest, which may lead to disengagement or total dropout.
This study aims to predict dropout early on, from the first week, by comparing several machine-learning approaches.
arXiv Detail & Related papers (2020-08-12T10:44:49Z)
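As a side note on the temperature-scaling result cited above, the toy sketch below shows the general technique: logits are divided by a temperature T before the softmax, and surprisal is computed from the rescaled distribution. This is only an illustration of the idea, with made-up logits; it is not the cited paper's implementation.

```python
# Illustrative only: temperature-scaled surprisal from raw logits.
import torch

def temperature_scaled_surprisal(logits: torch.Tensor, token_id: int, temperature: float) -> float:
    """Return -log p_T(token_id), where p_T = softmax(logits / temperature)."""
    log_probs = torch.log_softmax(logits / temperature, dim=-1)
    return -log_probs[token_id].item()

# Made-up logits for a four-word vocabulary; higher temperature flattens the
# distribution, raising surprisal of likely words and lowering it for unlikely ones.
logits = torch.tensor([4.0, 2.0, 0.5, 0.0])
for T in (0.5, 1.0, 2.0):
    print(T, round(temperature_scaled_surprisal(logits, token_id=1, temperature=T), 3))
```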
This list is automatically generated from the titles and abstracts of the papers in this site.