Analyzing Wrap-Up Effects through an Information-Theoretic Lens
- URL: http://arxiv.org/abs/2203.17213v2
- Date: Fri, 5 Jan 2024 16:10:15 GMT
- Title: Analyzing Wrap-Up Effects through an Information-Theoretic Lens
- Authors: Clara Meister and Tiago Pimentel and Thomas Hikaru Clark and Ryan Cotterell and Roger Levy
- Abstract summary: This work examines the relationship between wrap-up effects and information-theoretic quantities.
We find that the distribution of information in prior contexts is often predictive of sentence- and clause-final RTs.
- Score: 96.02309964375983
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Numerous analyses of reading time (RT) data have been implemented -- all in
an effort to better understand the cognitive processes driving reading
comprehension. However, data measured on words at the end of a sentence -- or
even at the end of a clause -- is often omitted due to the confounding factors
introduced by so-called "wrap-up effects," which manifest as a skewed
distribution of RTs for these words. Consequently, the understanding of the
cognitive processes that might be involved in these wrap-up effects is limited.
In this work, we attempt to learn more about these processes by examining the
relationship between wrap-up effects and information-theoretic quantities, such
as word and context surprisals. We find that the distribution of information in
prior contexts is often predictive of sentence- and clause-final RTs (while not
of sentence-medial RTs). This lends support to several prior hypotheses about
the processes involved in wrap-up effects.
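The central quantity throughout is surprisal: a word's negative log-probability under a language model, given its preceding context. Below is a minimal sketch of computing per-token surprisals, plus simple summary statistics over the prior context in the spirit of the abstract's "distribution of information." GPT-2 via Hugging Face transformers is an assumed stand-in; the paper's own models and reading-time corpora may differ.
```python
# Minimal sketch: per-token surprisal under a pretrained LM. GPT-2 is an
# assumed stand-in; the paper's own models and RT corpora may differ.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_surprisals(text: str):
    """(token, surprisal in bits) for every token after the first."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)  # row t predicts token t+1
    targets = ids[0, 1:]
    surp = -log_probs[torch.arange(targets.size(0)), targets] / torch.log(torch.tensor(2.0))
    return list(zip(tokenizer.convert_ids_to_tokens(targets.tolist()), surp.tolist()))

surps = token_surprisals("The cat sat on the mat because it was tired.")
for tok, s in surps:
    print(f"{tok:>12s}  {s:5.2f} bits")

# One way to summarize the "distribution of information" in the prior context:
values = [s for _, s in surps[:-1]]
print("mean / max context surprisal:", sum(values) / len(values), max(values))
```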
Related papers
- The Effect of Surprisal on Reading Times in Information Seeking and Repeated Reading [1.2062053320259833]
We use eyetracking data to examine three language processing regimes that are common in daily life.
We find that surprisal theory's prediction of a linear effect of surprisal on processing times extends to these regimes.
In information seeking, such estimates do not improve predictive power for processing times compared to standard surprisals.
arXiv Detail & Related papers (2024-10-10T17:43:34Z)
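The kind of comparison described in the entry above, asking whether surprisal adds predictive power for processing times over a baseline, can be illustrated with a toy least-squares fit. Everything below (predictors, coefficients, data) is synthetic; real analyses typically use mixed-effects regressions over eyetracking corpora.
```python
# Toy illustration with synthetic data: does adding surprisal as a
# predictor reduce the residual error of a linear model of RTs?
import numpy as np

rng = np.random.default_rng(0)
n = 500
length = rng.integers(2, 12, size=n).astype(float)   # baseline predictor (word length)
surprisal = rng.gamma(shape=2.0, scale=2.0, size=n)  # hypothetical surprisals, in bits
rt = 180 + 12 * length + 15 * surprisal + rng.normal(0, 30, size=n)  # RTs in ms

def residual_ss(X, y):
    """Residual sum of squares of an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return ((y - X @ beta) ** 2).sum()

ones = np.ones(n)
base = np.column_stack([ones, length])
full = np.column_stack([ones, length, surprisal])
print("RSS, baseline model:      ", residual_ss(base, rt))
print("RSS, baseline + surprisal:", residual_ss(full, rt))  # lower => added predictive power
```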
- On the Role of Context in Reading Time Prediction [50.87306355705826]
We present a new perspective on how readers integrate context during real-time language comprehension.
Our proposals build on surprisal theory, which posits that the processing effort of a linguistic unit is an affine function of its in-context information content.
arXiv Detail & Related papers (2024-09-12T15:52:22Z)
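For concreteness, the affine linking hypothesis referred to above is standardly written as follows (the textbook formulation of surprisal theory, not this paper's specific model):
```latex
% Surprisal theory's linking hypothesis: the processing effort for a
% unit w_t is affine in its in-context information content h(w_t).
\mathrm{RT}(w_t) = \alpha + \beta\, h(w_t),
\qquad h(w_t) = -\log_2 p(w_t \mid w_{<t})
```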
- Con-ReCall: Detecting Pre-training Data in LLMs via Contrastive Decoding [118.75567341513897]
Existing methods typically analyze target text in isolation or solely with non-member contexts.
We propose Con-ReCall, a novel approach that leverages the asymmetric distributional shifts induced by member and non-member contexts.
arXiv Detail & Related papers (2024-09-05T09:10:38Z)
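A schematic of the contrastive idea in the summary above: score a candidate text by how its likelihood under the model shifts when prefixed with a member versus a non-member context. The scoring rule, prompts, and model below are illustrative assumptions, not Con-ReCall's actual implementation.
```python
# Schematic (not Con-ReCall's actual procedure): membership signal from the
# shift in a target's likelihood under member vs. non-member prefixes.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def mean_target_logprob(prefix: str, target: str) -> float:
    """Mean log-probability of `target` tokens, conditioned on `prefix`."""
    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
    target_ids = tokenizer(target, return_tensors="pt").input_ids
    ids = torch.cat([prefix_ids, target_ids], dim=1)
    with torch.no_grad():
        logits = model(ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)  # row t predicts token t+1
    next_tokens = ids[0, 1:]
    per_token = log_probs[torch.arange(next_tokens.size(0)), next_tokens]
    return per_token[prefix_ids.size(1) - 1:].mean().item()  # target positions only

member_ctx = "A passage known to be in the pre-training data."      # assumption
nonmember_ctx = "A passage known to postdate the training cutoff."  # assumption
target = "The candidate text whose membership we want to test."
shift = mean_target_logprob(member_ctx, target) - mean_target_logprob(nonmember_ctx, target)
print(f"contrastive shift: {shift:+.4f}")  # thresholded to decide membership
```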
- Interpretable Multimodal Out-of-context Detection with Soft Logic Regularization [21.772064939915214]
We propose a logic regularization approach for out-of-context detection called LOGRAN.
The primary objective of LOGRAN is to decompose out-of-context detection down to the phrase level.
We evaluate the performance of LOGRAN on the NewsCLIPpings dataset, showcasing competitive overall results.
arXiv Detail & Related papers (2024-06-07T08:57:25Z)
- On the Identification of Temporally Causal Representation with Instantaneous Dependence [50.14432597910128]
Temporally causal representation learning aims to identify the latent causal process from time series observations.
Most methods require the assumption that the latent causal processes do not have instantaneous relations.
We propose an IDentification framework for instantaneOus Latent dynamics (IDOL).
arXiv Detail & Related papers (2024-05-24T08:08:05Z)
- Causal Message Passing for Experiments with Unknown and General Network Interference [5.294604210205507]
We introduce a new framework to accommodate complex and unknown network interference.
Our framework, termed causal message-passing, is grounded in high-dimensional approximate message passing methodology.
We demonstrate the effectiveness of this approach across five numerical scenarios.
arXiv Detail & Related papers (2023-11-14T17:31:50Z)
- On the Effect of Anticipation on Reading Times [84.27103313675342]
We operationalize anticipation as a word's contextual entropy.
We find substantial evidence for effects of contextual entropy on a word's reading time over and above surprisal.
arXiv Detail & Related papers (2022-11-25T18:58:23Z)
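Contextual entropy, as used in the entry above, is the entropy of the next-word distribution given the preceding context. A minimal sketch follows, again with GPT-2 as an assumed estimator rather than the models used in the paper.
```python
# Minimal sketch: contextual entropy H(W | context), i.e., the entropy of
# the model's next-token distribution. GPT-2 is an assumed estimator.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def contextual_entropy(context: str) -> float:
    """Entropy of the next-token distribution, in bits."""
    ids = tokenizer(context, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]  # logits for the upcoming token
    log_p = torch.log_softmax(logits, dim=-1)
    h_nats = -(log_p.exp() * log_p).sum()
    return (h_nats / torch.log(torch.tensor(2.0))).item()

print(f"{contextual_entropy('The children went outside to'):.2f} bits")
```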
- Did the Cat Drink the Coffee? Challenging Transformers with Generalized Event Knowledge [59.22170796793179]
Transformer Language Models (TLMs) were tested on a benchmark for the dynamic estimation of thematic fit.
Our results show that TLMs can reach performance comparable to that achieved by SDM.
However, additional analysis consistently suggests that TLMs do not capture important aspects of event knowledge.
arXiv Detail & Related papers (2021-07-22T20:52:26Z)