Analyzing Wrap-Up Effects through an Information-Theoretic Lens
- URL: http://arxiv.org/abs/2203.17213v2
- Date: Fri, 5 Jan 2024 16:10:15 GMT
- Title: Analyzing Wrap-Up Effects through an Information-Theoretic Lens
- Authors: Clara Meister and Tiago Pimentel and Thomas Hikaru Clark and Ryan
Cotterell and Roger Levy
- Abstract summary: This work examines the relationship between wrap-up effects and information-theoretic quantities.
We find that the distribution of information in prior contexts is often predictive of sentence- and clause-final RTs.
- Score: 96.02309964375983
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Numerous analyses of reading time (RT) data have been implemented -- all in
an effort to better understand the cognitive processes driving reading
comprehension. However, data measured on words at the end of a sentence -- or
even at the end of a clause -- is often omitted due to the confounding factors
introduced by so-called "wrap-up effects," which manifest as a skewed
distribution of RTs for these words. Consequently, the understanding of the
cognitive processes that might be involved in these wrap-up effects is limited.
In this work, we attempt to learn more about these processes by examining the
relationship between wrap-up effects and information-theoretic quantities, such
as word and context surprisals. We find that the distribution of information in
prior contexts is often predictive of sentence- and clause-final RTs (while not
of sentence-medial RTs). This lends support to several prior hypotheses about
the processes involved in wrap-up effects.
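The information-theoretic quantities the abstract refers to can be sketched concretely. Below is a minimal illustration, assuming a hypothetical toy next-word distribution (not the paper's corpora or language models), of a word's surprisal and the contextual entropy of the distribution it is drawn from:

```python
import math

def surprisal(p_word: float) -> float:
    """Surprisal of a word: -log2 p(word | context), in bits."""
    return -math.log2(p_word)

def contextual_entropy(dist: dict[str, float]) -> float:
    """Shannon entropy of the next-word distribution given a context, in bits."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Hypothetical next-word distribution after some context, e.g. "The cat ...".
dist = {"sat": 0.5, "slept": 0.25, "ran": 0.25}

print(surprisal(dist["sat"]))    # 1.0 bit
print(contextual_entropy(dist))  # 1.5 bits
```

In practice these probabilities would come from a language model conditioned on the prior context; the toy numbers here only show how the two quantities relate.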
Related papers
- Interpretable Multimodal Out-of-context Detection with Soft Logic Regularization [21.772064939915214]
We propose a logic regularization approach for out-of-context detection called LOGRAN.
The primary objective of LOGRAN is to decompose the out-of-context detection at the phrase level.
We evaluate the performance of LOGRAN on the NewsCLIPpings dataset, showcasing competitive overall results.
arXiv Detail & Related papers (2024-06-07T08:57:25Z)
- On the Identification of Temporally Causal Representation with Instantaneous Dependence [50.14432597910128]
Temporally causal representation learning aims to identify the latent causal process from time series observations.
Most methods require the assumption that the latent causal processes do not have instantaneous relations.
We propose an IDentification framework for instantaneOus Latent dynamics.
arXiv Detail & Related papers (2024-05-24T08:08:05Z)
- Causal Message Passing: A Method for Experiments with Unknown and General Network Interference [5.294604210205507]
We introduce a new framework to accommodate complex and unknown network interference.
Our framework, termed causal message-passing, is grounded in high-dimensional approximate message passing methodology.
We demonstrate the effectiveness of this approach across five numerical scenarios.
arXiv Detail & Related papers (2023-11-14T17:31:50Z)
- Prompt-and-Align: Prompt-Based Social Alignment for Few-Shot Fake News Detection [50.07850264495737]
"Prompt-and-Align" (P&A) is a novel prompt-based paradigm for few-shot fake news detection.
We show that P&A sets a new state of the art for few-shot fake news detection by significant margins.
arXiv Detail & Related papers (2023-09-28T13:19:43Z)
- A Mechanistic Interpretation of Arithmetic Reasoning in Language Models using Causal Mediation Analysis [128.0532113800092]
We present a mechanistic interpretation of Transformer-based LMs on arithmetic questions.
This provides insights into how information related to arithmetic is processed by LMs.
arXiv Detail & Related papers (2023-05-24T11:43:47Z)
- Estimating the Adversarial Robustness of Attributions in Text with Transformers [44.745873282080346]
We establish a novel definition of attribution robustness (AR) in text classification, based on Lipschitz continuity.
We then propose our novel TransformerExplanationAttack (TEA), a strong adversary that provides a tight estimate of attribution robustness in text classification.
arXiv Detail & Related papers (2022-12-18T20:18:59Z)
- On the Effect of Anticipation on Reading Times [84.27103313675342]
We operationalize anticipation as a word's contextual entropy.
We find substantial evidence for effects of contextual entropy over surprisal on a word's reading time.
arXiv Detail & Related papers (2022-11-25T18:58:23Z)
- Did the Cat Drink the Coffee? Challenging Transformers with Generalized Event Knowledge [59.22170796793179]
Transformer Language Models (TLMs) were tested on a benchmark for the dynamic estimation of thematic fit.
Our results show that TLMs can reach performances that are comparable to those achieved by SDM.
However, additional analysis consistently suggests that TLMs do not capture important aspects of event knowledge.
arXiv Detail & Related papers (2021-07-22T20:52:26Z)
- ReSCo-CC: Unsupervised Identification of Key Disinformation Sentences [3.7405995078130148]
We propose a novel unsupervised task of identifying sentences containing key disinformation within a document that is known to be untrustworthy.
We design a three-phase statistical NLP solution for the task which starts with embedding sentences within a bespoke feature space designed for the task.
We show that our method is able to identify core disinformation effectively.
arXiv Detail & Related papers (2020-10-21T08:53:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.