Controlling Reading Ease with Gaze-Guided Text Generation
- URL: http://arxiv.org/abs/2601.17781v1
- Date: Sun, 25 Jan 2026 10:42:57 GMT
- Title: Controlling Reading Ease with Gaze-Guided Text Generation
- Authors: Andreas Säuberli, Darja Jepifanova, Diego Frassinelli, Barbara Plank
- Abstract summary: We use a model that predicts human gaze patterns to steer language model outputs towards eliciting certain reading behaviors. We evaluate the approach in an eye-tracking experiment with native and non-native speakers of English.
- Score: 37.556636987304124
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The way our eyes move while reading can tell us about the cognitive effort required to process the text. In the present study, we use this fact to generate texts with controllable reading ease. Our method employs a model that predicts human gaze patterns to steer language model outputs towards eliciting certain reading behaviors. We evaluate the approach in an eye-tracking experiment with native and non-native speakers of English. The results demonstrate that the method is effective at making the generated texts easier or harder to read, measured both in terms of reading times and perceived difficulty of the texts. A statistical analysis reveals that the changes in reading behavior are mostly due to features that affect lexical processing. Possible applications of our approach include text simplification for information accessibility and generation of personalized educational material for language learning.
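The steering described in the abstract can be sketched as a rerank-by-predicted-effort loop: generate candidate continuations, score each with a gaze model that predicts reading cost, and keep the candidate matching the desired reading ease. The sketch below is a minimal illustration of that idea only; the `gaze_cost` surrogate (mean word length, a rough proxy for lexical processing cost) and both example sentences are assumptions standing in for the paper's learned gaze-prediction model.

```python
# Minimal sketch of gaze-guided steering by reranking. The paper steers a
# language model with a learned gaze predictor; here a word-length heuristic
# stands in as the reading-cost estimate, since longer words tend to elicit
# longer fixations (lexical processing effort).

def gaze_cost(text: str) -> float:
    """Surrogate for predicted reading effort: mean word length."""
    words = text.split()
    return sum(len(w) for w in words) / len(words)

def pick_continuation(candidates: list[str], easier: bool = True) -> str:
    """Keep the candidate eliciting the lowest (easier=True) or highest
    (easier=False) predicted reading cost."""
    return min(candidates, key=gaze_cost) if easier else max(candidates, key=gaze_cost)

# Toy candidates: two paraphrases of the same content at different costs.
candidates = [
    "The cat sat on the mat.",
    "The feline positioned itself upon the woven floor covering.",
]
easy = pick_continuation(candidates, easier=True)
hard = pick_continuation(candidates, easier=False)
```

In the paper's setting the scorer would be a trained gaze-pattern model and the candidates would come from a language model's sampler; the selection logic stays the same.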
Related papers
- Readability Formulas, Systems and LLMs are Poor Predictors of Reading Ease [4.868319717279586]
We focus on a fundamental and understudied aspect of readability, real-time reading ease, captured with online reading measures using eye tracking. Applying this evaluation to prominent traditional readability formulas, modern machine learning systems and commercial systems used in education, suggests that they are all poor predictors of reading ease in English.
arXiv Detail & Related papers (2025-02-16T14:51:44Z) - Unknown Word Detection for English as a Second Language (ESL) Learners Using Gaze and Pre-trained Language Models [24.607431783798425]
This paper presents EyeLingo, a transformer-based machine learning method that predicts the probability of unknown words based on text content and eye gaze trajectory in real time with high accuracy. A 20-participant user study revealed that our method can achieve an accuracy of 97.6% and an F1-score of 71.1%.
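The core idea behind this kind of unknown-word detection, fusing a text signal with a gaze signal into a single score, can be illustrated in a few lines. The real system is a transformer over gaze trajectories and text content; the linear fusion, the normalization constants, and the threshold below are all illustrative assumptions, not EyeLingo's actual model.

```python
# Hedged sketch of text-plus-gaze fusion for unknown-word detection:
# rarer words (higher frequency rank) and longer fixations both raise
# the probability that the reader does not know the word.

def unknown_word_score(word_freq_rank: int, fixation_ms: float) -> float:
    """Higher score = more likely unknown. Both inputs are normalized
    to [0, 1] before an (assumed) equal-weight linear fusion."""
    rarity = min(word_freq_rank / 50_000, 1.0)  # frequency rank, capped
    dwell = min(fixation_ms / 1000.0, 1.0)      # total fixation time, capped
    return 0.5 * rarity + 0.5 * dwell

def is_unknown(word_freq_rank: int, fixation_ms: float, threshold: float = 0.5) -> bool:
    """Flag a word as unknown when the fused score crosses the threshold."""
    return unknown_word_score(word_freq_rank, fixation_ms) > threshold
```

A common, frequently fixated word scores low on both signals and is not flagged, while a rare word that the reader dwells on is.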
arXiv Detail & Related papers (2025-02-14T18:57:04Z) - Pixel Sentence Representation Learning [67.4775296225521]
In this work, we conceptualize the learning of sentence-level textual semantics as a visual representation learning process.
We employ visually-grounded text perturbation methods like typos and word order shuffling, resonating with human cognitive patterns, and enabling perturbation to be perceived as continuous.
Our approach is further bolstered by large-scale unsupervised topical alignment training and natural language inference supervision.
arXiv Detail & Related papers (2024-02-13T02:46:45Z) - Generating Summaries with Controllable Readability Levels [67.34087272813821]
Several factors affect the readability level, such as the complexity of the text, its subject matter, and the reader's background knowledge.
Current text generation approaches lack refined control, resulting in texts that are not customized to readers' proficiency levels.
We develop three text generation techniques for controlling readability: instruction-based readability control, reinforcement learning to minimize the gap between requested and observed readability, and a decoding approach that uses look-ahead to estimate the readability of upcoming decoding steps.
arXiv Detail & Related papers (2023-10-16T17:46:26Z) - LC-Score: Reference-less estimation of Text Comprehension Difficulty [0.0]
We present LC-Score, a simple approach for training a reference-free text comprehension metric for French texts.
Our objective is to quantitatively capture the extent to which a text conforms to the Langage Clair (LC, Clear Language) guidelines.
We explore two approaches: (i) using linguistically motivated indicators used to train statistical models, and (ii) neural learning directly from text leveraging pre-trained language models.
arXiv Detail & Related papers (2023-10-04T11:49:37Z) - Reading and Writing: Discriminative and Generative Modeling for Self-Supervised Text Recognition [101.60244147302197]
We introduce contrastive learning and masked image modeling to learn discrimination and generation of text images.
Our method outperforms previous self-supervised text recognition methods by 10.2%-20.2% on irregular scene text recognition datasets.
Our proposed text recognizer exceeds previous state-of-the-art text recognition methods by an average of 5.3% on 11 benchmarks, with a similar model size.
arXiv Detail & Related papers (2022-07-01T03:50:26Z) - Bridging between Cognitive Processing Signals and Linguistic Features via a Unified Attentional Network [25.235060468310696]
We propose a data-driven method to investigate the relationship between cognitive processing signals and linguistic features.
We present a unified attentional framework that is composed of embedding, attention, encoding and predicting layers.
The proposed framework can be used to detect a wide range of linguistic features with a single cognitive dataset.
arXiv Detail & Related papers (2021-12-16T12:25:11Z) - Predicting Text Readability from Scrolling Interactions [6.530293714772306]
This paper investigates how scrolling behaviour relates to the readability of a text.
We make our dataset publicly available and show that there are statistically significant differences in the way readers interact with text depending on the text level.
arXiv Detail & Related papers (2021-05-13T15:27:00Z) - Improving Disentangled Text Representation Learning with Information-Theoretic Guidance [99.68851329919858]
The discrete nature of natural language makes disentangling textual representations more challenging.
Inspired by information theory, we propose a novel method that effectively yields disentangled representations of text.
Experiments on both conditional text generation and text-style transfer demonstrate the high quality of our disentangled representation.
arXiv Detail & Related papers (2020-06-01T03:36:01Z) - Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer [64.22926988297685]
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP).
In this paper, we explore the landscape of introducing transfer learning techniques for NLP by a unified framework that converts all text-based language problems into a text-to-text format.
arXiv Detail & Related papers (2019-10-23T17:37:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content on this site (including all information) and is not responsible for any consequences of its use.