Fine-Grained Prediction of Reading Comprehension from Eye Movements
- URL: http://arxiv.org/abs/2410.04484v1
- Date: Sun, 6 Oct 2024 13:55:06 GMT
- Title: Fine-Grained Prediction of Reading Comprehension from Eye Movements
- Authors: Omer Shubi, Yoav Meiri, Cfir Avraham Hadar, Yevgeni Berzak
- Abstract summary: We focus on a fine-grained task of predicting reading comprehension from eye movements at the level of a single question over a passage.
We tackle this task using three new multimodal language models, as well as a battery of prior models from the literature.
The evaluations suggest that although the task is highly challenging, eye movements contain useful signals for fine-grained prediction of reading comprehension.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Can human reading comprehension be assessed from eye movements in reading? In this work, we address this longstanding question using large-scale eyetracking data over textual materials that are geared towards behavioral analyses of reading comprehension. We focus on a fine-grained and largely unaddressed task of predicting reading comprehension from eye movements at the level of a single question over a passage. We tackle this task using three new multimodal language models, as well as a battery of prior models from the literature. We evaluate the models' ability to generalize to new textual items, new participants, and the combination of both, in two different reading regimes, ordinary reading and information seeking. The evaluations suggest that although the task is highly challenging, eye movements contain useful signals for fine-grained prediction of reading comprehension. Code and data will be made publicly available.
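The abstract describes predicting question-level comprehension from eye movements. The paper itself uses multimodal language models, but as an illustration only, here is a minimal pure-Python sketch of a feature-based baseline of the kind the "battery of prior models" might include: aggregate word-level fixations into trial-level features, then score them with a logistic model. The fixation format, feature set, and weights are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of a feature-based baseline for question-level
# comprehension prediction; NOT the paper's actual models or features.
import math

def extract_features(fixations):
    """Aggregate word-level fixations into trial-level features.

    `fixations` is a list of (word_index, duration_ms) tuples in
    chronological order (a hypothetical input format).
    """
    total_dur = sum(d for _, d in fixations)
    n_fix = len(fixations)
    # A regression: a fixation on an earlier word than the previous one.
    regressions = sum(
        1 for (w1, _), (w2, _) in zip(fixations, fixations[1:]) if w2 < w1
    )
    return [total_dur / 1000.0, n_fix, regressions]

def predict(weights, bias, features):
    """Logistic score for 'participant answers this question correctly'."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))
```

In practice the weights would be fit on labeled trials, and generalization would be evaluated across new texts, new participants, and both, as the abstract describes.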
Related papers
- Déjà Vu? Decoding Repeated Reading from Eye Movements [1.1652979442763178]
We ask whether it is possible to automatically determine whether the reader has previously encountered a text based on their eye movement patterns.
We introduce two variants of this task and address them with considerable success using both feature-based and neural models.
We present an analysis of model performance which on the one hand yields insights on the information used by the models, and on the other hand leverages predictive modeling as an analytic tool for better characterization of the role of memory in repeated reading.
arXiv Detail & Related papers (2025-02-16T09:59:29Z)
- Decoding Reading Goals from Eye Movements [1.3176926720381554]
We examine whether it is possible to distinguish between two types of common reading goals: information seeking and ordinary reading for comprehension.
Using large-scale eye tracking data, we address this task with a wide range of models that cover different architectural and data representation strategies.
We find that accurate predictions can be made in real time, long before the participant has finished reading the text.
arXiv Detail & Related papers (2024-10-28T06:40:03Z)
- Attention-aware semantic relevance predicting Chinese sentence reading [6.294658916880712]
This study proposes an "attention-aware" approach for computing contextual semantic relevance.
The attention-aware metrics of semantic relevance can more accurately predict fixation durations in Chinese reading tasks.
Our approach underscores the potential of these metrics to advance our comprehension of how humans understand and process language.
arXiv Detail & Related papers (2024-03-27T13:22:38Z)
- Towards Open Vocabulary Learning: A Survey [146.90188069113213]
Deep neural networks have made impressive advancements in various core tasks like segmentation, tracking, and detection.
Recently, open vocabulary settings were proposed due to the rapid progress of vision language pre-training.
This paper provides a thorough review of open vocabulary learning, summarizing and analyzing recent developments in the field.
arXiv Detail & Related papers (2023-06-28T02:33:06Z)
- Summarization with Graphical Elements [55.5913491389047]
We propose a new task: summarization with graphical elements.
We collect a high quality human labeled dataset to support research into the task.
arXiv Detail & Related papers (2022-04-15T17:16:41Z)
- Leveraging Visual Knowledge in Language Tasks: An Empirical Study on Intermediate Pre-training for Cross-modal Knowledge Transfer [61.34424171458634]
We study whether integrating visual knowledge into a language model can fill the gap.
Our experiments show that visual knowledge transfer can improve performance in both low-resource and fully supervised settings.
arXiv Detail & Related papers (2022-03-14T22:02:40Z)
- Human Attention during Goal-directed Reading Comprehension Relies on Task Optimization [8.337095123148186]
Goal-directed reading, i.e., reading a passage to answer a question in mind, is a common real-world task that strongly engages attention.
We show that the reading time on each word is predicted by the attention weights in transformer-based deep neural networks (DNNs) optimized to perform the same reading task.
arXiv Detail & Related papers (2021-07-13T01:07:22Z)
- Interactive Fiction Game Playing as Multi-Paragraph Reading Comprehension with Reinforcement Learning [94.50608198582636]
Interactive Fiction (IF) games with real human-written natural language texts provide a new natural evaluation for language understanding techniques.
We take a novel perspective of IF game solving and re-formulate it as Multi-Passage Reading (MPRC) tasks.
arXiv Detail & Related papers (2020-10-05T23:09:20Z)
- Bridging Information-Seeking Human Gaze and Machine Reading Comprehension [23.153841344989143]
We analyze how human gaze during reading comprehension is conditioned on the given reading comprehension question.
We propose making automated reading comprehension more human-like by mimicking human information-seeking reading behavior.
arXiv Detail & Related papers (2020-09-30T16:34:27Z)
- Improving Machine Reading Comprehension with Contextualized Commonsense Knowledge [62.46091695615262]
We aim to extract commonsense knowledge to improve machine reading comprehension.
We propose to represent relations implicitly by situating structured knowledge in a context.
We employ a teacher-student paradigm to inject multiple types of contextualized knowledge into a student machine reader.
arXiv Detail & Related papers (2020-09-12T17:20:01Z)
- ORB: An Open Reading Benchmark for Comprehensive Evaluation of Machine Reading Comprehension [53.037401638264235]
We present an evaluation server, ORB, that reports performance on seven diverse reading comprehension datasets.
The evaluation server places no restrictions on how models are trained, so it is a suitable test bed for exploring training paradigms and representation learning.
arXiv Detail & Related papers (2019-12-29T07:27:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.