Bridging Information-Seeking Human Gaze and Machine Reading
Comprehension
- URL: http://arxiv.org/abs/2009.14780v2
- Date: Thu, 15 Oct 2020 16:08:02 GMT
- Title: Bridging Information-Seeking Human Gaze and Machine Reading
Comprehension
- Authors: Jonathan Malmaud, Roger Levy, Yevgeni Berzak
- Abstract summary: We analyze how human gaze during reading comprehension is conditioned on the given reading comprehension question.
We propose making automated reading comprehension more human-like by mimicking human information-seeking reading behavior.
- Score: 23.153841344989143
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we analyze how human gaze during reading comprehension is
conditioned on the given reading comprehension question, and whether this
signal can be beneficial for machine reading comprehension. To this end, we
collect a new eye-tracking dataset with a large number of participants engaging
in a multiple choice reading comprehension task. Our analysis of this data
reveals increased fixation times over parts of the text that are most relevant
for answering the question. Motivated by this finding, we propose making
automated reading comprehension more human-like by mimicking human
information-seeking reading behavior during reading comprehension. We
demonstrate that this approach leads to performance gains on multiple choice
question answering in English for a state-of-the-art reading comprehension
model.
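The abstract does not specify how gaze is injected into the model. Purely as an illustrative sketch (not the authors' actual method, and with all function names hypothetical), one common way to use such a signal is to normalize per-token fixation durations into a distribution and add an auxiliary loss pulling the model's attention toward it:

```python
import math

def gaze_attention_target(fixation_ms):
    """Turn per-token fixation durations (milliseconds) into a
    probability distribution usable as a soft attention target."""
    total = sum(fixation_ms)
    if total == 0:
        # No gaze signal recorded: fall back to a uniform target.
        return [1.0 / len(fixation_ms)] * len(fixation_ms)
    return [d / total for d in fixation_ms]

def gaze_auxiliary_loss(target, model_attention, eps=1e-9):
    """KL(target || model_attention): an auxiliary term that nudges
    the model's attention weights toward the human gaze distribution."""
    return sum(t * math.log((t + eps) / (p + eps))
               for t, p in zip(target, model_attention))

# Tokens most relevant to the question attract longer fixations.
fixations = [120, 480, 60, 340]            # ms per token
target = gaze_attention_target(fixations)  # [0.12, 0.48, 0.06, 0.34]
uniform = [0.25] * 4                       # attention that ignores gaze
loss = gaze_auxiliary_loss(target, uniform)  # > 0: attention disagrees with gaze
```

In a real setup this auxiliary term would be added, with a weighting coefficient, to the question-answering loss during training; the sketch above only shows the shape of the gaze supervision signal.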
Related papers
- Fine-Grained Prediction of Reading Comprehension from Eye Movements [1.2062053320259833]
We focus on a fine-grained task of predicting reading comprehension from eye movements at the level of a single question over a passage.
We tackle this task using three new multimodal language models, as well as a battery of prior models from the literature.
The evaluations suggest that although the task is highly challenging, eye movements contain useful signals for fine-grained prediction of reading comprehension.
arXiv Detail & Related papers (2024-10-06T13:55:06Z)
- How to Engage Your Readers? Generating Guiding Questions to Promote Active Reading [60.19226384241482]
We introduce GuidingQ, a dataset of 10K in-text questions from textbooks and scientific articles.
We explore various approaches to generate such questions using language models.
We conduct a human study to understand the implication of such questions on reading comprehension.
arXiv Detail & Related papers (2024-07-19T13:42:56Z)
- Long-Span Question-Answering: Automatic Question Generation and QA-System Ranking via Side-by-Side Evaluation [65.16137964758612]
We explore the use of long-context capabilities in large language models to create synthetic reading comprehension data from entire books.
Our objective is to test the capabilities of LLMs to analyze, understand, and reason over problems that require a detailed comprehension of long spans of text.
arXiv Detail & Related papers (2024-05-31T20:15:10Z)
- Analyzing Multiple-Choice Reading and Listening Comprehension Tests [0.0]
This work investigates how much of a contextual passage needs to be read to work out the correct answer in multiple-choice reading comprehension tests based on conversation transcriptions, and in listening comprehension tests.
We find that automated reading comprehension systems can perform significantly better than random with partial or even no access to the context passage.
arXiv Detail & Related papers (2023-07-03T14:55:02Z)
- Human Attention during Goal-directed Reading Comprehension Relies on Task Optimization [8.337095123148186]
Goal-directed reading, i.e., reading a passage to answer a question in mind, is a common real-world task that strongly engages attention.
We show that the reading time on each word is predicted by the attention weights in transformer-based deep neural networks (DNNs) optimized to perform the same reading task.
arXiv Detail & Related papers (2021-07-13T01:07:22Z)
- Improving Cross-Lingual Reading Comprehension with Self-Training [62.73937175625953]
Current state-of-the-art models even surpass human performance on several benchmarks.
Previous works have revealed the abilities of pre-trained multilingual models for zero-shot cross-lingual reading comprehension.
This paper further utilizes unlabeled data to improve performance.
arXiv Detail & Related papers (2021-05-08T08:04:30Z)
- Narrative Incoherence Detection [76.43894977558811]
We propose the task of narrative incoherence detection as a new arena for inter-sentential semantic understanding.
Given a multi-sentence narrative, the task is to decide whether there are any semantic discrepancies in the narrative flow.
arXiv Detail & Related papers (2020-12-21T07:18:08Z)
- Relation/Entity-Centric Reading Comprehension [1.0965065178451106]
We study reading comprehension with a focus on understanding entities and their relationships.
We focus on entities and relations because they are typically used to represent the semantics of natural language.
arXiv Detail & Related papers (2020-08-27T06:42:18Z)
- Retrospective Reader for Machine Reading Comprehension [90.6069071495214]
Machine reading comprehension (MRC) is an AI challenge that requires a machine to determine the correct answers to questions based on a given passage.
When unanswerable questions are involved in the MRC task, an essential verification module, called a verifier, is required in addition to the encoder.
This paper devotes itself to exploring better verifier design for the MRC task with unanswerable questions.
arXiv Detail & Related papers (2020-01-27T11:14:34Z)
- ORB: An Open Reading Benchmark for Comprehensive Evaluation of Machine Reading Comprehension [53.037401638264235]
We present an evaluation server, ORB, that reports performance on seven diverse reading comprehension datasets.
The evaluation server places no restrictions on how models are trained, so it is a suitable test bed for exploring training paradigms and representation learning.
arXiv Detail & Related papers (2019-12-29T07:27:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all generated summaries) and is not responsible for any consequences of its use.