Human Attention during Goal-directed Reading Comprehension Relies on Task Optimization
- URL: http://arxiv.org/abs/2107.05799v2
- Date: Sun, 23 Apr 2023 01:50:53 GMT
- Title: Human Attention during Goal-directed Reading Comprehension Relies on Task Optimization
- Authors: Jiajie Zou, Yuran Zhang, Jialu Li, Xing Tian, and Nai Ding
- Abstract summary: Goal-directed reading, i.e., reading a passage to answer a question in mind, is a common real-world task that strongly engages attention.
We show that the reading time on each word is predicted by the attention weights in transformer-based deep neural networks (DNNs) optimized to perform the same reading task.
- Score: 8.337095123148186
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The computational principles underlying attention allocation in complex
goal-directed tasks remain elusive. Goal-directed reading, i.e., reading a
passage to answer a question in mind, is a common real-world task that strongly
engages attention. Here, we investigate what computational models can explain
attention distribution in this complex task. We show that the reading time on
each word is predicted by the attention weights in transformer-based deep
neural networks (DNNs) optimized to perform the same reading task. Eye-tracking
further reveals that readers separately attend to basic text features and
question-relevant information during first-pass reading and rereading,
respectively. Similarly, text features and question relevance separately
modulate attention weights in shallow and deep DNN layers. Furthermore, when
readers scan a passage without a question in mind, their reading time is
predicted by DNNs optimized for a word prediction task. Therefore, attention
during real-world reading can be interpreted as the consequence of task
optimization.
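As a rough illustration of the core analysis described above, the sketch below extracts per-token attention weights from a transformer question-answering model and regresses word reading times on them. The specific model name, layer choice, attention pooling, and the placeholder reading times are illustrative assumptions, not the authors' exact pipeline.

# Minimal sketch (not the authors' pipeline): relate transformer attention
# weights to per-word reading times. Model choice, layer, attention pooling,
# and the placeholder reading times are assumptions for illustration only.
import numpy as np
import torch
from sklearn.linear_model import LinearRegression
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_name = "distilbert-base-uncased-distilled-squad"  # assumed QA-optimized model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name, output_attentions=True)

question = "What does goal-directed reading strongly engage?"
passage = ("Goal-directed reading, i.e., reading a passage to answer a question "
           "in mind, is a common real-world task that strongly engages attention.")

inputs = tokenizer(question, passage, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions holds one (batch, heads, seq, seq) tensor per layer.
# Average over heads and query positions to get one weight per token;
# deeper layers are the ones the abstract links to question relevance.
layer = -1
token_attention = outputs.attentions[layer].mean(dim=1).mean(dim=1).squeeze(0).numpy()

# Placeholder eye-tracking data: in a real analysis these would be per-word
# reading times aligned to the tokenization (hypothetical values here).
reading_times = np.random.default_rng(0).random(token_attention.shape[0])

reg = LinearRegression().fit(token_attention.reshape(-1, 1), reading_times)
print("R^2 of attention predicting reading time:",
      reg.score(token_attention.reshape(-1, 1), reading_times))

In the paper's setting, the regression target would be measured first-pass or rereading times per word, and attention from shallow versus deep layers would be compared against text features and question relevance, respectively.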
Related papers
- Fine-Grained Prediction of Reading Comprehension from Eye Movements [1.2062053320259833]
We focus on a fine-grained task of predicting reading comprehension from eye movements at the level of a single question over a passage.
We tackle this task using three new multimodal language models, as well as a battery of prior models from the literature.
The evaluations suggest that although the task is highly challenging, eye movements contain useful signals for fine-grained prediction of reading comprehension.
arXiv Detail & Related papers (2024-10-06T13:55:06Z)
- Generalization v.s. Memorization: Tracing Language Models' Capabilities Back to Pretraining Data [76.90128359866462]
Large language models (LLMs) have sparked debate over whether they genuinely generalize to unseen tasks or rely on memorizing vast amounts of pretraining data.
We introduce an extended concept of memorization, distributional memorization, which measures the correlation between the LLM output probabilities and the pretraining data frequency.
This study demonstrates that memorization plays a larger role in simpler, knowledge-intensive tasks, while generalization is the key for harder, reasoning-based tasks.
arXiv Detail & Related papers (2024-07-20T21:24:40Z)
- Previously on the Stories: Recap Snippet Identification for Story Reading [51.641565531840186]
We propose the first benchmark for this task, called Recap Snippet Identification, together with a hand-crafted evaluation dataset.
Our experiments show that the proposed task is challenging for PLMs, LLMs, and the proposed methods, as it requires a deep understanding of the plot correlation between snippets.
arXiv Detail & Related papers (2024-02-11T18:27:14Z)
- Understanding Attention for Vision-and-Language Tasks [4.752823994295959]
We conduct a comprehensive analysis of the role of attention alignment by examining how attention scores are calculated.
We also analyse the conditions under which the attention score calculation mechanism is more (or less) interpretable.
Our analysis is the first of its kind and provides useful insights into the importance of each attention alignment score calculation when applied during the training phase of VL tasks.
arXiv Detail & Related papers (2022-08-17T06:45:07Z)
- LadRa-Net: Locally-Aware Dynamic Re-read Attention Net for Sentence Semantic Matching [66.65398852962177]
We develop a novel Dynamic Re-read Network (DRr-Net) for sentence semantic matching.
We extend DRr-Net to the Locally-Aware Dynamic Re-read Attention Net (LadRa-Net).
Experiments on two popular sentence semantic matching tasks demonstrate that DRr-Net can significantly improve the performance of sentence semantic matching.
arXiv Detail & Related papers (2021-08-06T02:07:04Z)
- Variational Structured Attention Networks for Deep Visual Representation Learning [49.80498066480928]
We propose a unified deep framework to jointly learn both spatial attention maps and channel attention in a principled manner.
Specifically, we integrate the estimation and the interaction of the attentions within a probabilistic representation learning framework.
We implement the inference rules within the neural network, thus allowing for end-to-end learning of the probabilistic and the CNN front-end parameters.
arXiv Detail & Related papers (2021-03-05T07:37:24Z)
- Narrative Incoherence Detection [76.43894977558811]
We propose the task of narrative incoherence detection as a new arena for inter-sentential semantic understanding.
Given a multi-sentence narrative, the task is to decide whether there are any semantic discrepancies in the narrative flow.
arXiv Detail & Related papers (2020-12-21T07:18:08Z)
- Bridging Information-Seeking Human Gaze and Machine Reading Comprehension [23.153841344989143]
We analyze how human gaze during reading comprehension is conditioned on the given reading comprehension question.
We propose making automated reading comprehension more human-like by mimicking human information-seeking reading behavior.
arXiv Detail & Related papers (2020-09-30T16:34:27Z)
- Attention based Writer Independent Handwriting Verification [0.0]
We implement and integrate cross-attention and soft-attention mechanisms to capture salient points in the feature space of 2D inputs.
We generate meaningful explanations for the provided decision by extracting attention maps from multiple levels of the network.
arXiv Detail & Related papers (2020-09-07T16:28:16Z)
- Salience Estimation with Multi-Attention Learning for Abstractive Text Summarization [86.45110800123216]
In the task of text summarization, salience estimation for words, phrases or sentences is a critical component.
We propose a Multi-Attention Learning framework which contains two new attention learning components for salience estimation.
arXiv Detail & Related papers (2020-04-07T02:38:56Z)