SEAM: An Integrated Activation-Coupled Model of Sentence Processing and
Eye Movements in Reading
- URL: http://arxiv.org/abs/2303.05221v4
- Date: Wed, 20 Dec 2023 08:47:39 GMT
- Title: SEAM: An Integrated Activation-Coupled Model of Sentence Processing and
Eye Movements in Reading
- Authors: Maximilian M. Rabe, Dario Paape, Daniela Mertzen, Shravan Vasishth,
Ralf Engbert
- Abstract summary: We present a model that combines eye-movement control and sentence processing.
This is the first-ever integration of a complete process model of eye-movement control with linguistic dependency completion processes in sentence comprehension.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Models of eye-movement control during reading, developed largely within
psychology, usually focus on visual, attentional, lexical, and motor processes
but neglect post-lexical language processing; by contrast, models of sentence
comprehension processes, developed largely within psycholinguistics, generally
focus only on post-lexical language processes. We present a model that combines
these two research threads, by integrating eye-movement control and sentence
processing. Developing such an integrated model is extremely challenging and
computationally demanding, but such an integration is an important step toward
complete mathematical models of natural language comprehension in reading. We
combine the SWIFT model of eye-movement control (Seelig et al., 2020,
doi:10.1016/j.jmp.2019.102313) with key components of the Lewis and Vasishth
sentence processing model (Lewis & Vasishth, 2005,
doi:10.1207/s15516709cog0000_25). This integration becomes possible, for the
first time, due in part to recent advances in parameter identification for
dynamical models, which allow us to investigate profile log-likelihoods for
individual model parameters. We present a fully implemented
proof-of-concept model demonstrating how such an integrated model can be
achieved; our approach includes Bayesian model inference with Markov Chain
Monte Carlo (MCMC) sampling as a key computational tool. The integrated
Sentence-Processing and Eye-Movement Activation-Coupled Model (SEAM) can
successfully reproduce eye movement patterns that arise due to similarity-based
interference in reading. To our knowledge, this is the first-ever integration
of a complete process model of eye-movement control with linguistic dependency
completion processes in sentence comprehension. In future work, this
proof-of-concept model will need to be evaluated using a comprehensive set of
benchmark data.
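
The abstract names two computational ingredients without going into detail: profile log-likelihoods for individual model parameters and Bayesian inference via MCMC sampling. The following Python sketch illustrates both on a deliberately simplified stand-in model, in which a single coupling parameter shifts log fixation durations under similarity-based interference; the variable names, the Gaussian likelihood, the flat priors, and all numeric values are assumptions made for illustration and are not taken from SEAM or SWIFT.

```python
# Illustrative sketch only: a toy stand-in for the kind of analysis the abstract
# describes (an interference effect on fixation durations, a profile
# log-likelihood scan, and Metropolis-Hastings MCMC). All names, priors, and the
# toy likelihood are assumptions for illustration, not SEAM's actual equations.
import numpy as np

rng = np.random.default_rng(1)

# --- Toy data: log fixation durations, shifted in the high-interference condition ---
n = 400
interference = rng.integers(0, 2, size=n)        # 0 = low, 1 = high interference
true_mu, true_beta, true_sigma = 5.4, 0.08, 0.25  # hypothetical values on the log scale
log_dur = rng.normal(true_mu + true_beta * interference, true_sigma)

def log_lik(mu, beta, sigma):
    """Gaussian log-likelihood of log durations under the toy coupling model."""
    if sigma <= 0:
        return -np.inf
    resid = log_dur - (mu + beta * interference)
    return (-0.5 * np.sum((resid / sigma) ** 2)
            - n * np.log(sigma) - 0.5 * n * np.log(2 * np.pi))

# --- Profile log-likelihood for the coupling parameter beta ---
# For each fixed beta, the nuisance parameters mu and sigma have closed-form MLEs.
beta_grid = np.linspace(-0.1, 0.3, 81)
profile = []
for b in beta_grid:
    resid = log_dur - b * interference
    mu_hat = resid.mean()
    sigma_hat = resid.std()          # MLE of sigma (divides by n)
    profile.append(log_lik(mu_hat, b, sigma_hat))
profile = np.array(profile)
print("beta at profile-likelihood peak:", beta_grid[profile.argmax()])

# --- Metropolis-Hastings MCMC over (mu, beta, sigma) with flat priors ---
theta = np.array([5.0, 0.0, 0.3])    # initial values for (mu, beta, sigma)
step = np.array([0.02, 0.02, 0.02])  # proposal standard deviations
samples = []
cur_ll = log_lik(*theta)
for _ in range(5000):
    prop = theta + rng.normal(0.0, step)
    prop_ll = log_lik(*prop)
    if np.log(rng.uniform()) < prop_ll - cur_ll:   # accept/reject step
        theta, cur_ll = prop, prop_ll
    samples.append(theta.copy())
samples = np.array(samples[1000:])   # drop burn-in
print("posterior mean of beta:", samples[:, 1].mean())
```

In the full model, the likelihood is defined by the complete generative process of eye-movement control coupled to dependency completion rather than by a closed-form expression, which is what makes parameter identification computationally demanding and MCMC sampling a key tool.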