Modeling Human Sentence Processing with Left-Corner Recurrent Neural
Network Grammars
- URL: http://arxiv.org/abs/2109.04939v3
- Date: Thu, 5 Oct 2023 10:33:42 GMT
- Title: Modeling Human Sentence Processing with Left-Corner Recurrent Neural
Network Grammars
- Authors: Ryo Yoshida, Hiroshi Noji, Yohei Oseki
- Abstract summary: In computational linguistics, it has been shown that hierarchical structures make language models (LMs) more human-like.
In this paper, we investigated whether hierarchical structures make LMs more human-like, and if so, which parsing strategy is most cognitively plausible.
- Score: 10.232142358698605
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In computational linguistics, it has been shown that hierarchical structures
make language models (LMs) more human-like. However, the previous literature
has been agnostic about the parsing strategies of hierarchical models. In this
paper, we investigated whether hierarchical structures make LMs more
human-like, and if so, which parsing strategy is most cognitively plausible. In
order to address this question, we evaluated three LMs against human reading
times in Japanese with head-final left-branching structures: Long Short-Term
Memory (LSTM) as a sequential model and Recurrent Neural Network Grammars
(RNNGs) with top-down and left-corner parsing strategies as hierarchical
models. Our computational modeling demonstrated that left-corner RNNGs
outperformed top-down RNNGs and LSTM, suggesting that hierarchical and
left-corner architectures are more cognitively plausible than top-down or
sequential architectures. In addition, we also discuss the relationships
between cognitive plausibility and (i) perplexity, (ii) parsing, and (iii)
beam size.
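Concretely, evaluations of this kind typically compute per-word surprisal from each LM and test how well it predicts reading times. The sketch below illustrates that paradigm only; `lm_word_probability`, the toy probabilities, and the plain least-squares fit are illustrative stand-ins rather than the paper's code, and real analyses add controls such as word length and frequency (often in mixed-effects models).

```python
import numpy as np

def lm_word_probability(prefix, word):
    # Hypothetical stand-in for P(word | prefix) from a trained LM
    # (LSTM, top-down RNNG, or left-corner RNNG); here a toy score
    # that merely varies with word and prefix length.
    return 1.0 / (len(word) + len(prefix) + 1)

def surprisal(sentence):
    # Surprisal of w_i given its prefix: -log2 P(w_i | w_1..w_{i-1}).
    return [-np.log2(lm_word_probability(sentence[:i], w))
            for i, w in enumerate(sentence)]

def fit_reading_times(surprisals, reading_times):
    # Ordinary least squares: reading_time ~ intercept + surprisal.
    # A positive slope means higher surprisal predicts longer reading
    # times, the usual sign of predictive power for human processing.
    X = np.column_stack([np.ones(len(surprisals)), surprisals])
    coef, *_ = np.linalg.lstsq(X, np.asarray(reading_times, float), rcond=None)
    return coef  # [intercept, slope]

words = ["taro", "ga", "ringo", "o", "tabeta"]   # toy head-final sentence
times = [310.0, 250.0, 330.0, 240.0, 400.0]      # invented reading times (ms)
print(fit_reading_times(surprisal(words), times))
```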
Related papers
- Bearing Syntactic Fruit with Stack-Augmented Neural Networks [59.49467149799849]
We show that stack-augmented neural networks are more accurate models of human language acquisition than standard architectures.
We also propose a modification to the stack RNN architecture that improves hierarchical generalization.
arXiv Detail & Related papers (2025-11-05T15:30:58Z)
- Do Large Language Models Think Like the Brain? Sentence-Level Evidence from fMRI and Hierarchical Embeddings [28.210559128941593]
This study investigates how hierarchical representations in large language models align with the dynamic neural responses during human sentence comprehension.
Results show that improvements in model performance drive the evolution of representational architectures toward brain-like hierarchies.
arXiv Detail & Related papers (2025-05-28T16:40:06Z)
- Scaling Laws and Representation Learning in Simple Hierarchical Languages: Transformers vs. Convolutional Architectures [49.19753720526998]
We derive theoretical scaling laws for neural network performance on synthetic datasets.
We validate that convolutional networks, whose structure aligns with that of the generative process through locality and weight sharing, enjoy a faster scaling of performance.
This finding clarifies the architectural biases underlying neural scaling laws and highlights how representation learning is shaped by the interaction between model architecture and the statistical properties of data.
arXiv Detail & Related papers (2025-05-11T17:44:14Z)
- Training Neural Networks as Recognizers of Formal Languages [87.06906286950438]
Formal language theory pertains specifically to recognizers.
It is common, however, to instead use proxy tasks that are similar in only an informal sense.
We correct this mismatch by training and evaluating neural networks directly as binary classifiers of strings.
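As a rough illustration of that setup, the sketch below trains a small RNN as a yes/no recognizer of strings. The language (even parity of 1s), architecture, and hyperparameters are illustrative choices, not the paper's.

```python
import random
import torch
import torch.nn as nn

def sample_string(max_len=12):
    s = [random.randint(0, 1) for _ in range(random.randint(1, max_len))]
    return s, float(s.count(1) % 2 == 0)   # label: is s in the even-parity language?

class Recognizer(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.rnn = nn.RNN(input_size=2, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)     # a single accept/reject logit

    def forward(self, x):
        _, h = self.rnn(x)                  # final hidden state summarizes the string
        return self.out(h[-1]).squeeze(-1)

model = Recognizer()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()
for step in range(500):
    s, label = sample_string()
    x = torch.eye(2)[s].unsqueeze(0)        # one-hot encode, batch of one
    loss = loss_fn(model(x), torch.tensor([label]))
    opt.zero_grad(); loss.backward(); opt.step()
```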
arXiv Detail & Related papers (2024-11-11T16:33:25Z)
- Analysis of Argument Structure Constructions in a Deep Recurrent Language Model [0.0]
We explore the representation and processing of Argument Structure Constructions (ASCs) in a recurrent neural language model.
Our results show that sentence representations form distinct clusters corresponding to the four ASCs across all hidden layers.
This indicates that even a relatively simple, brain-constrained recurrent neural network can effectively differentiate between various construction types.
arXiv Detail & Related papers (2024-08-06T09:27:41Z)
- A Truly Joint Neural Architecture for Segmentation and Parsing [15.866519123942457]
Parsing performance for Morphologically Rich Languages (MRLs) is lower than for other languages.
Due to high morphological complexity and ambiguity of the space-delimited input tokens, the linguistic units that act as nodes in the tree are not known in advance.
We introduce a joint neural architecture where a lattice-based representation preserving all morphological ambiguity of the input is provided to an arc-factored model, which then solves the morphological and syntactic parsing tasks at once.
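To make the lattice idea concrete, here is a toy illustration (not the paper's code): every morphological segmentation of an ambiguous surface token is kept as a path in a small graph, and disambiguation is deferred to the downstream parser. The Hebrew-style token and its two analyses are invented for illustration.

```python
# Edges are (start_state, end_state, segment, tag); each start-to-end
# path is one candidate segmentation of the ambiguous token "bcl".
lattice = {
    "edges": [
        (0, 2, "bcl", "NOUN"),   # one word: "onion"
        (0, 1, "b", "PREP"),     # or preposition "in" ...
        (1, 2, "cl", "NOUN"),    # ... plus noun "shadow"
    ],
    "start": 0,
    "end": 2,
}

def paths(lattice, state=None):
    # Enumerate all segmentations (paths from start to end).
    state = lattice["start"] if state is None else state
    if state == lattice["end"]:
        return [[]]
    return [[(seg, tag)] + rest
            for (s, e, seg, tag) in lattice["edges"] if s == state
            for rest in paths(lattice, e)]

print(paths(lattice))
# [[('bcl', 'NOUN')], [('b', 'PREP'), ('cl', 'NOUN')]]
```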
arXiv Detail & Related papers (2024-02-04T16:56:08Z)
- Unsupervised Chunking with Hierarchical RNN [62.15060807493364]
This paper introduces an unsupervised approach to chunking, a syntactic task that involves grouping words in a non-hierarchical manner.
We present a two-layer Hierarchical Recurrent Neural Network (HRNN) designed to model word-to-chunk and chunk-to-sentence compositions.
Experiments on the CoNLL-2000 dataset reveal a notable improvement over existing unsupervised methods, enhancing phrase F1 score by up to 6 percentage points.
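A rough sketch of how such a two-layer design could be wired, based only on this summary: a word-level RNN composes words into a chunk representation, which a chunk-level RNN then folds into a sentence representation. The hard 0/1 boundary decision below is a simplification; the paper's actual unsupervised boundary mechanism is not specified here.

```python
import torch
import torch.nn as nn

class HRNNChunker(nn.Module):
    def __init__(self, vocab=1000, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.word_rnn = nn.GRUCell(dim, dim)    # composes words into a chunk
        self.chunk_rnn = nn.GRUCell(dim, dim)   # composes chunks into a sentence
        self.boundary = nn.Linear(dim, 1)       # predicts "chunk ends here"

    def forward(self, token_ids):
        word_h = torch.zeros(1, self.emb.embedding_dim)
        sent_h = torch.zeros(1, self.emb.embedding_dim)
        boundaries = []
        for tok in token_ids:
            word_h = self.word_rnn(self.emb(tok).unsqueeze(0), word_h)
            end_chunk = torch.sigmoid(self.boundary(word_h)) > 0.5
            boundaries.append(bool(end_chunk))
            if end_chunk:
                # word-to-chunk composition ends; pass the chunk upward
                sent_h = self.chunk_rnn(word_h, sent_h)
                word_h = torch.zeros_like(word_h)  # reset for the next chunk
        if word_h.abs().sum() > 0:                 # flush an unfinished final chunk
            sent_h = self.chunk_rnn(word_h, sent_h)
        return sent_h, boundaries

chunker = HRNNChunker()
sent_vec, spans = chunker(torch.tensor([5, 17, 3, 99, 4]))
```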
arXiv Detail & Related papers (2023-09-10T02:55:12Z)
- Modeling Hierarchical Reasoning Chains by Linking Discourse Units and Key Phrases for Reading Comprehension [80.99865844249106]
We propose a holistic graph network (HGN) which deals with context at both discourse level and word level, as the basis for logical reasoning.
Specifically, node-level and type-level relations, which can be interpreted as bridges in the reasoning process, are modeled by a hierarchical interaction mechanism.
arXiv Detail & Related papers (2023-06-21T07:34:27Z)
- Physics of Language Models: Part 1, Learning Hierarchical Language Structures [51.68385617116854]
Transformer-based language models are effective but complex, and understanding their inner workings and reasoning mechanisms is a significant challenge.
We introduce a family of synthetic CFGs that produce hierarchical rules, capable of generating lengthy sentences.
We demonstrate that generative models like GPT can accurately learn and reason over CFG-defined hierarchies and generate sentences based on them.
arXiv Detail & Related papers (2023-05-23T04:28:16Z)
- Autoregressive Structured Prediction with Language Models [73.11519625765301]
We describe an approach to model structures as sequences of actions in an autoregressive manner with PLMs.
Our approach achieves the new state-of-the-art on all the structured prediction tasks we looked at.
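The core idea, as stated, is to linearize a structure into a sequence of actions that the model predicts one at a time. A minimal illustration with an invented action inventory (not the paper's exact scheme):

```python
def linearize(tokens, spans):
    """Turn labeled spans into an action sequence over the tokens.

    spans: list of (start, end, label), end exclusive.
    """
    actions = []
    for i, tok in enumerate(tokens):
        for start, end, label in spans:
            if i == start:
                actions.append(f"[{label}")   # open a labeled span
        actions.append(f"COPY({tok})")        # emit the next input token
        for start, end, label in spans:
            if i == end - 1:
                actions.append("]")           # close the span
    return actions

# "Barack Obama visited Paris" with two entity spans:
print(linearize(["Barack", "Obama", "visited", "Paris"],
                [(0, 2, "PER"), (3, 4, "LOC")]))
# ['[PER', 'COPY(Barack)', 'COPY(Obama)', ']', 'COPY(visited)',
#  '[LOC', 'COPY(Paris)', ']']
```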
arXiv Detail & Related papers (2022-10-26T13:27:26Z)
- Recurrent Neural Networks with Mixed Hierarchical Structures and EM Algorithm for Natural Language Processing [9.645196221785694]
We develop an approach called the latent indicator layer to identify and learn implicit hierarchical information.
We also develop an EM algorithm to handle the latent indicator layer in training.
We show that the EM-HRNN model with bootstrap training outperforms other RNN-based models in document classification tasks.
arXiv Detail & Related papers (2022-01-21T23:08:33Z)
- ORCHARD: A Benchmark For Measuring Systematic Generalization of Multi-Hierarchical Reasoning [8.004425059996963]
We show that Transformer and LSTM models surprisingly fail at systematic generalization.
We also show that as references between hierarchies increase, the Transformer performs no better than random.
arXiv Detail & Related papers (2021-11-28T03:11:37Z)
- High-order Semantic Role Labeling [86.29371274587146]
This paper introduces a high-order graph structure for the neural semantic role labeling model.
It enables the model to explicitly consider not only the isolated predicate-argument pairs but also the interaction between the predicate-argument pairs.
Experimental results on 7 languages of the CoNLL-2009 benchmark show that the high-order structural learning techniques are beneficial to the strong performing SRL models.
arXiv Detail & Related papers (2020-10-09T15:33:54Z)
- How much complexity does an RNN architecture need to learn syntax-sensitive dependencies? [9.248882589228089]
Long short-term memory (LSTM) networks are capable of encapsulating long-range dependencies.
Simple recurrent networks (SRNs) have generally been less successful at capturing them.
We propose a new architecture, the Decay RNN, which incorporates the decaying nature of neuronal activations.
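From the phrase "decaying nature of neuronal activations" alone, one plausible reading is a cell whose hidden state is a convex mix of its decayed previous value and a fresh recurrent update. The sketch below implements that guess; the paper's exact formulation may differ.

```python
import numpy as np

def decay_rnn_step(x, h_prev, W_in, W_rec, b, alpha=0.8):
    # alpha in (0, 1) controls how quickly old activations decay away.
    update = np.tanh(W_in @ x + W_rec @ h_prev + b)
    return alpha * h_prev + (1.0 - alpha) * update

rng = np.random.default_rng(0)
d_in, d_h = 8, 16
W_in = rng.normal(size=(d_h, d_in))
W_rec = rng.normal(size=(d_h, d_h))
b = np.zeros(d_h)
h = np.zeros(d_h)
for _ in range(5):                      # run the cell over a toy sequence
    h = decay_rnn_step(rng.normal(size=d_in), h, W_in, W_rec, b)
```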
arXiv Detail & Related papers (2020-05-17T09:13:28Z)
- Tree-structured Attention with Hierarchical Accumulation [103.47584968330325]
"Hierarchical Accumulation" encodes parse tree structures into self-attention at constant time complexity.
Our approach outperforms SOTA methods in four IWSLT translation tasks and the WMT'14 English-German translation task.
arXiv Detail & Related papers (2020-02-19T08:17:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.