Mechanisms for Handling Nested Dependencies in Neural-Network Language
Models and Humans
- URL: http://arxiv.org/abs/2006.11098v2
- Date: Mon, 3 May 2021 06:25:20 GMT
- Authors: Yair Lakretz, Dieuwke Hupkes, Alessandra Vergallito, Marco Marelli,
Marco Baroni, Stanislas Dehaene
- Abstract summary: We studied whether a modern artificial neural network trained with "deep learning" methods mimics a central aspect of human sentence processing.
Although the network was solely trained to predict the next word in a large corpus, analysis showed the emergence of specialized units that successfully handled local and long-distance syntactic agreement.
We tested the model's predictions in a behavioral experiment where humans detected violations in number agreement in sentences with systematic variations in the singular/plural status of multiple nouns.
- Score: 75.15855405318855
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recursive processing in sentence comprehension is considered a hallmark of
human linguistic abilities. However, its underlying neural mechanisms remain
largely unknown. We studied whether a modern artificial neural network trained
with "deep learning" methods mimics a central aspect of human sentence
processing, namely the storing of grammatical number and gender information in
working memory and its use in long-distance agreement (e.g., capturing the
correct number agreement between subject and verb when they are separated by
other phrases). Although the network, a recurrent architecture with Long
Short-Term Memory units, was solely trained to predict the next word in a large
corpus, analysis showed the emergence of a very sparse set of specialized units
that successfully handled local and long-distance syntactic agreement for
grammatical number. However, the simulations also showed that this mechanism
does not support full recursion and fails with some long-range embedded
dependencies. We tested the model's predictions in a behavioral experiment
where humans detected violations in number agreement in sentences with
systematic variations in the singular/plural status of multiple nouns, with or
without embedding. Human and model error patterns were remarkably similar,
showing that the model echoes various effects observed in human data. However,
a key difference was that, with embedded long-range dependencies, humans
remained above chance level, while the model's systematic errors brought it
below chance. Overall, our study shows that exploring the ways in which modern
artificial neural networks process sentences leads to precise and testable
hypotheses about human linguistic performance.
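The agreement-evaluation paradigm described in the abstract — comparing the probability a model assigns to the grammatical versus the ungrammatical verb form after a sentence prefix — can be sketched as a minimal scoring loop. The model interface and probabilities below are hypothetical stand-ins, not the paper's actual LSTM:

```python
def prefers_grammatical(next_word_probs, grammatical, ungrammatical):
    """True if the model assigns higher probability to the
    number-matched verb form than to the mismatched one."""
    return next_word_probs[grammatical] > next_word_probs[ungrammatical]

def agreement_accuracy(model, test_items):
    """Fraction of items where the model prefers the grammatical verb.
    Each item: (prefix_tokens, grammatical_verb, ungrammatical_verb)."""
    correct = sum(
        prefers_grammatical(model(prefix), good, bad)
        for prefix, good, bad in test_items
    )
    return correct / len(test_items)

# Toy stand-in for a trained LM: hand-set next-word probabilities.
# After "the keys to the cabinet", a model that handles long-distance
# agreement should put more mass on plural "are" than singular "is".
toy_model = lambda prefix: {"are": 0.6, "is": 0.3, "was": 0.1}

items = [(["the", "keys", "to", "the", "cabinet"], "are", "is")]
print(agreement_accuracy(toy_model, items))  # 1.0
```

The same loop, run over prefixes with intervening nouns of varying number, yields the error patterns compared against human judgments.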
Related papers
- Pixel Sentence Representation Learning [67.4775296225521]
In this work, we conceptualize the learning of sentence-level textual semantics as a visual representation learning process.
We employ visually-grounded text perturbations such as typos and word-order shuffling, which resonate with human cognitive patterns and allow perturbations to be perceived as continuous.
Our approach is further bolstered by large-scale unsupervised topical alignment training and natural language inference supervision.
arXiv Detail & Related papers (2024-02-13T02:46:45Z)
- Meta predictive learning model of languages in neural circuits [2.5690340428649328]
We propose a mean-field learning model within the predictive coding framework.
Our model reveals that most of the connections become deterministic after learning.
Our model provides a starting point to investigate the connection among brain computation, next-token prediction and general intelligence.
arXiv Detail & Related papers (2023-09-08T03:58:05Z)
- Modeling Target-Side Morphology in Neural Machine Translation: A Comparison of Strategies [72.56158036639707]
Morphologically rich languages pose difficulties to machine translation.
A large amount of differently inflected word surface forms entails a larger vocabulary.
Some inflected forms of infrequent terms typically do not appear in the training corpus.
Linguistic agreement requires the system to correctly match the grammatical categories between inflected word forms in the output sentence.
arXiv Detail & Related papers (2022-03-25T10:13:20Z)
- Dependency-based Mixture Language Models [53.152011258252315]
We introduce the Dependency-based Mixture Language Models.
In detail, we first train neural language models with a novel dependency modeling objective.
We then formulate the next-token probability by mixing the previous dependency modeling probability distributions with self-attention.
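The mixing step described above can be illustrated with a minimal convex combination of two next-token distributions. The paper's actual formulation derives the mixing weights via self-attention; the fixed `alpha` below is an illustrative assumption:

```python
def mix_distributions(p_lm, p_dep, alpha=0.5):
    """Convex mixture of two next-token distributions (dicts over the
    same vocabulary): alpha * LM + (1 - alpha) * dependency model."""
    assert abs(sum(p_lm.values()) - 1.0) < 1e-9
    assert abs(sum(p_dep.values()) - 1.0) < 1e-9
    return {w: alpha * p_lm[w] + (1 - alpha) * p_dep[w] for w in p_lm}

p_lm = {"cat": 0.7, "dog": 0.3}    # plain LM next-token distribution
p_dep = {"cat": 0.2, "dog": 0.8}   # dependency-model distribution
print(mix_distributions(p_lm, p_dep, alpha=0.5))
```

Because both inputs sum to one, any convex mixture is itself a valid distribution.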
arXiv Detail & Related papers (2022-03-19T06:28:30Z)
- Demystifying Neural Language Models' Insensitivity to Word-Order [7.72780997900827]
We investigate the insensitivity of neural language models to word order by quantifying the effect of controlled perturbations.
We find that neural language models rely on local word order more than on the global ordering of tokens.
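The local-versus-global distinction can be made concrete with two perturbation functions: one that shuffles tokens only within fixed-size windows, and one that shuffles across the whole sentence. The window size and seeding are illustrative choices, not the paper's exact setup:

```python
import random

def shuffle_local(tokens, window=3, seed=0):
    """Shuffle tokens only within fixed-size windows, so each token
    stays in its original window (global order of windows preserved)."""
    rng = random.Random(seed)
    out = []
    for i in range(0, len(tokens), window):
        chunk = tokens[i:i + window]
        rng.shuffle(chunk)
        out.extend(chunk)
    return out

def shuffle_global(tokens, seed=0):
    """Shuffle tokens across the whole sentence."""
    rng = random.Random(seed)
    out = list(tokens)
    rng.shuffle(out)
    return out
```

A model that relies mainly on local order should be hurt far more by `shuffle_local` than a bag-of-words model, while `shuffle_global` destroys both local and global structure.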
arXiv Detail & Related papers (2021-07-29T13:34:20Z)
- Preliminary study on using vector quantization latent spaces for TTS/VC systems with consistent performance [55.10864476206503]
We investigate the use of quantized vectors to model the latent linguistic embedding.
By enforcing different policies over the latent spaces in the training, we are able to obtain a latent linguistic embedding.
Our experiments show that the voice cloning system built with vector quantization has only a small degradation in terms of perceptive evaluations.
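The core quantization step — mapping a continuous embedding to its nearest codebook entry — can be sketched as follows (the codebook contents are illustrative; real systems learn them during training):

```python
def quantize(vector, codebook):
    """Return the index of the codebook entry nearest to `vector`
    under squared Euclidean distance: the basic vector-quantization op."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: dist2(vector, codebook[i]))

codebook = [[0.0, 0.0], [1.0, 1.0]]  # two toy latent prototypes
print(quantize([0.9, 1.2], codebook))  # 1
```

Replacing each continuous latent with its nearest prototype is what makes the latent linguistic embedding discrete.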
arXiv Detail & Related papers (2021-06-25T07:51:35Z)
- A Targeted Assessment of Incremental Processing in Neural Language Models and Humans [2.7624021966289605]
We present a scaled-up comparison of incremental processing in humans and neural language models.
Data comes from a novel online experimental paradigm called the Interpolated Maze task.
We find that both humans and language models show increased processing difficulty in ungrammatical sentence regions.
arXiv Detail & Related papers (2021-06-06T20:04:39Z)
- Analyzing Individual Neurons in Pre-trained Language Models [41.07850306314594]
We find that small subsets of neurons predict linguistic tasks, with lower-level tasks localized in fewer neurons than the higher-level task of predicting syntax.
For example, neurons in XLNet are more localized and disjoint when predicting linguistic properties than in BERT and others, where they are more distributed and coupled.
arXiv Detail & Related papers (2020-10-06T13:17:38Z)
- Neural Baselines for Word Alignment [0.0]
We study and evaluate neural models for unsupervised word alignment for four language pairs.
We show that neural versions of the IBM-1 and hidden Markov models vastly outperform their discrete counterparts.
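For reference, the discrete IBM Model 1 baseline that the neural versions are compared against can be estimated with a few EM iterations. The sentence pairs below are a toy illustration; real training uses large bitexts:

```python
from collections import defaultdict

def ibm1_em(pairs, iters=10):
    """Estimate IBM Model 1 word-translation probabilities t(f|e)
    from (source_tokens, target_tokens) sentence pairs via EM."""
    src_vocab = {e for es, _ in pairs for e in es}
    tgt_vocab = {f for _, fs in pairs for f in fs}
    # Initialize t(f|e) uniformly over the target vocabulary.
    t = {e: {f: 1.0 / len(tgt_vocab) for f in tgt_vocab} for e in src_vocab}
    for _ in range(iters):
        count = defaultdict(lambda: defaultdict(float))
        total = defaultdict(float)
        # E-step: collect expected alignment counts.
        for es, fs in pairs:
            for f in fs:
                z = sum(t[e][f] for e in es)
                for e in es:
                    c = t[e][f] / z
                    count[e][f] += c
                    total[e] += c
        # M-step: renormalize counts into probabilities.
        for e in src_vocab:
            for f in tgt_vocab:
                t[e][f] = count[e][f] / total[e]
    return t

pairs = [(["the", "house"], ["la", "maison"]),
         (["the", "book"], ["le", "livre"]),
         (["a", "book"], ["un", "livre"])]
t = ibm1_em(pairs)
```

After a few iterations, "book" concentrates its probability on "livre", since they co-occur in two pairs; neural variants replace the table t(f|e) with a parameterized network.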
arXiv Detail & Related papers (2020-09-28T07:51:03Z)
- Multi-timescale Representation Learning in LSTM Language Models [69.98840820213937]
Language models must capture statistical dependencies between words at timescales ranging from very short to very long.
We derived a theory for how the memory gating mechanism in long short-term memory language models can capture power law decay.
Experiments showed that LSTM language models trained on natural English text learn to approximate this theoretical distribution.
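The link between gating and timescale can be illustrated numerically: a unit whose forget gate sits at a constant value f decays its cell state as f^t, giving an effective memory timescale T = -1/ln(f). This constant-gate approximation is a standard simplification, assumed here for illustration:

```python
import math

def forget_gate_timescale(f):
    """Effective memory timescale of an LSTM unit whose forget gate
    holds a constant value f in (0, 1): the cell state decays as f**t,
    falling to 1/e of its value after T = -1 / ln(f) steps."""
    assert 0.0 < f < 1.0
    return -1.0 / math.log(f)

for f in (0.5, 0.9, 0.99):
    print(f, forget_gate_timescale(f))
```

Gates near 1 thus yield very long timescales, and a power-law distribution over unit timescales corresponds to a particular distribution of learned forget-gate values.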
arXiv Detail & Related papers (2020-09-27T02:13:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.