Assessing the influence of attractor-verb distance on grammatical
agreement in humans and language models
- URL: http://arxiv.org/abs/2311.16978v1
- Date: Tue, 28 Nov 2023 17:25:34 GMT
- Title: Assessing the influence of attractor-verb distance on grammatical
agreement in humans and language models
- Authors: Christos-Nikolaos Zacharopoulos, Théo Desbordes, Mathias
Sablé-Meyer
- Abstract summary: Subject-verb agreement in the presence of an attractor noun located between the main noun and the verb elicits complex behavior.
We modulate the distance between the attractor and the verb while keeping the length of the sentence equal.
We report a linear effect of attractor distance on reaction times.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Subject-verb agreement in the presence of an attractor noun located between
the main noun and the verb elicits complex behavior: judgments of
grammaticality are modulated by the grammatical features of the attractor. For
example, in the sentence "The girl near the boys likes climbing", the attractor
(boys) disagrees in grammatical number with the verb (likes), creating a
locally implausible transition probability. Here, we parametrically modulate
the distance between the attractor and the verb while keeping the length of the
sentence equal. We evaluate the performance of both humans and two artificial
neural network models: both make more mistakes when the attractor is closer to
the verb, but neural networks get close to the chance level while humans are
mostly able to overcome the attractor interference. Additionally, we report a
linear effect of attractor distance on reaction times. We hypothesize that a
possible reason for the proximity effect is the calculation of transition
probabilities between adjacent words. Nevertheless, classical models of
attraction such as the cue-based model might suffice to explain this
phenomenon, thus paving the way for new research. Data and analyses available
at https://osf.io/d4g6k
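The manipulation described in the abstract — shifting the attractor's position relative to the verb while holding total sentence length constant — can be illustrated with a short sketch. This is not the authors' actual stimulus-generation code (their data and analyses are at the OSF link above); the lexical items and helper name are hypothetical, chosen to mirror the example sentence "The girl near the boys likes climbing".

```python
def make_stimulus(distance, total_fillers=3):
    """Build a fixed-length sentence with the attractor ("boys")
    placed `distance` words before the verb ("likes").

    Sentence length stays constant because filler words are
    redistributed: adjectives go before the attractor, adverbs after.
    """
    assert 1 <= distance <= total_fillers + 1
    adjectives = ["tall", "young", "loud"][:total_fillers]   # pre-attractor fillers
    adverbs = ["really", "truly", "often"][:total_fillers]   # post-attractor fillers
    after = distance - 1                  # fillers between attractor and verb
    before = total_fillers - after        # fillers between subject phrase and attractor
    words = (["The", "girl", "near", "the"]
             + adjectives[:before]
             + ["boys"]
             + adverbs[:after]
             + ["likes", "climbing"])
    return " ".join(words)

# distance=1: "The girl near the tall young loud boys likes climbing"
# distance=4: "The girl near the boys really truly often likes climbing"
```

Every output has the same number of words, so any behavioral difference across conditions can be attributed to attractor-verb distance rather than overall sentence length.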
Related papers
- The Causal Influence of Grammatical Gender on Distributional Semantics [87.8027818528463]
How much meaning influences gender assignment across languages is an active area of research in linguistics and cognitive science.
We offer a novel, causal graphical model that jointly represents the interactions between a noun's grammatical gender, its meaning, and adjective choice.
When we control for the meaning of the noun, the relationship between grammatical gender and adjective choice is near zero and insignificant.
arXiv Detail & Related papers (2023-11-30T13:58:13Z) - Probabilistic Transformer: A Probabilistic Dependency Model for
Contextual Word Representation [52.270712965271656]
We propose a new model of contextual word representation, not from a neural perspective, but from a purely syntactic and probabilistic perspective.
We find that the graph of our model resembles transformers, with correspondences between dependencies and self-attention.
Experiments show that our model performs competitively to transformers on small to medium sized datasets.
arXiv Detail & Related papers (2023-11-26T06:56:02Z) - Minimal Effective Theory for Phonotactic Memory: Capturing Local
Correlations due to Errors in Speech [0.0]
Local phonetic correlations in spoken words facilitate their learning by reducing their information content.
We do this by constructing a locally-connected tensor-network model, inspired by similar variational models used for many-body physics.
The model is therefore a minimal model of phonetic memory, where "learning to pronounce" and "learning a word" are one and the same.
arXiv Detail & Related papers (2023-09-04T22:11:26Z) - Language Models Can Learn Exceptions to Syntactic Rules [22.810889064523167]
We show that artificial neural networks can generalize productively to novel contexts.
We also show that the relative acceptability of a verb in the active vs. passive voice is positively correlated with the relative frequency of its occurrence in those voices.
arXiv Detail & Related papers (2023-06-09T15:35:11Z) - Neighboring Words Affect Human Interpretation of Saliency Explanations [65.29015910991261]
Word-level saliency explanations are often used to communicate feature-attribution in text-based models.
Recent studies found that superficial factors such as word length can distort human interpretation of the communicated saliency scores.
We investigate how the marking of a word's neighboring words affects the explainee's perception of that word's importance in the context of a saliency explanation.
arXiv Detail & Related papers (2023-05-04T09:50:25Z) - Shapley Head Pruning: Identifying and Removing Interference in
Multilingual Transformers [54.4919139401528]
We show that it is possible to reduce interference by identifying and pruning language-specific parameters.
We show that removing identified attention heads from a fixed model improves performance for a target language on both sentence classification and structural prediction.
arXiv Detail & Related papers (2022-10-11T18:11:37Z) - Naturalistic Causal Probing for Morpho-Syntax [76.83735391276547]
We suggest a naturalistic strategy for input-level intervention on real world data in Spanish.
Using our approach, we isolate morpho-syntactic features from confounders in sentences.
We apply this methodology to analyze causal effects of gender and number on contextualized representations extracted from pre-trained models.
arXiv Detail & Related papers (2022-05-14T11:47:58Z) - Accounting for Agreement Phenomena in Sentence Comprehension with
Transformer Language Models: Effects of Similarity-based Interference on
Surprisal and Attention [4.103438743479001]
We advance an explanation of similarity-based interference effects in subject-verb and reflexive pronoun agreement processing.
We show that surprisal of the verb or reflexive pronoun predicts facilitatory interference effects in ungrammatical sentences.
arXiv Detail & Related papers (2021-04-26T20:46:54Z) - Word Frequency Does Not Predict Grammatical Knowledge in Language Models [2.1984302611206537]
We investigate whether there are systematic sources of variation in the language models' accuracy.
We find that certain nouns are systematically understood better than others, an effect which is robust across grammatical tasks and different language models.
We find that a novel noun's grammatical properties can be few-shot learned from various types of training data.
arXiv Detail & Related papers (2020-10-26T19:51:36Z) - On the Relationships Between the Grammatical Genders of Inanimate Nouns
and Their Co-Occurring Adjectives and Verbs [57.015586483981885]
We use large-scale corpora in six different gendered languages.
We find statistically significant relationships between the grammatical genders of inanimate nouns and the verbs that take those nouns as direct objects, indirect objects, and as subjects.
arXiv Detail & Related papers (2020-05-03T22:49:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.