Dual Mechanism Priming Effects in Hindi Word Order
- URL: http://arxiv.org/abs/2210.13938v1
- Date: Tue, 25 Oct 2022 11:49:22 GMT
- Title: Dual Mechanism Priming Effects in Hindi Word Order
- Authors: Sidharth Ranjan, Marten van Schijndel, Sumeet Agarwal, Rajakrishnan
Rajkumar
- Abstract summary: We test the hypothesis that priming is driven by multiple different sources.
We permute the preverbal constituents of corpus sentences, and then use a logistic regression model to predict which sentences actually occurred in the corpus.
By showing that different priming influences are separable from one another, our results support the hypothesis that multiple different cognitive mechanisms underlie priming.
- Score: 14.88833412862455
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Word order choices during sentence production can be primed by preceding
sentences. In this work, we test the DUAL MECHANISM hypothesis that priming is
driven by multiple different sources. Using a Hindi corpus of text productions,
we model lexical priming with an n-gram cache model and we capture more
abstract syntactic priming with an adaptive neural language model. We permute
the preverbal constituents of corpus sentences, and then use a logistic
regression model to predict which sentences actually occurred in the corpus
against artificially generated meaning-equivalent variants. Our results
indicate that lexical priming and lexically-independent syntactic priming
affect complementary sets of verb classes. By showing that different priming
influences are separable from one another, our results support the hypothesis
that multiple different cognitive mechanisms underlie priming.
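The variant-generation and lexical-priming steps described in the abstract can be sketched roughly as follows. This is a toy illustration only, assuming whitespace-tokenized romanized Hindi and a simple bigram cache in place of the paper's actual n-gram cache model; the adaptive neural language model and logistic regression stage are not reproduced, and all tokens and function names here are invented for illustration.

```python
# Sketch: permute preverbal constituents to create meaning-equivalent
# variants, then score each variant's lexical overlap with the preceding
# (prime) sentence via a toy bigram cache.
from itertools import permutations


def generate_variants(preverbal, verb):
    """Permute preverbal constituents, keeping the verb final (Hindi is SOV)."""
    return [list(order) + [verb] for order in permutations(preverbal)]


def cache_score(tokens, prime_tokens):
    """Toy bigram cache: fraction of a sentence's bigrams already seen in
    the preceding sentence. Unlike a unigram cache, this is sensitive to
    word order, which is the property the real n-gram cache exploits."""
    cache = set(zip(prime_tokens, prime_tokens[1:]))
    sent_bigrams = list(zip(tokens, tokens[1:]))
    hits = sum(1 for bg in sent_bigrams if bg in cache)
    return hits / len(sent_bigrams)


# Invented romanized example: "raam-ne" (Ram-ERG), "kitaab" (book),
# verb "padhii" (read); prime sentence "Ram brought the book".
variants = generate_variants(["raam-ne", "kitaab"], "padhii")
prime = ["raam-ne", "kitaab", "laayaa"]
scores = {" ".join(v): cache_score(v, prime) for v in variants}
```

The canonical order `raam-ne kitaab padhii` shares a bigram with the prime and scores higher than the scrambled variant, mimicking (in miniature) how a lexical cache would favor word orders that repeat recent material. In the paper, such cache scores and adaptive-LM surprisals feed a logistic regression that predicts which variant actually occurred in the corpus.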
Related papers
- Word Order's Impacts: Insights from Reordering and Generation Analysis [9.0720895802828]
Existing works have studied the impact of the order of words within natural text.
Based on these findings, different hypotheses about word order are proposed.
ChatGPT relies on word order to infer, but cannot confirm or refute the redundancy relation between word order and lexical semantics.
arXiv Detail & Related papers (2024-03-18T04:45:44Z) - Probabilistic Transformer: A Probabilistic Dependency Model for
Contextual Word Representation [52.270712965271656]
We propose a new model of contextual word representation, not from a neural perspective, but from a purely syntactic and probabilistic perspective.
We find that the graph of our model resembles transformers, with correspondences between dependencies and self-attention.
Experiments show that our model performs competitively to transformers on small to medium sized datasets.
arXiv Detail & Related papers (2023-11-26T06:56:02Z) - Why can neural language models solve next-word prediction? A
mathematical perspective [53.807657273043446]
We study a class of formal languages that can be used to model real-world examples of English sentences.
Our proof highlights the different roles of the embedding layer and the fully connected component within the neural language model.
arXiv Detail & Related papers (2023-06-20T10:41:23Z) - Rationalizing Predictions by Adversarial Information Calibration [65.19407304154177]
We train two models jointly: one is a typical neural model that solves the task at hand in an accurate but black-box manner, and the other is a selector-predictor model that additionally produces a rationale for its prediction.
We use an adversarial technique to calibrate the information extracted by the two models such that the difference between them is an indicator of the missed or over-selected features.
arXiv Detail & Related papers (2023-01-15T03:13:09Z) - Discourse Context Predictability Effects in Hindi Word Order [14.88833412862455]
We investigate how the words and syntactic structures in a sentence influence the word order of the following sentences.
We use a number of discourse-based and cognitive features to make our predictions, including dependency length, surprisal, and information status.
We find that information status and LSTM-based discourse predictability influence word order choices, especially for non-canonical object-fronted orders.
arXiv Detail & Related papers (2022-10-25T11:53:01Z) - The Causal Structure of Semantic Ambiguities [0.0]
We identify two features: (1) joint plausibility degrees of different possible interpretations, and (2) causal structures according to which certain words play a more substantial role in the processes.
We applied this theory to a dataset of ambiguous phrases extracted from the psycholinguistics literature, together with human plausibility judgements we collected.
arXiv Detail & Related papers (2022-06-14T12:56:34Z) - Naturalistic Causal Probing for Morpho-Syntax [76.83735391276547]
We suggest a naturalistic strategy for input-level intervention on real world data in Spanish.
Using our approach, we isolate morpho-syntactic features from confounders in sentences.
We apply this methodology to analyze causal effects of gender and number on contextualized representations extracted from pre-trained models.
arXiv Detail & Related papers (2022-05-14T11:47:58Z) - Syntactic Persistence in Language Models: Priming as a Window into
Abstract Language Representations [0.38498574327875945]
We investigate the extent to which modern, neural language models are susceptible to syntactic priming.
We introduce a novel metric and release Prime-LM, a large corpus where we control for various linguistic factors which interact with priming strength.
We report surprisingly strong priming effects when priming with multiple sentences, each with different words and meaning but with identical syntactic structure.
arXiv Detail & Related papers (2021-09-30T10:38:38Z) - Linguistically inspired morphological inflection with a sequence to
sequence model [19.892441884896893]
Our research question is whether a neural network would be capable of learning inflectional morphemes for inflection production.
We are using an inflectional corpus and a single layer seq2seq model to test this hypothesis.
Our character-morpheme-based model creates inflection by predicting the stem character-to-character and the inflectional affixes as character blocks.
arXiv Detail & Related papers (2020-09-04T08:58:42Z) - Modeling Voting for System Combination in Machine Translation [92.09572642019145]
We propose an approach to modeling voting for system combination in machine translation.
Our approach combines the advantages of statistical and neural methods since it can not only analyze the relations between hypotheses but also allow for end-to-end training.
arXiv Detail & Related papers (2020-07-14T09:59:38Z) - Mechanisms for Handling Nested Dependencies in Neural-Network Language
Models and Humans [75.15855405318855]
We studied whether a modern artificial neural network trained with "deep learning" methods mimics a central aspect of human sentence processing.
Although the network was solely trained to predict the next word in a large corpus, analysis showed the emergence of specialized units that successfully handled local and long-distance syntactic agreement.
We tested the model's predictions in a behavioral experiment where humans detected violations in number agreement in sentences with systematic variations in the singular/plural status of multiple nouns.
arXiv Detail & Related papers (2020-06-19T12:00:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.