A bifurcation threshold for contact-induced language change
- URL: http://arxiv.org/abs/2111.12061v1
- Date: Tue, 23 Nov 2021 18:21:12 GMT
- Title: A bifurcation threshold for contact-induced language change
- Authors: Henri Kauhanen
- Abstract summary: This paper proposes a mathematical model of such situations based on reinforcement learning and nonlinear dynamics.
The model is evaluated with the help of two case studies, morphological levelling in Afrikaans and the erosion of null subjects in Afro-Peruvian Spanish.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One proposed mechanism of language change concerns the role played by
second-language (L2) learners in situations of language contact. If
sufficiently many L2 speakers are present in a speech community in relation to
the number of first-language (L1) speakers, then those features which present a
difficulty in L2 acquisition may be prone to disappearing from the language.
This paper proposes a mathematical model of such contact situations based on
reinforcement learning and nonlinear dynamics. The equilibria of a
deterministic reduction of a full stochastic model, describing a mixed
population of L1 and L2 speakers, are fully characterized. Whether or not the
language changes in response to the introduction of L2 learners turns out to
depend on three factors: the overall proportion of L2 learners in the
population, the relative advantages of the linguistic variants in question, and
the strength of the difficulty speakers face in acquiring the language as an
L2. These factors are related by a mathematical formula describing a phase
transition from retention of the L2-difficult feature to its loss from both
speaker populations. This supplies predictions that can be tested against
empirical data. Here, the model is evaluated with the help of two case studies,
morphological levelling in Afrikaans and the erosion of null subjects in
Afro-Peruvian Spanish; the model is found to be broadly in agreement with the
historical development in both cases.
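The qualitative behaviour the abstract describes — a threshold in the proportion of L2 learners beyond which an L2-difficult feature is lost — can be illustrated with a toy deterministic iteration. This is a minimal sketch, not the paper's actual equations: the update rule, the parameters `p` (proportion of L2 learners), `s` (advantage of the L2-difficult variant), and `d` (strength of the L2 acquisition difficulty), and all numeric values below are invented for illustration.

```python
# Toy dynamic (NOT the paper's exact model): x in [0, 1] is the
# community-wide usage rate of the L2-difficult variant. L1 learners
# converge to an advantage-biased target; L2 learners reach the same
# target but lose the feature with probability d. The population is a
# mix of a proportion p of L2 speakers and 1 - p of L1 speakers.

def step(x, p, s, d):
    # Hypothetical advantage-biased target for an L1 learner.
    l1_target = (1 + s) * x / ((1 + s) * x + (1 - x))
    # L2 learners acquire the feature at a rate reduced by d.
    l2_target = (1 - d) * l1_target
    return (1 - p) * l1_target + p * l2_target

def equilibrium(p, s, d, x0=0.99, iters=5000):
    x = x0
    for _ in range(iters):
        x = step(x, p, s, d)
    return x

# Sweeping p shows a bifurcation: below a critical proportion of L2
# learners the variant settles at a positive usage rate (retention);
# above it, the only stable equilibrium is x = 0 (loss).
for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"p = {p:.1f}  ->  equilibrium usage = {equilibrium(p, 0.2, 0.5):.3f}")
```

For this toy map the nonzero fixed point x* = ((1 - p*d)(1 + s) - 1)/s exists only when p < s/(d(1 + s)) (about 0.33 for s = 0.2, d = 0.5), giving a closed-form phase boundary in the three factors the abstract names — the same kind of prediction, though not the same formula, as the paper's threshold.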
Related papers
- Learning to Write Rationally: How Information Is Distributed in Non-Native Speakers' Essays [1.5039745292757671]
We compare essays written by second language learners with various native language (L1) backgrounds to investigate how they distribute information in their non-native language (L2) production.
Analyses of surprisal and constancy of entropy rate indicated that writers with higher L2 proficiency can reduce the expected uncertainty of language production while still conveying informative content.
arXiv Detail & Related papers (2024-11-05T23:09:37Z)
- Language Proficiency and F0 Entrainment: A Study of L2 English Imitation in Italian, French, and Slovak Speakers [48.3822861675732]
This study explores F0 entrainment in second language (L2) English speech imitation during an Alternating Reading Task (ART), in which participants with Italian, French, and Slovak native languages imitated English utterances.
Results indicate a nuanced relationship between L2 English proficiency and entrainment.
arXiv Detail & Related papers (2024-04-16T10:10:19Z)
- Language Representation Projection: Can We Transfer Factual Knowledge across Languages in Multilingual Language Models? [48.88328580373103]
We propose two parameter-free Language Representation Projection modules (LRP2).
The first module converts non-English representations into English-like equivalents, while the second module reverts English-like representations back into representations of the corresponding non-English language.
Experimental results on the mLAMA dataset demonstrate that LRP2 significantly improves factual knowledge retrieval accuracy and facilitates knowledge transferability across diverse non-English languages.
arXiv Detail & Related papers (2023-11-07T08:16:16Z)
- Quantifying the Dialect Gap and its Correlates Across Languages [69.18461982439031]
This work lays the foundation for furthering the field of dialectal NLP by documenting evident disparities and identifying possible pathways for addressing them through mindful data collection.
arXiv Detail & Related papers (2023-10-23T17:42:01Z)
- SLABERT Talk Pretty One Day: Modeling Second Language Acquisition with BERT [0.0]
Cross-linguistic transfer is the influence of linguistic structure of a speaker's native language on the successful acquisition of a foreign language.
We find that NLP literature has not given enough attention to the phenomenon of negative transfer.
Our findings call for further research using our novel Transformer-based SLA models.
arXiv Detail & Related papers (2023-05-31T06:22:07Z)
- Mitigating Data Imbalance and Representation Degeneration in Multilingual Machine Translation [103.90963418039473]
Bi-ACL is a framework that uses only target-side monolingual data and a bilingual dictionary to improve the performance of a multilingual neural machine translation (MNMT) model.
We show that Bi-ACL is more effective both in long-tail languages and in high-resource languages.
arXiv Detail & Related papers (2023-05-22T07:31:08Z)
- AM2iCo: Evaluating Word Meaning in Context across Low-Resource Languages with Adversarial Examples [51.048234591165155]
We present AM2iCo, Adversarial and Multilingual Meaning in Context.
It aims to faithfully assess the ability of state-of-the-art (SotA) representation models to understand the identity of word meaning in cross-lingual contexts.
Results reveal that current SotA pretrained encoders substantially lag behind human performance.
arXiv Detail & Related papers (2021-04-17T20:23:45Z)
- Adapt-and-Adjust: Overcoming the Long-Tail Problem of Multilingual Speech Recognition [58.849768879796905]
We propose Adapt-and-Adjust (A2), a transformer-based multi-task learning framework for end-to-end multilingual speech recognition.
The A2 framework overcomes the long-tail problem via three techniques: (1) exploiting a pretrained multilingual language model (mBERT) to improve the performance of low-resource languages; (2) proposing dual adapters consisting of both language-specific and language-agnostic adaptation with minimal additional parameters; and (3) overcoming the class imbalance, either by imposing class priors in the loss during training or adjusting the logits of the softmax output during inference.
arXiv Detail & Related papers (2020-12-03T03:46:16Z)
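Technique (3) in the A2 summary above (imposing class priors or adjusting softmax logits) is a standard long-tail recipe and can be sketched as follows. The exact formulation used in A2 may differ; the temperature `tau` and the prior values here are illustrative, not taken from the paper.

```python
import math

def adjust_logits(logits, priors, tau=1.0):
    # Subtract (scaled) log class priors from the raw logits at inference
    # time: frequent head classes are penalized, rare tail classes boosted.
    return [z - tau * math.log(p) for z, p in zip(logits, priors)]

# A head class (prior 0.9) and a tail class (prior 0.1) with equal raw
# logits: after adjustment the tail class scores higher.
raw = [2.0, 2.0]
priors = [0.9, 0.1]
adjusted = adjust_logits(raw, priors)
```

The same adjustment can instead be folded into the training loss as a class-prior offset; both variants address the class imbalance the summary describes.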
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.