Evolution of grammatical forms: some quantitative approaches
- URL: http://arxiv.org/abs/2302.02655v1
- Date: Mon, 6 Feb 2023 09:50:48 GMT
- Title: Evolution of grammatical forms: some quantitative approaches
- Authors: Jean-Marc Luck and Anita Mehta
- Abstract summary: Grammatical forms are said to evolve via two main mechanisms.
These are the 'descent' mechanism and the 'contact' mechanism.
We use ideas and concepts from statistical physics to formulate a series of static and dynamical models.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Grammatical forms are said to evolve via two main mechanisms. These are,
respectively, the 'descent' mechanism, where current forms can be seen to have
descended (albeit with occasional modifications) from their roots in ancient
languages, and the 'contact' mechanism, where evolution in a given language
occurs via borrowing from other languages with which it is in contact. We use
ideas and concepts from statistical physics to formulate a series of static and
dynamical models which illustrate these issues in general terms. The static
models emphasise the relative numbers of rules and exceptions, while the
dynamical models focus on the emergence of exceptional forms. These unlikely
survivors among various competing grammatical forms are winners against the
odds. Our analysis suggests that they emerge when the influence of neighbouring
languages exceeds the generic tendency towards regularisation within individual
languages.
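The abstract's central mechanism, contact pressure from neighbouring languages competing against an internal drive towards regularisation, can be illustrated with a minimal agent-based sketch. This is not the model from the paper: the update rule, the `simulate` function, and the rates `r` (regularisation) and `c` (contact) are illustrative assumptions.

```python
# Toy sketch (not the authors' model): speakers of language A choose
# between a regular form and an 'exceptional' form.  At each update a
# randomly chosen speaker either regularises with probability r, or,
# failing that, copies language B (where the exceptional form is
# entrenched) with probability c.  All rates are illustrative.
import random

def simulate(n_speakers=1000, r=0.02, c=0.05, steps=200_000, seed=0):
    """Return the final share of language-A speakers using the
    exceptional form, starting from a fully exceptional population."""
    rng = random.Random(seed)
    speakers = [True] * n_speakers  # True = uses the exceptional form
    for _ in range(steps):
        i = rng.randrange(n_speakers)
        if rng.random() < r:
            speakers[i] = False   # internal pull towards the regular form
        elif rng.random() < c:
            speakers[i] = True    # borrowing via contact with language B
    return sum(speakers) / n_speakers

if __name__ == "__main__":
    for c in (0.0, 0.01, 0.05):
        share = simulate(c=c)
        print(f"contact rate c={c:.2f}: exceptional-form share = {share:.2f}")
```

In this toy dynamics the stationary share of the exceptional form is roughly (1-r)c / (r + (1-r)c): it decays to zero without contact and survives, against the regularising odds, once c is comparable to or larger than r, in line with the abstract's conclusion.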
Related papers
- Analyzing The Language of Visual Tokens [48.62180485759458]
We take a natural-language-centric approach to analyzing discrete visual languages.
We show that higher token innovation drives greater entropy and lower compression, with tokens predominantly representing object parts.
We also show that visual languages lack cohesive grammatical structures, leading to higher perplexity and weaker hierarchical organization compared to natural languages.
arXiv Detail & Related papers (2024-11-07T18:59:28Z)
- Geometry of Language [0.0]
We present a fresh perspective on language, combining ideas from various sources into a new synthesis.
The question is whether we can formulate an elegant formalism, a universal grammar or a mechanism which explains significant aspects of the human faculty of language.
We describe such a mechanism, which differs from existing logical and grammatical approaches by its geometric nature.
arXiv Detail & Related papers (2023-03-09T12:22:28Z)
- Transparency Helps Reveal When Language Models Learn Meaning [71.96920839263457]
Our systematic experiments with synthetic data reveal that, with languages where all expressions have context-independent denotations, both autoregressive and masked language models learn to emulate semantic relations between expressions.
Turning to natural language, our experiments with a specific phenomenon -- referential opacity -- add to the growing body of evidence that current language models do not well-represent natural language semantics.
arXiv Detail & Related papers (2022-10-14T02:35:19Z)
- Same Neurons, Different Languages: Probing Morphosyntax in Multilingual Pre-trained Models [84.86942006830772]
We conjecture that multilingual pre-trained models can derive language-universal abstractions about grammar.
We conduct the first large-scale empirical study over 43 languages and 14 morphosyntactic categories with a state-of-the-art neuron-level probe.
arXiv Detail & Related papers (2022-05-04T12:22:31Z)
- Oracle Linguistic Graphs Complement a Pretrained Transformer Language Model: A Cross-formalism Comparison [13.31232311913236]
We examine the extent to which, in principle, linguistic graph representations can complement and improve neural language modeling.
We find that, overall, semantic constituency structures are most useful to language modeling performance.
arXiv Detail & Related papers (2021-12-15T04:29:02Z)
- Language Model Evaluation Beyond Perplexity [47.268323020210175]
We analyze whether text generated from language models exhibits the statistical tendencies present in the human-generated text on which they were trained.
We find that neural language models appear to learn only a subset of the tendencies considered, but align much more closely with empirical trends than proposed theoretical distributions.
arXiv Detail & Related papers (2021-05-31T20:13:44Z)
- How individuals change language [1.2437226707039446]
We introduce a very general mathematical model that encompasses a wide variety of individual-level linguistic behaviours.
We compare the likelihood of empirically-attested changes in definite and indefinite articles in multiple languages under different assumptions.
We find that accounts of language change that appeal primarily to errors in childhood language acquisition are very weakly supported by the historical data.
arXiv Detail & Related papers (2021-04-20T19:02:49Z)
- Investigating Cross-Linguistic Adjective Ordering Tendencies with a Latent-Variable Model [66.84264870118723]
We present the first purely corpus-driven model of multi-lingual adjective ordering in the form of a latent-variable model.
We provide strong converging evidence for the existence of universal, cross-linguistic, hierarchical adjective ordering tendencies.
arXiv Detail & Related papers (2020-10-09T18:27:55Z)
- On the coexistence of competing languages [0.0]
We revisit the question of language competition, with an emphasis on uncovering the ways in which coexistence might emerge.
We find that this emergence is related to symmetry breaking, and explore two particular scenarios.
For each of these, the investigation of paradigmatic situations leads us to a quantitative understanding of the conditions leading to language coexistence.
arXiv Detail & Related papers (2020-03-10T14:06:55Z)
- A Simple Joint Model for Improved Contextual Neural Lemmatization [60.802451210656805]
We present a simple joint neural model for lemmatization and morphological tagging that achieves state-of-the-art results on 20 languages.
Our paper describes the model in addition to training and decoding procedures.
arXiv Detail & Related papers (2019-04-04T02:03:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.