The optimality of syntactic dependency distances
- URL: http://arxiv.org/abs/2007.15342v4
- Date: Mon, 4 Oct 2021 18:59:21 GMT
- Title: The optimality of syntactic dependency distances
- Authors: Ramon Ferrer-i-Cancho, Carlos Gómez-Rodríguez, Juan Luis Esteban,
and Lluís Alemany-Puig
- Abstract summary: We recast the problem of the optimality of the word order of a sentence as an optimization problem on a spatial network.
We introduce a new score to quantify the cognitive pressure to reduce the distance between linked words in a sentence.
The analysis of sentences from 93 languages reveals that half of the languages are optimized to 70% or more.
- Score: 0.802904964931021
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is often stated that human languages, like other biological
systems, are shaped by cost-cutting pressures, but to what extent? Attempts to
quantify the
degree of optimality of languages by means of an optimality score have been
scarce and focused mostly on English. Here we recast the problem of the
optimality of the word order of a sentence as an optimization problem on a
spatial network where the vertices are words, arcs indicate syntactic
dependencies and the space is defined by the linear order of the words in the
sentence. We introduce a new score to quantify the cognitive pressure to reduce
the distance between linked words in a sentence. The analysis of sentences from
93 languages representing 19 linguistic families reveals that half of the
languages are optimized to 70% or more. The score indicates that distances are not
significantly reduced in a few languages and confirms two theoretical
predictions, i.e. that longer sentences are more optimized and that distances
are more likely to be longer than expected by chance in short sentences. We
present a new hierarchical ranking of languages by their degree of
optimization. The new score has implications for various fields of language
research (dependency linguistics, typology, historical linguistics, clinical
linguistics and cognitive science). Finally, the principles behind the design
of the score have implications for network science.
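As a minimal sketch of how such a score can be computed (our illustration, assuming the usual normalization that places the observed total dependency distance between its random-ordering expectation and its minimum; the paper's exact estimator may differ), consider:

```python
from itertools import permutations

def total_dependency_length(order, edges):
    """Sum of |pos(u) - pos(v)| over the syntactic dependencies `edges`,
    where `order` is the sequence of words in the sentence."""
    pos = {w: i for i, w in enumerate(order)}
    return sum(abs(pos[u] - pos[v]) for u, v in edges)

def optimality_score(order, edges):
    """Normalized optimality: 1 when total distance is minimal, 0 at the
    random-ordering baseline, negative when worse than chance."""
    n = len(order)
    d = total_dependency_length(order, edges)
    # Under a uniformly random ordering, an edge spans (n + 1) / 3 positions
    # on average, and a dependency tree over n words has n - 1 edges.
    d_random = (n - 1) * (n + 1) / 3
    # Minimum linear arrangement by brute force (toy sentences only;
    # polynomial-time algorithms exist for trees).
    d_min = min(total_dependency_length(p, edges) for p in permutations(order))
    return (d_random - d) / (d_random - d_min)

# Toy sentence "she saw it": 'saw' governs 'she' and 'it'.
edges = [("saw", "she"), ("saw", "it")]
print(optimality_score(("she", "saw", "it"), edges))  # 1.0, the optimal order
print(optimality_score(("saw", "she", "it"), edges))  # -0.5, worse than chance
```

The brute-force minimum restricts this sketch to very short sentences; efficient minimum linear arrangement algorithms for trees make the computation feasible for real corpora.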
Related papers
- Syntactic Language Change in English and German: Metrics, Parsers, and Convergences [56.47832275431858]
The current paper looks at diachronic trends in syntactic language change in both English and German, using corpora of parliamentary debates from approximately the last 160 years.
We base our observations on five dependency parsers, including the widely used Stanford CoreNLP as well as four newer alternatives.
We show that changes in syntactic measures seem to be more frequent at the tails of sentence length distributions.
arXiv Detail & Related papers (2024-02-18T11:46:16Z)
- Analyzing and Reducing the Performance Gap in Cross-Lingual Transfer with Fine-tuning Slow and Fast [50.19681990847589]
Existing research has shown that a multilingual pre-trained language model fine-tuned with one (source) language also performs well on downstream tasks for non-source languages.
This paper analyzes the fine-tuning process, discovers when the performance gap changes and identifies which network weights affect the overall performance most.
arXiv Detail & Related papers (2023-05-19T06:04:21Z)
- A bounded rationality account of dependency length minimization in Hindi [0.0]
The principle of DEPENDENCY LENGTH MINIMIZATION is thought to shape the structure of human languages for effective communication.
Placing long constituents before short ones preverbally, and short before long postverbally, is known to minimize the overall dependency length of a sentence (a toy computation below illustrates the preverbal case).
In this study, we test the hypothesis that placing only the shortest preverbal constituent next to the main-verb explains word order preferences in Hindi.
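For intuition, the toy computation below (our own illustration, not the paper's model) compares total dependency length under the two preverbal orders in an invented verb-final clause:

```python
def total_dependency_length(heads):
    """heads[i] is the index of word i's head (None for the root);
    total dependency length is the sum of |i - heads[i]|."""
    return sum(abs(i - h) for i, h in enumerate(heads) if h is not None)

# Invented verb-final clause: a 3-word constituent B (internal chain
# B0 -> B1 -> B2, head word last) and a 1-word constituent A, with the
# heads of both constituents depending on the clause-final verb V.

# Long-before-short: B0 B1 B2 A V
long_first = [1, 2, 4, 4, None]
# Short-before-long: A B0 B1 B2 V
short_first = [4, 2, 3, 4, None]

print(total_dependency_length(long_first))   # 5
print(total_dependency_length(short_first))  # 7: long-before-short wins preverbally
```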
arXiv Detail & Related papers (2023-04-22T13:53:50Z)
- Relationship of the language distance to English ability of a country [0.0]
We introduce a novel solution to measure the semantic dissimilarity between languages.
We empirically examine the effectiveness of the proposed semantic language distance.
The experimental results show that the language distance has a negative influence on a country's average English ability.
arXiv Detail & Related papers (2022-11-15T02:40:00Z)
- The optimality of word lengths. Theoretical foundations and an empirical study [0.7682551949752529]
Zipf's law of abbreviation has been viewed as a manifestation of compression.
We quantify for the first time the degree of optimality of word lengths in languages.
In general, spoken word durations are more optimized than written word lengths in characters.
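One plausible way to quantify that optimality is sketched below, assuming word lengths are scored with the same style of normalization as the dependency-distance score above; the toy lexicon and the estimator are our assumptions, not the paper's method:

```python
def word_length_optimality(freqs, lengths):
    """1 when the most frequent words have the shortest lengths,
    0 at the random-pairing baseline (a sketch, not the paper's estimator)."""
    total = sum(freqs)
    p = [f / total for f in freqs]
    actual = sum(pi * li for pi, li in zip(p, lengths))
    # Random baseline: lengths paired with words at random, so the
    # expected frequency-weighted length is the plain mean length.
    baseline = sum(lengths) / len(lengths)
    # Best case: shortest lengths assigned to the most frequent words.
    best = sum(pi * li for pi, li in zip(sorted(p, reverse=True), sorted(lengths)))
    return (baseline - actual) / (baseline - best)

# Toy lexicon, roughly Zipfian: one long-ish frequent word keeps it
# short of perfect optimization.
print(word_length_optimality([100, 30, 10, 3], [2, 6, 4, 9]))  # ~0.88
```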
arXiv Detail & Related papers (2022-08-22T15:03:31Z)
- Long-range and hierarchical language predictions in brains and algorithms [82.81964713263483]
We show that while deep language algorithms are optimized to predict adjacent words, the human brain would be tuned to make long-range and hierarchical predictions.
This study strengthens predictive coding theory and suggests a critical role of long-range and hierarchical predictions in natural language processing.
arXiv Detail & Related papers (2021-11-28T20:26:07Z)
- Generalized Optimal Linear Orders [9.010643838773477]
The sequential structure of language, and the order of words in a sentence specifically, plays a central role in human language processing.
In designing computational models of language, the de facto approach is to present sentences to machines with the words in the same order as in the original human-authored sentence.
The very essence of this work is to question the implicit assumption that this is desirable and inject theoretical soundness into the consideration of word order in natural language processing.
arXiv Detail & Related papers (2021-08-13T13:10:15Z)
- Neural Abstructions: Abstractions that Support Construction for Grounded Language Learning [69.1137074774244]
Leveraging language interactions effectively requires addressing limitations in the two most common approaches to language grounding.
We introduce the idea of neural abstructions: a set of constraints on the inference procedure of a label-conditioned generative model.
We show that with this method a user population is able to build a semantic modification for an open-ended house task in Minecraft.
arXiv Detail & Related papers (2021-07-20T07:01:15Z)
- Speakers Fill Lexical Semantic Gaps with Context [65.08205006886591]
We operationalise the lexical ambiguity of a word as the entropy of meanings it can take.
We find significant correlations between our estimate of ambiguity and the number of synonyms a word has in WordNet.
This suggests that, in the presence of ambiguity, speakers compensate by making contexts more informative.
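To make the entropy operationalization concrete, a minimal snippet (the sense distributions here are invented for illustration):

```python
import math

def meaning_entropy(sense_probs):
    """Lexical ambiguity of a word, operationalized as the entropy
    (in bits) of its distribution over meanings."""
    return -sum(p * math.log2(p) for p in sense_probs if p > 0)

# Two senses used equally often: maximal ambiguity for two senses.
print(meaning_entropy([0.5, 0.5]))    # 1.0 bit
# One dominant sense: barely ambiguous.
print(meaning_entropy([0.95, 0.05]))  # ~0.29 bits
```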
arXiv Detail & Related papers (2020-10-05T17:19:10Z)
- Where New Words Are Born: Distributional Semantic Analysis of Neologisms and Their Semantic Neighborhoods [51.34667808471513]
We investigate the importance of two factors, semantic sparsity and frequency growth rates of semantic neighbors, formalized in the distributional semantics paradigm.
We show that both factors are predictive of word emergence, although we find more support for the latter hypothesis.
arXiv Detail & Related papers (2020-01-21T19:09:49Z)