Neural Polysynthetic Language Modelling
- URL: http://arxiv.org/abs/2005.05477v2
- Date: Wed, 13 May 2020 10:46:29 GMT
- Title: Neural Polysynthetic Language Modelling
- Authors: Lane Schwartz, Francis Tyers, Lori Levin, Christo Kirov, Patrick
Littell, Chi-kiu Lo, Emily Prud'hommeaux, Hyunji Hayley Park, Kenneth
Steimel, Rebecca Knowles, Jeffrey Micher, Lonny Strunk, Han Liu, Coleman
Haley, Katherine J. Zhang, Robbie Jimmerson, Vasilisa Andriyanets, Aldrian
Obaja Muis, Naoki Otani, Jong Hyuk Park, and Zhisong Zhang
- Abstract summary: In high-resource languages, a common approach is to treat morphologically-distinct variants of a common root as completely independent word types.
This assumes that there are limited inflections per root and that the majority will appear in a large enough corpus.
We examine the current state-of-the-art in language modelling, machine translation, and text prediction for four polysynthetic languages.
- Score: 15.257624461339867
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Research in natural language processing commonly assumes that approaches that
work well for English and other widely-used languages are "language
agnostic". In high-resource languages, especially those that are analytic, a
common approach is to treat morphologically-distinct variants of a common root
as completely independent word types. This assumes that there are limited
morphological inflections per root, and that the majority will appear in a
large enough corpus, so that the model can adequately learn statistics about
each form. Approaches like stemming, lemmatization, or subword segmentation are
often used when either of those assumptions does not hold, particularly in the
case of synthetic languages like Spanish or Russian that have more inflection
than English.
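To make the contrast concrete, here is a toy Python sketch (not from the paper; the Spanish forms and the segmentation are hand-picked for illustration rather than produced by a trained segmenter) of how a word-level vocabulary treats each inflected form as an unrelated type, while a subword segmentation lets related forms share a stem:

```python
# Toy contrast between word-level and subword vocabularies for inflected forms
# of Spanish "correr". Segments are hand-picked, not output of a trained model.
forms = ["corro", "corres", "corre", "corremos", "corrieron"]

# Word-level: every surface form gets its own ID; nothing is shared across forms.
word_vocab = {form: i for i, form in enumerate(forms)}
print(len(word_vocab))  # 5 unrelated types

# Subword-level: a shared stem plus a small suffix inventory covers all forms,
# so statistics about the stem are pooled across every inflection.
subword_segmentation = {
    "corro":     ["corr", "o"],
    "corres":    ["corr", "es"],
    "corre":     ["corr", "e"],
    "corremos":  ["corr", "emos"],
    "corrieron": ["corr", "ieron"],
}
subword_types = sorted({piece for segs in subword_segmentation.values() for piece in segs})
print(subword_types)  # ['corr', 'e', 'emos', 'es', 'ieron', 'o']
```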
In the literature, languages like Finnish or Turkish are held up as extreme
examples of complexity that challenge common modelling assumptions. Yet, when
considering all of the world's languages, Finnish and Turkish are closer to the
average case. When we consider polysynthetic languages (those at the extreme of
morphological complexity), approaches like stemming, lemmatization, or subword
modelling may not suffice. These languages have very high numbers of hapax
legomena, showing the need for appropriate morphological handling of words,
without which it is not possible for a model to capture enough word statistics.
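The statistic behind this claim can be sketched in a few lines; the snippet below (an illustration assuming simple whitespace tokenization, not the paper's experimental setup) computes the fraction of word types that occur exactly once:

```python
from collections import Counter

def hapax_rate(tokens):
    """Fraction of word types occurring exactly once (hapax legomena)."""
    counts = Counter(tokens)
    hapaxes = sum(1 for c in counts.values() if c == 1)
    return hapaxes / len(counts)

corpus = "the dog saw the dog and the cat saw a bird".split()
print(hapax_rate(corpus))  # ~0.57: 4 of the 7 word types occur exactly once
```

In a highly polysynthetic corpus this rate is far higher, because productive morphology keeps generating surface forms that a word-level model has never seen.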
We examine the current state-of-the-art in language modelling, machine
translation, and text prediction for four polysynthetic languages: Guaraní,
St. Lawrence Island Yupik, Central Alaskan Yupik, and Inuktitut. We then
propose a novel framework for language modelling that combines knowledge
representations from finite-state morphological analyzers with Tensor Product
Representations in order to enable neural language models capable of handling
the full range of typologically variant languages.
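As a rough illustration of the kind of representation the proposed framework builds on, the sketch below encodes a morphological analysis as a Tensor Product Representation: each morpheme (filler) is bound to a structural role by an outer product, and the bindings are summed. The vector sizes, role inventory, analyzer output format, and the Yupik-style example are assumptions for illustration, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
dim_filler, dim_role = 8, 4

# Hypothetical output of a finite-state morphological analyzer for one word:
# a root plus suffixes (illustrative segmentation and labels only).
analysis = ["root:neghe", "suffix:-yug", "suffix:-tuq"]

# Random embeddings stand in for learned filler (morpheme) and role (slot) vectors.
fillers = [rng.normal(size=dim_filler) for _ in analysis]
roles = [rng.normal(size=dim_role) for _ in analysis]

# Tensor Product Representation: sum of outer products of filler and role vectors,
# giving a fixed-size encoding regardless of how many morphemes the word has.
tpr = sum(np.outer(f, r) for f, r in zip(fillers, roles))
print(tpr.shape)  # (8, 4)

# Unbinding (approximate when role vectors are not orthonormal): contract the
# TPR with a role vector to recover the filler bound to that slot.
recovered_root = tpr @ roles[0] / (roles[0] @ roles[0])
```

The fixed-size tensor is what would let a neural language model consume the analyzer's variable-length analyses in a uniform way.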
Related papers
- The Less the Merrier? Investigating Language Representation in
Multilingual Models [8.632506864465501]
We investigate the linguistic representation of different languages in multilingual models.
We observe from our experiments that community-centered models perform better at distinguishing between languages in the same family for low-resource languages.
arXiv Detail & Related papers (2023-10-20T02:26:34Z)
- Language Embeddings Sometimes Contain Typological Generalizations [0.0]
We train neural models for a range of natural language processing tasks on a massively multilingual dataset of Bible translations in 1295 languages.
The learned language representations are then compared to existing typological databases as well as to a novel set of quantitative syntactic and morphological features.
We conclude that some generalizations are surprisingly close to traditional features from linguistic typology, but that most models, as well as those of previous work, do not appear to have made linguistically meaningful generalizations.
arXiv Detail & Related papers (2023-01-19T15:09:59Z)
- Universal and Independent: Multilingual Probing Framework for Exhaustive Model Interpretation and Evaluation [0.04199844472131922]
We present and apply a GUI-assisted framework that allows us to easily probe a massive number of languages.
Most of the regularities revealed in the mBERT model are typical of Western European languages.
Our framework can be integrated with the existing probing toolboxes, model cards, and leaderboards.
arXiv Detail & Related papers (2022-10-24T13:41:17Z)
- Same Neurons, Different Languages: Probing Morphosyntax in Multilingual Pre-trained Models [84.86942006830772]
We conjecture that multilingual pre-trained models can derive language-universal abstractions about grammar.
We conduct the first large-scale empirical study over 43 languages and 14 morphosyntactic categories with a state-of-the-art neuron-level probe.
arXiv Detail & Related papers (2022-05-04T12:22:31Z)
- Discovering Representation Sprachbund For Multilingual Pre-Training [139.05668687865688]
We generate language representation from multilingual pre-trained models and conduct linguistic analysis.
We cluster all the target languages into multiple groups and name each group as a representation sprachbund.
Experiments are conducted on cross-lingual benchmarks and significant improvements are achieved compared to strong baselines.
arXiv Detail & Related papers (2021-09-01T09:32:06Z)
- Towards Zero-shot Language Modeling [90.80124496312274]
We construct a neural model that is inductively biased towards learning human languages.
We infer this distribution from a sample of typologically diverse training languages.
We harness additional language-specific side information as distant supervision for held-out languages.
arXiv Detail & Related papers (2021-08-06T23:49:18Z)
- Linguistic Typology Features from Text: Inferring the Sparse Features of World Atlas of Language Structures [73.06435180872293]
We construct a recurrent neural network predictor based on byte embeddings and convolutional layers.
We show that some features from various linguistic types can be predicted reliably.
arXiv Detail & Related papers (2020-04-30T21:00:53Z)
- Do Neural Language Models Show Preferences for Syntactic Formalisms? [14.388237635684737]
We study the extent to which the semblance of syntactic structure captured by language models adheres to a surface-syntactic or deep syntactic style of analysis.
We apply a probe for extracting directed dependency trees to BERT and ELMo models trained on 13 different languages.
We find that both models exhibit a preference for UD over SUD - with interesting variations across languages and layers.
arXiv Detail & Related papers (2020-04-29T11:37:53Z)
- Limits of Detecting Text Generated by Large-Scale Language Models [65.46403462928319]
Some consider large-scale language models that can generate long and coherent pieces of text as dangerous, since they may be used in misinformation campaigns.
Here we formulate large-scale language model output detection as a hypothesis testing problem to classify text as genuine or generated.
arXiv Detail & Related papers (2020-02-09T19:53:23Z)
- An Empirical Study of Factors Affecting Language-Independent Models [11.976665726887733]
We show that language-independent models can be comparable to or even outperform models trained on monolingual data.
We experiment with language-independent models across many different languages and show that they are more suitable for typologically similar languages.
arXiv Detail & Related papers (2019-12-30T22:41:57Z)
- A Simple Joint Model for Improved Contextual Neural Lemmatization [60.802451210656805]
We present a simple joint neural model for lemmatization and morphological tagging that achieves state-of-the-art results on 20 languages.
Our paper describes the model in addition to training and decoding procedures.
arXiv Detail & Related papers (2019-04-04T02:03:19Z)