Cognate Transformer for Automated Phonological Reconstruction and
Cognate Reflex Prediction
- URL: http://arxiv.org/abs/2310.07487v2
- Date: Wed, 18 Oct 2023 06:48:37 GMT
- Title: Cognate Transformer for Automated Phonological Reconstruction and
Cognate Reflex Prediction
- Authors: V.S.D.S. Mahesh Akavarapu and Arnab Bhattacharya
- Abstract summary: We adapt MSA Transformer, a protein language model, to the problem of automated phonological reconstruction.
MSA Transformer trains on multiple sequence alignments as input and is thus well suited to aligned cognate words.
We also apply the model on another associated task, namely, cognate reflex prediction, where a reflex word in a daughter language is predicted based on cognate words from other daughter languages.
- Score: 4.609569810881602
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Phonological reconstruction is one of the central problems in historical
linguistics where a proto-word of an ancestral language is determined from the
observed cognate words of daughter languages. Computational approaches to
historical linguistics attempt to automate the task by learning models on
available linguistic data. Several ideas and techniques drawn from
computational biology have been successfully applied in the area of
computational historical linguistics. Following these lines, we adapt MSA
Transformer, a protein language model, to the problem of automated phonological
reconstruction. MSA Transformer trains on multiple sequence alignments as input
and is thus well suited to aligned cognate words. We hence name our model
Cognate Transformer. We also apply the model to another associated task,
namely cognate reflex prediction, where a reflex word in a daughter language
is predicted from the cognate words of the other daughter languages. We show
that our model outperforms existing models on both tasks, especially when it
is pre-trained on a masked word prediction task.
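To make the setup concrete, here is a minimal sketch, not the authors' code, of how a pre-aligned cognate set can be laid out as an MSA-style grid and corrupted for the tasks above: random cell masking mirrors the masked word pre-training, and masking an entire row mirrors cognate reflex prediction. The language names, phoneme strings, and the gap and mask symbols are invented for illustration.

```python
# Illustrative sketch only: how aligned cognate words form an MSA-style grid.
import random

GAP, MASK = "-", "[M]"

# One cognate set, pre-aligned across daughter languages: rows are languages,
# columns are alignment positions, exactly as in a protein MSA.
# (Hypothetical language names and phoneme strings.)
cognate_msa = {
    "LangA": ["p", "a", "t", "e", "r"],
    "LangB": ["f", "a", "d", "a", "r"],
    "LangC": ["p", "i", "t", GAP, "r"],
}

def mask_random(msa, p=0.3, seed=0):
    """Pre-training-style corruption: randomly mask non-gap cells and
    collect the (language, position, phoneme) targets to be predicted."""
    rng = random.Random(seed)
    corrupted = {lang: row[:] for lang, row in msa.items()}
    targets = []
    for lang, row in corrupted.items():
        for j, tok in enumerate(row):
            if tok != GAP and rng.random() < p:
                targets.append((lang, j, tok))
                row[j] = MASK
    return corrupted, targets

def mask_language(msa, lang):
    """Cognate reflex prediction: hide one daughter language's word entirely
    and predict it from the remaining aligned cognates."""
    corrupted = {l: row[:] for l, row in msa.items()}
    corrupted[lang] = [MASK] * len(msa[lang])
    return corrupted

corrupted, targets = mask_random(cognate_msa)
print("pre-training targets:", targets)
for lang, row in mask_language(cognate_msa, "LangB").items():
    print(lang, " ".join(row))
```

In this picture, proto-word reconstruction has the same shape as reflex prediction, with the ancestral language's row as the one being predicted. The backbone the paper adapts, MSA Transformer, interleaves attention along the two axes of such a grid. The toy block below, again an illustrative sketch rather than the paper's implementation, shows that row/column pattern with stock PyTorch layers; the dimensions and head counts are arbitrary.

```python
# Toy row/column (axial) attention over an aligned grid; not the paper's code.
import torch
import torch.nn as nn

class AxialBlock(nn.Module):
    """One interleaved row-attention + column-attention block."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):  # x: (languages, positions, dim)
        # Row attention: each language's word attends along its own positions.
        r, _ = self.row_attn(x, x, x)
        x = x + r
        # Column attention: each alignment column attends across languages.
        xt = x.transpose(0, 1)               # (positions, languages, dim)
        c, _ = self.col_attn(xt, xt, xt)
        return x + c.transpose(0, 1)

msa = torch.randn(3, 5, 64)                  # 3 languages, 5 aligned positions
print(AxialBlock()(msa).shape)               # torch.Size([3, 5, 64])
```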
Related papers
- Language Evolution with Deep Learning [49.879239655532324]
Computational modeling plays an essential role in the study of language emergence.
It aims to simulate the conditions and learning processes that could trigger the emergence of a structured language.
This chapter explores another class of computational models that have recently revolutionized the field of machine learning: deep learning models.
arXiv Detail & Related papers (2024-03-18T16:52:54Z)
- Automated Cognate Detection as a Supervised Link Prediction Task with Cognate Transformer [4.609569810881602]
Identification of cognates across related languages is one of the primary problems in historical linguistics.
We present a transformer-based architecture inspired by computational biology for the task of automated cognate detection.
arXiv Detail & Related papers (2024-02-05T11:47:36Z) - Neural Unsupervised Reconstruction of Protolanguage Word Forms [34.66200889614538]
We present a state-of-the-art neural approach to the unsupervised reconstruction of ancient word forms.
We extend this work with neural models that can capture more complicated phonological and morphological changes.
arXiv Detail & Related papers (2022-11-16T05:38:51Z)
- Is neural language acquisition similar to natural? A chronological probing study [0.0515648410037406]
We present a chronological probing study of English transformer models, namely MultiBERT and T5.
We compare the linguistic information the models acquire over the course of training on corpora.
The results show that (1) linguistic information is acquired in the early stages of training, and (2) both language models capture features from various levels of language.
arXiv Detail & Related papers (2022-07-01T17:24:11Z)
- Modeling Target-Side Morphology in Neural Machine Translation: A Comparison of Strategies [72.56158036639707]
Morphologically rich languages pose difficulties to machine translation.
A large number of differently inflected surface word forms entails a larger vocabulary.
Some inflected forms of infrequent terms typically do not appear in the training corpus.
Linguistic agreement requires the system to correctly match the grammatical categories between inflected word forms in the output sentence.
arXiv Detail & Related papers (2022-03-25T10:13:20Z)
- Utilizing Wordnets for Cognate Detection among Indian Languages [50.83320088758705]
We detect cognate word pairs among ten Indian languages with Hindi.
We use deep learning methodologies to predict whether a word pair is cognate or not.
We report improved performance of up to 26%.
arXiv Detail & Related papers (2021-12-30T16:46:28Z)
- Generative latent neural models for automatic word alignment [0.0]
Variational autoencoders have recently been used in various areas of natural language processing to learn, in an unsupervised way, latent representations that are useful for language generation tasks.
In this paper, we study these models for the task of word alignment and propose and assess several evolutions of a vanilla variational autoencoder.
We demonstrate that these techniques can yield competitive results compared to Giza++ and to a strong neural network alignment system for two language pairs.
arXiv Detail & Related papers (2020-09-28T07:54:09Z)
- Grounded Compositional Outputs for Adaptive Language Modeling [59.02706635250856]
A language model's vocabulary, typically selected before training and permanently fixed thereafter, affects its size.
We propose a fully compositional output embedding layer for language models.
To our knowledge, the result is the first word-level language model with a size that does not depend on the training vocabulary.
arXiv Detail & Related papers (2020-09-24T07:21:14Z)
- Constructing a Family Tree of Ten Indo-European Languages with Delexicalized Cross-linguistic Transfer Patterns [57.86480614673034]
We formalize the delexicalized transfer as interpretable tree-to-string and tree-to-tree patterns.
This allows us to quantitatively probe cross-linguistic transfer and extend inquiries of Second Language Acquisition.
arXiv Detail & Related papers (2020-07-17T15:56:54Z)
- Mechanisms for Handling Nested Dependencies in Neural-Network Language Models and Humans [75.15855405318855]
We studied whether a modern artificial neural network trained with "deep learning" methods mimics a central aspect of human sentence processing.
Although the network was solely trained to predict the next word in a large corpus, analysis showed the emergence of specialized units that successfully handled local and long-distance syntactic agreement.
We tested the model's predictions in a behavioral experiment where humans detected violations in number agreement in sentences with systematic variations in the singular/plural status of multiple nouns.
arXiv Detail & Related papers (2020-06-19T12:00:05Z)