Do Neural Language Models Show Preferences for Syntactic Formalisms?
- URL: http://arxiv.org/abs/2004.14096v1
- Date: Wed, 29 Apr 2020 11:37:53 GMT
- Title: Do Neural Language Models Show Preferences for Syntactic Formalisms?
- Authors: Artur Kulmizev, Vinit Ravishankar, Mostafa Abdou, Joakim Nivre
- Abstract summary: We study the extent to which the semblance of syntactic structure captured by language models adheres to a surface-syntactic or deep syntactic style of analysis.
We apply a probe for extracting directed dependency trees to BERT and ELMo models trained on 13 different languages.
We find that both models exhibit a preference for UD over SUD, with interesting variations across languages and layers.
- Score: 14.388237635684737
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent work on the interpretability of deep neural language models has
concluded that many properties of natural language syntax are encoded in their
representational spaces. However, such studies often suffer from limited scope
by focusing on a single language and a single linguistic formalism. In this
study, we aim to investigate the extent to which the semblance of syntactic
structure captured by language models adheres to a surface-syntactic or deep
syntactic style of analysis, and whether the patterns are consistent across
different languages. We apply a probe for extracting directed dependency trees
to BERT and ELMo models trained on 13 different languages, probing for two
different syntactic annotation styles: Universal Dependencies (UD),
prioritizing deep syntactic relations, and Surface-Syntactic Universal
Dependencies (SUD), focusing on surface structure. We find that both models
exhibit a preference for UD over SUD, with interesting variations across
languages and layers, and that the strength of this preference is correlated
with differences in tree shape.
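The abstract describes applying a probe that extracts directed dependency trees from BERT and ELMo representations and scoring them against UD and SUD annotations. The sketch below is a rough illustration only, not the authors' actual probe: it assumes a Hewitt-and-Manning-style distance probe, an undirected minimum spanning tree, and unlabeled attachment scoring, and all toy data, shapes, and names are hypothetical.

```python
# A minimal sketch (assumed setup, not the paper's exact probe): project contextual
# embeddings with a probe matrix B, extract an undirected tree as a minimum spanning
# tree over pairwise probe distances, and score its edges against UD and SUD gold
# trees with an unlabeled, undirected attachment score.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree


def predicted_edges(embeddings: np.ndarray, B: np.ndarray) -> set:
    """Extract undirected tree edges from token embeddings via an MST over probe distances."""
    h = embeddings @ B.T                         # (n_tokens, probe_dim)
    diff = h[:, None, :] - h[None, :, :]         # pairwise differences
    dist = np.sqrt((diff ** 2).sum(axis=-1))     # pairwise L2 distances
    np.fill_diagonal(dist, 0.0)                  # zero diagonal = no self edges
    mst = minimum_spanning_tree(dist).toarray()
    rows, cols = np.nonzero(mst)
    return {frozenset((int(i), int(j))) for i, j in zip(rows, cols)}


def gold_edges(heads: list) -> set:
    """Gold analysis as a head index per token, with -1 marking the root."""
    return {frozenset((i, h)) for i, h in enumerate(heads) if h >= 0}


def uuas(pred: set, gold: set) -> float:
    """Undirected unlabeled attachment score: fraction of gold edges recovered."""
    return len(pred & gold) / len(gold)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    emb = rng.normal(size=(6, 768))              # toy stand-in for one layer of BERT/ELMo
    B = rng.normal(size=(64, 768))               # toy probe matrix; normally trained
    ud_heads = [2, 2, -1, 2, 5, 3]               # hypothetical UD heads for 6 tokens
    sud_heads = [1, 2, -1, 2, 5, 3]              # hypothetical SUD heads for the same tokens
    pred = predicted_edges(emb, B)
    print("UUAS vs UD: ", round(uuas(pred, gold_edges(ud_heads)), 3))
    print("UUAS vs SUD:", round(uuas(pred, gold_edges(sud_heads)), 3))
```

In the full setup, the probe parameters would presumably be trained per formalism and per layer on treebank data, with the resulting attachment scores compared across the 13 languages.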
Related papers
- Analyzing The Language of Visual Tokens [48.62180485759458]
We take a natural-language-centric approach to analyzing discrete visual languages.
We show that higher token innovation drives greater entropy and lower compression, with tokens predominantly representing object parts.
We also show that visual languages lack cohesive grammatical structures, leading to higher perplexity and weaker hierarchical organization compared to natural languages.
arXiv Detail & Related papers (2024-11-07T18:59:28Z)
- Language Embeddings Sometimes Contain Typological Generalizations [0.0]
We train neural models for a range of natural language processing tasks on a massively multilingual dataset of Bible translations in 1295 languages.
The learned language representations are then compared to existing typological databases as well as to a novel set of quantitative syntactic and morphological features.
We conclude that some generalizations are surprisingly close to traditional features from linguistic typology, but that most models, as well as those of previous work, do not appear to have made linguistically meaningful generalizations.
arXiv Detail & Related papers (2023-01-19T15:09:59Z)
- Causal Analysis of Syntactic Agreement Neurons in Multilingual Language Models [28.036233760742125]
We causally probe multilingual language models (XGLM and multilingual BERT) across various languages.
We find significant neuron overlap across languages in autoregressive multilingual language models, but not masked language models.
arXiv Detail & Related papers (2022-10-25T20:43:36Z)
- Universal and Independent: Multilingual Probing Framework for Exhaustive Model Interpretation and Evaluation [0.04199844472131922]
We present and apply a GUI-assisted framework that allows us to easily probe a massive number of languages.
Most of the regularities revealed in the mBERT model are typical of Western European languages.
Our framework can be integrated with the existing probing toolboxes, model cards, and leaderboards.
arXiv Detail & Related papers (2022-10-24T13:41:17Z)
- Transparency Helps Reveal When Language Models Learn Meaning [71.96920839263457]
Our systematic experiments with synthetic data reveal that, with languages where all expressions have context-independent denotations, both autoregressive and masked language models learn to emulate semantic relations between expressions.
Turning to natural language, our experiments with a specific phenomenon -- referential opacity -- add to the growing body of evidence that current language models do not well-represent natural language semantics.
arXiv Detail & Related papers (2022-10-14T02:35:19Z)
- Integrating Linguistic Theory and Neural Language Models [2.870517198186329]
I present several case studies to illustrate how theoretical linguistics and neural language models are still relevant to each other.
This thesis contributes three studies that explore different aspects of the syntax-semantics interface in language models.
arXiv Detail & Related papers (2022-07-20T04:20:46Z)
- Discovering Representation Sprachbund For Multilingual Pre-Training [139.05668687865688]
We generate language representation from multilingual pre-trained models and conduct linguistic analysis.
We cluster all the target languages into multiple groups and name each group as a representation sprachbund.
Experiments are conducted on cross-lingual benchmarks and significant improvements are achieved compared to strong baselines.
arXiv Detail & Related papers (2021-09-01T09:32:06Z)
- Joint Universal Syntactic and Semantic Parsing [39.39769254704693]
We exploit the rich syntactic and semantic annotations contained in the Universal Decompositional Semantics dataset.
We analyze the behaviour of a joint model of syntax and semantics, finding patterns supported by linguistic theory.
We then investigate to what degree joint modeling generalizes to a multilingual setting, where we find similar trends across 8 languages.
arXiv Detail & Related papers (2021-04-12T17:56:34Z)
- Constructing a Family Tree of Ten Indo-European Languages with Delexicalized Cross-linguistic Transfer Patterns [57.86480614673034]
We formalize the delexicalized transfer as interpretable tree-to-string and tree-to-tree patterns.
This allows us to quantitatively probe cross-linguistic transfer and extend inquiries of Second Language Acquisition.
arXiv Detail & Related papers (2020-07-17T15:56:54Z)
- Linguistic Typology Features from Text: Inferring the Sparse Features of World Atlas of Language Structures [73.06435180872293]
We construct a recurrent neural network predictor based on byte embeddings and convolutional layers.
We show that some features from various linguistic types can be predicted reliably.
arXiv Detail & Related papers (2020-04-30T21:00:53Z)
- Evaluating Transformer-Based Multilingual Text Classification [55.53547556060537]
We argue that NLP tools perform unequally across languages with different syntactic and morphological structures.
We calculate word order and morphological similarity indices to aid our empirical study.
arXiv Detail & Related papers (2020-04-29T03:34:53Z)
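The last entry above mentions word order and morphological similarity indices without defining them here. As one hedged example of what a simple word-order index over a dependency treebank could look like (an assumed statistic, not necessarily the one used in that paper), the sketch below computes the share of dependents that precede their head in a CoNLL-U file; the file path in the usage note is hypothetical.

```python
# Illustrative sketch of one possible word-order index: the proportion of dependents
# that precede their head in a CoNLL-U treebank. This is an assumed statistic, not
# necessarily one of the indices used in the paper above.
def head_final_ratio(conllu_path: str) -> float:
    dependents_before_head = 0
    total = 0
    with open(conllu_path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue                          # skip blank lines and sentence metadata
            cols = line.split("\t")
            if "-" in cols[0] or "." in cols[0]:
                continue                          # skip multiword tokens and empty nodes
            token_id, head = int(cols[0]), int(cols[6])
            if head == 0:
                continue                          # the root has no head to compare against
            total += 1
            if token_id < head:
                dependents_before_head += 1
    return dependents_before_head / total if total else 0.0


# Hypothetical usage: print(head_final_ratio("en_ewt-ud-train.conllu"))
```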
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.