Cross-Lingual Transfer of Cognitive Processing Complexity
- URL: http://arxiv.org/abs/2302.12695v2
- Date: Mon, 27 Feb 2023 10:58:12 GMT
- Title: Cross-Lingual Transfer of Cognitive Processing Complexity
- Authors: Charlotte Pouw, Nora Hollenstein, Lisa Beinborn
- Abstract summary: We use sentence-level eye-tracking patterns as a cognitive indicator for structural complexity.
We show that the multilingual model XLM-RoBERTa can successfully predict varied patterns for 13 typologically diverse languages.
- Score: 11.939409227407769
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When humans read a text, their eye movements are influenced by the structural complexity of the input sentences. This cognitive phenomenon holds across languages, and recent studies indicate that multilingual language models utilize structural similarities between languages to facilitate cross-lingual transfer. We use sentence-level eye-tracking patterns as a cognitive indicator for structural complexity and show that the multilingual model XLM-RoBERTa can successfully predict varied patterns for 13 typologically diverse languages, despite being fine-tuned only on English data. We quantify the sensitivity of the model to structural complexity and distinguish a range of complexity characteristics. Our results indicate that the model develops a meaningful bias towards sentence length but also integrates cross-lingual differences. We conduct a control experiment with randomized word order and find that the model seems to additionally capture more complex structural information.
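The transfer setup can be made concrete with a small sketch: attach a regression head to XLM-RoBERTa, fine-tune it on English sentence-level eye-tracking features, and evaluate zero-shot on other languages. The feature names, mean pooling, and example sentence below are illustrative assumptions, not the paper's exact configuration:

```python
# Minimal sketch: sentence-level eye-tracking regression with XLM-R.
# Feature set, pooling, and example data are assumptions for illustration.
import torch
from torch import nn
from transformers import AutoTokenizer, XLMRobertaModel

FEATURES = ["fixation_count", "total_reading_time"]  # hypothetical targets

class SentenceRegressor(nn.Module):
    def __init__(self, n_targets: int):
        super().__init__()
        self.encoder = XLMRobertaModel.from_pretrained("xlm-roberta-base")
        self.head = nn.Linear(self.encoder.config.hidden_size, n_targets)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        mask = attention_mask.unsqueeze(-1).type_as(hidden)
        pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # mean pooling
        return self.head(pooled)  # one value per eye-tracking feature

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = SentenceRegressor(n_targets=len(FEATURES))

# After fine-tuning on English data only, evaluate zero-shot, e.g. on Dutch:
batch = tokenizer(["De kat slaapt op de mat."],
                  return_tensors="pt", padding=True)
with torch.no_grad():
    preds = model(batch["input_ids"], batch["attention_mask"])
print(dict(zip(FEATURES, preds.squeeze(0).tolist())))
```

The control experiment then amounts to repeating the evaluation on sentences with randomized word order and checking how much prediction quality degrades.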
Related papers
- Analyzing The Language of Visual Tokens [48.62180485759458]
We take a natural-language-centric approach to analyzing discrete visual languages.
We show that higher token innovation drives greater entropy and lower compression, with tokens predominantly representing object parts.
We also show that visual languages lack cohesive grammatical structures, leading to higher perplexity and weaker hierarchical organization compared to natural languages.
arXiv Detail & Related papers (2024-11-07T18:59:28Z)
- Evaluating Morphological Compositional Generalization in Large Language Models [17.507983593566223]
We investigate the morphological generalization abilities of large language models (LLMs) through the lens of compositionality.
We focus on agglutinative languages such as Turkish and Finnish.
Our analysis shows that LLMs struggle with morphological compositional generalization particularly when applied to novel word roots.
While models can identify individual morphological combinations better than chance, their performance lacks systematicity, leading to significant accuracy gaps compared to humans.
arXiv Detail & Related papers (2024-10-16T15:17:20Z)
- Exploring syntactic information in sentence embeddings through multilingual subject-verb agreement [1.4335183427838039]
We take the approach of developing curated synthetic data on a large scale, with specific properties.
We use a new multiple-choice task and datasets, Blackbird Language Matrices, to focus on a specific grammatical structural phenomenon.
We show that despite having been trained on multilingual texts in a consistent manner, multilingual pretrained language models have language-specific differences.
arXiv Detail & Related papers (2024-09-10T14:58:55Z)
- MulCogBench: A Multi-modal Cognitive Benchmark Dataset for Evaluating Chinese and English Computational Language Models [44.74364661212373]
This paper proposes MulCogBench, a cognitive benchmark dataset collected from native Chinese and English participants.
It encompasses a variety of cognitive data, including subjective semantic ratings, eye-tracking, functional magnetic resonance imaging (fMRI), and magnetoencephalography (MEG).
Results show that language models share significant similarities with human cognitive data, and the similarity patterns are modulated by the data modality and stimuli complexity.
arXiv Detail & Related papers (2024-03-02T07:49:57Z)
- Analyzing the Mono- and Cross-Lingual Pretraining Dynamics of Multilingual Language Models [73.11488464916668]
This study investigates the dynamics of the multilingual pretraining process.
We probe checkpoints taken throughout XLM-R pretraining using a suite of linguistic tasks (see the probing sketch after this entry).
Our analysis shows that the model achieves high in-language performance early on, with lower-level linguistic skills acquired before more complex ones.
arXiv Detail & Related papers (2022-05-24T03:35:00Z)
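A hedged sketch of the probing recipe from the entry above: freeze an encoder checkpoint, extract sentence representations, and fit a linear classifier on a linguistic task. The checkpoint list, task, sentences, and labels are toy stand-ins, not the paper's actual probes:

```python
# Toy probing sketch: a linear classifier on frozen checkpoint representations.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoTokenizer, XLMRobertaModel

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

def sentence_features(model, sentences):
    batch = tokenizer(sentences, return_tensors="pt", padding=True)
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1).float()
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()  # mean pooling

# A real run would iterate over checkpoints saved during pretraining;
# the released final model stands in here.
for ckpt in ["xlm-roberta-base"]:
    model = XLMRobertaModel.from_pretrained(ckpt).eval()
    X = sentence_features(model, ["She runs fast.", "They run fast.",
                                  "He walks home.", "We walk home."])
    y = [0, 1, 0, 1]  # toy labels: singular vs. plural subject
    probe = LogisticRegression(max_iter=1000).fit(X, y)
    print(ckpt, probe.score(X, y))  # probing accuracy for this checkpoint
```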
- A Massively Multilingual Analysis of Cross-linguality in Shared Embedding Space [61.18554842370824]
In cross-lingual language models, representations for many different languages live in the same space.
We compute a task-based measure of cross-lingual alignment in the form of bitext retrieval performance (see the retrieval sketch after this entry).
We examine a range of linguistic, quasi-linguistic, and training-related features as potential predictors of these alignment metrics.
arXiv Detail & Related papers (2021-09-13T21:05:37Z)
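A minimal sketch of the bitext retrieval measure from the entry above: embed aligned sentence pairs, then score how often the nearest neighbour across languages is the true translation. The toy sentence pairs and mean-pooled XLM-R embeddings are assumptions for illustration:

```python
# Toy bitext retrieval: nearest-neighbour accuracy between sentence embeddings
# of aligned English/German pairs. Model choice and pooling are assumptions.
import numpy as np
import torch
from transformers import AutoTokenizer, XLMRobertaModel

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = XLMRobertaModel.from_pretrained("xlm-roberta-base").eval()

def embed(sentences):
    batch = tokenizer(sentences, return_tensors="pt", padding=True)
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1).float()
    vecs = ((hidden * mask).sum(1) / mask.sum(1)).numpy()
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)  # unit length

src = embed(["The cat sleeps.", "I like coffee."])      # English side
tgt = embed(["Die Katze schläft.", "Ich mag Kaffee."])  # German side
sims = src @ tgt.T                                      # cosine similarities
accuracy = float((sims.argmax(axis=1) == np.arange(len(src))).mean())
print(f"retrieval accuracy: {accuracy:.2f}")  # 1.0 = every pair retrieved
```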
- Linguistic Typology Features from Text: Inferring the Sparse Features of World Atlas of Language Structures [73.06435180872293]
We construct a recurrent neural network predictor based on byte embeddings and convolutional layers (see the sketch after this entry).
We show that some features from various linguistic types can be predicted reliably.
arXiv Detail & Related papers (2020-04-30T21:00:53Z)
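A rough sketch of the architecture named in the entry above: byte embeddings feed convolutional layers, a recurrent layer summarises the text, and a linear layer predicts typological features. The dimensions, single GRU layer, and binary toy feature are assumptions, not the paper's exact design:

```python
# Byte-level typology predictor sketch: embeddings -> conv -> GRU -> linear.
# Predicts one hypothetical binary WALS-style feature (e.g. SOV vs. SVO order).
import torch
from torch import nn

class WALSPredictor(nn.Module):
    def __init__(self, n_features: int = 1, dim: int = 64):
        super().__init__()
        self.byte_emb = nn.Embedding(256, dim)  # one slot per byte value
        self.conv = nn.Conv1d(dim, dim, kernel_size=5, padding=2)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, n_features)

    def forward(self, bytes_batch):                  # (batch, seq_len) ints
        x = self.byte_emb(bytes_batch)               # (batch, seq, dim)
        x = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        _, h = self.rnn(x)                           # final hidden state
        return self.out(h.squeeze(0))                # feature logits

text = "Une phrase en français.".encode("utf-8")
batch = torch.tensor([list(text)])                   # raw bytes as input
logits = WALSPredictor()(batch)
print(torch.sigmoid(logits))  # probability of the toy feature being present
```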
- Bridging Linguistic Typology and Multilingual Machine Translation with Multi-View Language Representations [83.27475281544868]
We use singular vector canonical correlation analysis (SVCCA) to study what kind of information is induced from each source (see the SVCCA sketch after this entry).
We observe that our representations embed typology and strengthen correlations with language relationships.
We then take advantage of our multi-view language vector space for multilingual machine translation, where we achieve competitive overall translation accuracy.
arXiv Detail & Related papers (2020-04-30T16:25:39Z)
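SVCCA, as used in the entry above, reduces each "view" of the language vectors with SVD and then measures canonical correlations between the reduced subspaces. A compact numpy sketch on toy data; the matrices stand in for, e.g., typology-based and MT-learned language representations:

```python
# Minimal SVCCA sketch: SVD-reduce two views, then compute their canonical
# correlations via the SVD of the cross-correlation of the reduced bases.
import numpy as np

def svcca(X, Y, k=10):
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    # Keep the top-k left singular directions of each view (already whitened).
    Ux = np.linalg.svd(X, full_matrices=False)[0][:, :k]
    Uy = np.linalg.svd(Y, full_matrices=False)[0][:, :k]
    # Singular values of the cross-correlation = canonical correlations.
    return np.linalg.svd(Ux.T @ Uy, compute_uv=False)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))     # view 1: 100 languages x 50 dims
Y = X @ rng.normal(size=(50, 30))  # view 2 shares structure with view 1
print(svcca(X, Y, k=5).round(3))   # values near 1 indicate aligned views
```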
- Learning Music Helps You Read: Using Transfer to Study Linguistic Structure in Language Models [27.91397366776451]
Training LSTMs on latent structure (MIDI music or Java code) improves test performance on natural language.
Experiments on transfer between natural languages controlling for vocabulary overlap show that zero-shot performance on a test language is highly correlated with typological similarity to the training language.
arXiv Detail & Related papers (2020-04-30T06:24:03Z)
- Evaluating Transformer-Based Multilingual Text Classification [55.53547556060537]
We argue that NLP tools perform unequally across languages with different syntactic and morphological structures.
We calculate word order and morphological similarity indices to aid our empirical study.
arXiv Detail & Related papers (2020-04-29T03:34:53Z)