Is neural language acquisition similar to natural? A chronological
probing study
- URL: http://arxiv.org/abs/2207.00560v1
- Date: Fri, 1 Jul 2022 17:24:11 GMT
- Title: Is neural language acquisition similar to natural? A chronological
probing study
- Authors: Ekaterina Voloshina, Oleg Serikov, Tatiana Shavrina
- Abstract summary: We present a chronological probing study of English transformer models such as MultiBERT and T5.
We compare the linguistic information the models learn over the course of training on their corpora.
The results show that 1) linguistic information is acquired in the early stages of training, and 2) both language models capture features from various levels of language.
- Score: 0.0515648410037406
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The probing methodology allows one to obtain a partial representation of
linguistic phenomena stored in the inner layers of the neural network, using
external classifiers and statistical analysis. Pre-trained transformer-based
language models are widely used for both natural language understanding (NLU)
and natural language generation (NLG) tasks, which makes them the most common
choice for downstream applications. However, little analysis has been carried
out on whether these models were pre-trained sufficiently or whether the
knowledge they contain correlates with linguistic theory. We present a
chronological probing study of English transformer models such as MultiBERT
and T5. We compare the linguistic information the models acquire at successive
stages of training on their corpora. The results show that 1) linguistic
information is acquired in the early stages of training, and 2) both language
models capture features from various levels of language, including morphology,
syntax, and even discourse, while they can also inconsistently fail on tasks
that are perceived as easy. We also introduce an open-source framework for
chronological probing research that is compatible with other
transformer-based models.
https://github.com/EkaterinaVoloshina/chronological_probing
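To make the probing setup concrete, below is a minimal, hypothetical sketch of a single chronological probing step: take one intermediate MultiBERT training checkpoint, extract hidden states from a chosen layer, and fit an external linear classifier on a toy linguistic task. The checkpoint id, layer index, probing task, and example sentences are illustrative assumptions rather than the authors' released framework (see the repository linked above for that); only the Hugging Face transformers and scikit-learn calls are standard.

```python
# Hypothetical sketch of one probing step; the checkpoint id, layer, task, and
# data below are illustrative assumptions, not the paper's actual setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

CHECKPOINT = "google/multiberts-seed_0-step_20k"  # assumed intermediate checkpoint

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModel.from_pretrained(CHECKPOINT, output_hidden_states=True)

# Toy probing task: is the subject of the sentence plural? (1 = plural)
sentences = ["The dog barks loudly.", "The dogs bark loudly.",
             "A child reads a book.", "The children read books."]
labels = np.array([0, 1, 0, 1])

def sentence_embeddings(texts, layer=6):
    """Mean-pool the hidden states of one layer as sentence representations."""
    feats = []
    for text in texts:
        inputs = tokenizer(text, return_tensors="pt")
        hidden = model(**inputs).hidden_states[layer]  # shape (1, seq_len, dim)
        feats.append(hidden.mean(dim=1).squeeze(0).detach().numpy())
    return np.stack(feats)

X = sentence_embeddings(sentences)
probe = LogisticRegression(max_iter=1000).fit(X, labels)
print("probe train accuracy:", probe.score(X, labels))

# Repeating this over checkpoints (step_0k, step_20k, ...) and over layers, and
# evaluating the probe on held-out data, gives the chronological picture: how
# early in training each linguistic feature becomes linearly decodable.
```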
Related papers
- Training Neural Networks as Recognizers of Formal Languages [87.06906286950438]
Formal language theory pertains specifically to recognizers, yet it is common to instead use proxy tasks that are similar only in an informal sense.
We correct this mismatch by training and evaluating neural networks directly as binary classifiers of strings.
arXiv Detail & Related papers (2024-11-11T16:33:25Z) - Transparency at the Source: Evaluating and Interpreting Language Models
With Access to the True Distribution [4.01799362940916]
We present a setup for training, evaluating and interpreting neural language models, that uses artificial, language-like data.
The data is generated using a massive probabilistic grammar that is itself derived from a large natural language corpus.
With access to the underlying true source, our results show striking differences in learning dynamics between different classes of words.
arXiv Detail & Related papers (2023-10-23T12:03:01Z) - Integrating Linguistic Theory and Neural Language Models [2.870517198186329]
I present several case studies to illustrate how theoretical linguistics and neural language models are still relevant to each other.
This thesis contributes three studies that explore different aspects of the syntax-semantics interface in language models.
arXiv Detail & Related papers (2022-07-20T04:20:46Z) - Dependency-based Mixture Language Models [53.152011258252315]
We introduce the Dependency-based Mixture Language Models.
In detail, we first train neural language models with a novel dependency modeling objective.
We then formulate the next-token probability by mixing the previous dependency modeling probability distributions with self-attention.
arXiv Detail & Related papers (2022-03-19T06:28:30Z) - Towards Zero-shot Language Modeling [90.80124496312274]
We construct a neural model that is inductively biased towards learning human languages.
This inductive bias is inferred from a sample of typologically diverse training languages.
We harness additional language-specific side information as distant supervision for held-out languages.
arXiv Detail & Related papers (2021-08-06T23:49:18Z) - Examining the Inductive Bias of Neural Language Models with Artificial
Languages [42.699545862522214]
We propose a novel method for investigating the inductive biases of language models using artificial languages.
This constitutes a fully controlled causal framework, and demonstrates how grammar engineering can serve as a useful tool for analyzing neural models.
arXiv Detail & Related papers (2021-06-02T09:34:32Z) - Unnatural Language Inference [48.45003475966808]
We find that state-of-the-art NLI models, such as RoBERTa and BART, are invariant to, and sometimes even perform better on, examples with randomly reordered words.
Our findings call into question the idea that our natural language understanding models, and the tasks used for measuring their progress, genuinely require a human-like understanding of syntax.
arXiv Detail & Related papers (2020-12-30T20:40:48Z) - Learning Music Helps You Read: Using Transfer to Study Linguistic
Structure in Language Models [27.91397366776451]
Training LSTMs on non-linguistic data with latent structure (MIDI music or Java code) improves test performance on natural language.
Experiments on transfer between natural languages controlling for vocabulary overlap show that zero-shot performance on a test language is highly correlated with typological similarity to the training language.
arXiv Detail & Related papers (2020-04-30T06:24:03Z) - Information-Theoretic Probing for Linguistic Structure [74.04862204427944]
We propose an information-theoretic operationalization of probing as estimating mutual information.
We evaluate on a set of ten typologically diverse languages often underrepresented in NLP research.
arXiv Detail & Related papers (2020-04-07T01:06:36Z) - A Simple Joint Model for Improved Contextual Neural Lemmatization [60.802451210656805]
We present a simple joint neural model for lemmatization and morphological tagging that achieves state-of-the-art results on 20 languages.
Our paper describes the model in addition to training and decoding procedures.
arXiv Detail & Related papers (2019-04-04T02:03:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.