Probing Across Time: What Does RoBERTa Know and When?
- URL: http://arxiv.org/abs/2104.07885v1
- Date: Fri, 16 Apr 2021 04:26:39 GMT
- Title: Probing Across Time: What Does RoBERTa Know and When?
- Authors: Leo Z. Liu, Yizhong Wang, Jungo Kasai, Hannaneh Hajishirzi, Noah A.
Smith
- Abstract summary: We show that linguistic knowledge is acquired fast, stably, and robustly across domains. Facts and commonsense are slower and more domain-sensitive.
We believe that probing-across-time analyses can help researchers understand the complex, intermingled learning that these models undergo and guide us toward more efficient approaches that accomplish necessary learning faster.
- Score: 70.20775905353794
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Models of language trained on very large corpora have been demonstrated
useful for NLP. As fixed artifacts, they have become the object of intense
study, with many researchers "probing" the extent to which they acquire and
readily demonstrate linguistic abstractions, factual and commonsense knowledge,
and reasoning abilities. Building on this line of work, we consider a
new question: for types of knowledge a language model learns, when during
(pre)training are they acquired? We plot probing performance across iterations,
using RoBERTa as a case study. Among our findings: linguistic knowledge is
acquired fast, stably, and robustly across domains. Facts and commonsense are
slower and more domain-sensitive. Reasoning abilities are, in general, not
stably acquired. As new datasets, pretraining protocols, and probes emerge, we
believe that probing-across-time analyses can help researchers understand the
complex, intermingled learning that these models undergo and guide us toward
more efficient approaches that accomplish necessary learning faster.
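As a rough illustration of the probing-across-time setup the abstract describes, here is a minimal sketch: extract frozen sentence representations from each saved pretraining checkpoint and fit a lightweight classifier on top, tracking probe accuracy over iterations. The checkpoint paths and the toy tense task below are hypothetical placeholders (RoBERTa's public release includes only the final model), and the paper's actual probes, tasks, and evaluation differ.

```python
# Minimal sketch of "probing across time": train a frozen-feature linear probe
# on representations from each pretraining checkpoint and compare accuracies.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Toy probing task (illustrative only, not from the paper): classify verb tense.
train_texts = ["She walked to work.", "She walks to work.",
               "They played chess.", "They play chess."]
train_labels = [0, 1, 0, 1]  # 0 = past, 1 = present
test_texts = ["He cooked dinner.", "He cooks dinner."]
test_labels = [0, 1]

def sentence_features(model, tokenizer, texts):
    """Mean-pool the final hidden layer; the encoder stays frozen."""
    batch = tokenizer(texts, padding=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state       # (batch, seq, dim)
    mask = batch["attention_mask"].unsqueeze(-1)
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()

# Hypothetical paths to checkpoints saved at different pretraining iterations;
# only "roberta-base" (the final model) is publicly available.
checkpoints = ["ckpt_10k_steps", "ckpt_100k_steps", "roberta-base"]

for name in checkpoints:
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModel.from_pretrained(name).eval()
    probe = LogisticRegression(max_iter=1000)
    probe.fit(sentence_features(model, tokenizer, train_texts), train_labels)
    preds = probe.predict(sentence_features(model, tokenizer, test_texts))
    print(name, "probe accuracy:", accuracy_score(test_labels, preds))
```

Plotting the printed accuracies against pretraining step count yields the kind of across-time curve the paper analyzes, one curve per knowledge type.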
Related papers
- Subspace Chronicles: How Linguistic Information Emerges, Shifts and
Interacts during Language Model Training [56.74440457571821]
We analyze tasks covering syntax, semantics and reasoning, across 2M pre-training steps and five seeds.
We identify critical learning phases across tasks and time, during which subspaces emerge, share information, and later disentangle to specialize.
Our findings have implications for model interpretability, multi-task learning, and learning from limited data.
arXiv Detail & Related papers (2023-10-25T09:09:55Z)
- Large Language Models Can be Lazy Learners: Analyze Shortcuts in In-Context Learning [28.162661418161466]
Large language models (LLMs) have recently shown great potential for in-context learning.
This paper investigates the reliance of LLMs on shortcuts or spurious correlations within prompts.
We uncover a surprising finding that larger models are more likely to utilize shortcuts in prompts during inference.
arXiv Detail & Related papers (2023-05-26T20:56:30Z)
- ALERT: Adapting Language Models to Reasoning Tasks [43.8679673685468]
ALERT is a benchmark and suite of analyses for assessing language models' reasoning ability.
ALERT provides a test bed to assess any language model on fine-grained reasoning skills.
We find that language models acquire more reasoning skills during the finetuning stage than during pretraining.
arXiv Detail & Related papers (2022-12-16T05:15:41Z)
- Shortcut Learning of Large Language Models in Natural Language Understanding [119.45683008451698]
Large language models (LLMs) have achieved state-of-the-art performance on a series of natural language understanding tasks.
They might rely on dataset bias and artifacts as shortcuts for prediction.
This has significantly affected their generalizability and adversarial robustness.
arXiv Detail & Related papers (2022-08-25T03:51:39Z)
- Is neural language acquisition similar to natural? A chronological probing study [0.0515648410037406]
We present a chronological probing study of transformer English models such as MultiBERT and T5.
We compare the linguistic information the models acquire over the course of training on their corpora.
The results show that 1) linguistic information is acquired in the early stages of training, and 2) both language models demonstrate the ability to capture features from various levels of language.
arXiv Detail & Related papers (2022-07-01T17:24:11Z)
- A Latent-Variable Model for Intrinsic Probing [93.62808331764072]
We propose a novel latent-variable formulation for constructing intrinsic probes.
We find empirical evidence that pre-trained representations develop a cross-lingually entangled notion of morphosyntax.
arXiv Detail & Related papers (2022-01-20T15:01:12Z)
- X-FACTR: Multilingual Factual Knowledge Retrieval from Pretrained Language Models [103.75890012041366]
Language models (LMs) have proven surprisingly successful at capturing factual knowledge.
However, studies on LMs' factual representation ability have almost invariably been performed on English.
We create a benchmark of cloze-style probes for 23 typologically diverse languages (a minimal cloze-probe sketch appears after this list).
arXiv Detail & Related papers (2020-10-13T05:29:56Z)
- Information-Theoretic Probing for Linguistic Structure [74.04862204427944]
We propose an information-theoretic operationalization of probing as estimating mutual information.
We evaluate on a set of ten typologically diverse languages often underrepresented in NLP research.
arXiv Detail & Related papers (2020-04-07T01:06:36Z)
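The Information-Theoretic Probing entry above treats probing as estimating mutual information. As a brief sketch of that framing (standard information-theoretic identities, stated here as background rather than quoted from the paper), for a linguistic property T and a representation R:

\[
I(T; R) \;=\; H(T) - H(T \mid R) \;\ge\; H(T) - H_{q}(T \mid R),
\]

where H_q(T | R) is the cross-entropy achieved by any probe q. Because cross-entropy upper-bounds the true conditional entropy, a more accurate probe yields a tighter lower bound on the mutual information between the representation and the linguistic property.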
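The X-FACTR entry above builds cloze-style probes for factual knowledge. Below is a minimal single-language sketch, assuming the Hugging Face transformers fill-mask pipeline and the public roberta-base checkpoint rather than X-FACTR's own multilingual templates and models.

```python
# Cloze-style factual probe with a masked LM; roberta-base uses <mask> as its
# mask token. X-FACTR pairs such templates with multilingual models across 23
# languages; this single English prompt is only an illustration.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="roberta-base")
for candidate in fill_mask("The capital of France is <mask>.", top_k=3):
    print(f"{candidate['token_str']!r}  p={candidate['score']:.3f}")
```

A probe of this kind counts the fact as retrieved when the gold answer appears among the model's top predictions for the masked slot.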
This list is automatically generated from the titles and abstracts of the papers in this site.