Probing via Prompting
- URL: http://arxiv.org/abs/2207.01736v1
- Date: Mon, 4 Jul 2022 22:14:40 GMT
- Title: Probing via Prompting
- Authors: Jiaoda Li, Ryan Cotterell, Mrinmaya Sachan
- Abstract summary: This paper introduces a novel model-free approach to probing, by formulating probing as a prompting task.
We conduct experiments on five probing tasks and show that our approach is comparable to, or better than, diagnostic probes at extracting information.
We then examine the usefulness of a specific linguistic property for pre-training by removing the heads that are essential to that property and evaluating the resulting model's performance on language modeling.
- Score: 71.7904179689271
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Probing is a popular method to discern what linguistic information is
contained in the representations of pre-trained language models. However, the
mechanism of selecting the probe model has recently been subject to intense
debate, as it is not clear if the probes are merely extracting information or
modeling the linguistic property themselves. To address this challenge, this
paper introduces a novel model-free approach to probing, by formulating probing
as a prompting task. We conduct experiments on five probing tasks and show that
our approach is comparable to, or better than, diagnostic probes at extracting
information, while learning much less on its own. We further combine the probing via
prompting approach with attention head pruning to analyze where the model
stores the linguistic information in its architecture. We then examine the
usefulness of a specific linguistic property for pre-training by removing the
heads that are essential to that property and evaluating the resulting model's
performance on language modeling.
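As a rough sketch of how a prompting-based probe and head masking (as a stand-in for pruning) could be wired together with a HuggingFace causal LM; the model name, prompt template, label verbalizer, and the particular head zeroed out are illustrative assumptions, not the paper's released setup:

```python
# Sketch: probing via prompting with optional attention-head masking.
# The POS-style task, prompt wording, and masked head below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def label_score(sentence, word, label, head_mask=None):
    """Log-likelihood of a verbalized label appended to a prompt-formatted probe input."""
    prompt = f'In "{sentence}", the part of speech of "{word}" is'
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    label_ids = tokenizer(" " + label, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, label_ids], dim=-1)
    with torch.no_grad():
        logits = model(input_ids, head_mask=head_mask).logits
    # Positions prompt_len-1 .. end-2 predict the label tokens.
    log_probs = logits.log_softmax(-1)[0, prompt_ids.size(1) - 1 : -1]
    return log_probs.gather(-1, label_ids[0].unsqueeze(-1)).sum().item()

def predict(sentence, word, labels, head_mask=None):
    return max(labels, key=lambda l: label_score(sentence, word, l, head_mask))

# Zeroing entries of the head mask removes heads; comparing probe predictions with and
# without the mask hints at where the property is stored in the architecture.
mask = torch.ones(model.config.n_layer, model.config.n_head)
mask[3, 5] = 0.0  # illustrative choice of a single head to remove

print(predict("The cat sat on the mat.", "cat", ["noun", "verb", "adjective"]))
print(predict("The cat sat on the mat.", "cat", ["noun", "verb", "adjective"], head_mask=mask))
```

Evaluating the masked model's language-modeling loss on held-out text, in the spirit of the paper's final experiment, would then indicate how useful the removed heads are for pre-training.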
Related papers
- Likelihood as a Performance Gauge for Retrieval-Augmented Generation [78.28197013467157]
We show that likelihoods serve as an effective gauge for language model performance.
We propose two methods that use question likelihood as a gauge for selecting and constructing prompts that lead to better performance.
arXiv Detail & Related papers (2024-11-12T13:14:09Z)
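A minimal sketch of the general idea of using question likelihood under the model as a gauge for prompt selection; the templates and the average-log-probability criterion are assumptions, not necessarily that paper's exact procedure:

```python
# Sketch: rank candidate prompts by the average log-likelihood the model assigns
# to the question when it follows each prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def question_log_likelihood(prompt: str, question: str) -> float:
    """Average log-probability of the question tokens given the prompt."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    question_ids = tokenizer(" " + question, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, question_ids], dim=-1)
    with torch.no_grad():
        logits = model(input_ids).logits
    log_probs = logits.log_softmax(-1)[0, prompt_ids.size(1) - 1 : -1]
    return log_probs.gather(-1, question_ids[0].unsqueeze(-1)).mean().item()

# Illustrative candidate prompts; "..." stands in for a retrieved passage.
candidate_prompts = [
    "Answer the question using the passage below.\nPassage: ...\n",
    "Passage: ...\nRead the passage, then answer the question.\n",
]
question = "When was the bridge completed?"
best_prompt = max(candidate_prompts, key=lambda p: question_log_likelihood(p, question))
```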
- Learning Phonotactics from Linguistic Informants [54.086544221761486]
Our model iteratively selects or synthesizes a data-point according to one of a range of information-theoretic policies.
We find that the information-theoretic policies that our model uses to select items to query the informant achieve sample efficiency comparable to, or greater than, fully supervised approaches.
arXiv Detail & Related papers (2024-05-08T00:18:56Z)
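A minimal sketch of one such information-theoretic policy, here maximum predictive entropy over candidate forms; the toy learner and the binary licit/illicit framing are assumptions, and the paper explores a wider range of policies:

```python
# Sketch: query the informant about the form the current model is most uncertain about.
import math

def entropy(p: float) -> float:
    """Entropy (in bits) of a Bernoulli 'is this form phonotactically licit?' prediction."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def select_query(candidates, predict_prob):
    """Pick the candidate whose predicted label carries the most uncertainty."""
    return max(candidates, key=lambda form: entropy(predict_prob(form)))

# Toy probabilities from a hypothetical learner of English-like phonotactics.
toy_probs = {"blick": 0.9, "bnick": 0.55, "ngick": 0.1}
query = select_query(toy_probs.keys(), toy_probs.get)  # -> "bnick", the most informative item
```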
- Feature Interactions Reveal Linguistic Structure in Language Models [2.0178765779788495]
We study feature interactions in the context of feature attribution methods for post-hoc interpretability.
We work out a grey box methodology, in which we train models to perfection on a formal language classification task.
We show that under specific configurations, some methods are indeed able to uncover the grammatical rules acquired by a model.
arXiv Detail & Related papers (2023-06-21T11:24:41Z)
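A minimal occlusion-based sketch of measuring a pairwise feature interaction as deviation from additivity; the masking scheme and interaction definition are generic assumptions, not that paper's specific grey-box methodology:

```python
# Sketch: score the interaction between two input positions by comparing the effect of
# occluding them jointly versus separately under any scoring function.
from typing import Callable, List

def interaction(tokens: List[str], i: int, j: int,
                score: Callable[[List[str]], float], mask: str = "[MASK]") -> float:
    """Deviation from additivity when positions i and j are occluded together."""
    def occlude(*idx):
        return [mask if k in idx else t for k, t in enumerate(tokens)]
    full = score(tokens)
    d_i = full - score(occlude(i))
    d_j = full - score(occlude(j))
    d_ij = full - score(occlude(i, j))
    return d_ij - (d_i + d_j)  # nonzero => the two positions interact

# Usage with any scorer, e.g. the probability a string belongs to the formal language:
# interaction(["a", "a", "b", "b"], 1, 2, lambda ts: classifier_prob(ts))
```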
- A Latent-Variable Model for Intrinsic Probing [93.62808331764072]
We propose a novel latent-variable formulation for constructing intrinsic probes.
We find empirical evidence that pre-trained representations develop a cross-lingually entangled notion of morphosyntax.
arXiv Detail & Related papers (2022-01-20T15:01:12Z)
- Probing Task-Oriented Dialogue Representation from Language Models [106.02947285212132]
This paper investigates pre-trained language models to find out which model intrinsically carries the most informative representation for task-oriented dialogue tasks.
We fine-tune a feed-forward layer as the classifier probe on top of a fixed pre-trained language model with annotated labels in a supervised way.
arXiv Detail & Related papers (2020-10-26T21:34:39Z)
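A minimal sketch of such a diagnostic probe: a small feed-forward classifier trained on top of frozen pre-trained representations; the encoder, pooling choice, and label count are assumptions:

```python
# Sketch: train a linear probe on frozen [CLS] representations, keeping the LM fixed.
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased").eval()
for p in encoder.parameters():
    p.requires_grad = False  # the pre-trained model stays fixed; only the probe learns

probe = nn.Linear(encoder.config.hidden_size, 5)  # e.g. 5 illustrative dialogue-act labels
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(texts, labels):
    enc = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        hidden = encoder(**enc).last_hidden_state[:, 0]  # [CLS] representation
    logits = probe(hidden)
    loss = loss_fn(logits, torch.tensor(labels))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```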
- Information-Theoretic Probing for Linguistic Structure [74.04862204427944]
We propose an information-theoretic operationalization of probing as estimating mutual information.
We evaluate on a set of ten typologically diverse languages often underrepresented in NLP research.
arXiv Detail & Related papers (2020-04-07T01:06:36Z)
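A minimal sketch of the mutual-information view: a probe's cross-entropy upper-bounds H(property | representation), so H(property) minus that cross-entropy lower-bounds the mutual information; the helper names and toy numbers below are assumptions:

```python
# Sketch: estimate I(property; representation) as H(property) minus a probe's cross-entropy,
# which gives a lower bound on the mutual information.
import math
from collections import Counter
from typing import List

def label_entropy(labels: List[str]) -> float:
    """Empirical entropy H(Y) of the property, in bits."""
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def mi_estimate(labels: List[str], probe_cross_entropy_bits: float) -> float:
    """I(Y; R) >= H(Y) - CE_probe, since the probe's cross-entropy upper-bounds H(Y | R)."""
    return label_entropy(labels) - probe_cross_entropy_bits

# e.g. a label set with ~3.2 bits of entropy and a probe achieving 0.9 bits of cross-entropy
# would yield an estimate of ~2.3 bits of extractable information.
```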
This list is automatically generated from the titles and abstracts of the papers on this site.