Crawling the Internal Knowledge-Base of Language Models
- URL: http://arxiv.org/abs/2301.12810v1
- Date: Mon, 30 Jan 2023 12:03:36 GMT
- Title: Crawling the Internal Knowledge-Base of Language Models
- Authors: Roi Cohen, Mor Geva, Jonathan Berant, Amir Globerson
- Abstract summary: We describe a procedure for ``crawling'' the internal knowledge-base of a language model.
We evaluate our approach on graphs crawled starting from dozens of seed entities.
- Score: 53.95793060766248
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Language models are trained on large volumes of text, and as a result their
parameters might contain a significant body of factual knowledge. Any
downstream task performed by these models implicitly builds on these facts, and
thus it is highly desirable to have means for representing this body of
knowledge in an interpretable way. However, there is currently no mechanism for
such a representation. Here, we propose to address this goal by extracting a
knowledge-graph of facts from a given language model. We describe a procedure
for ``crawling'' the internal knowledge-base of a language model. Specifically,
given a seed entity, we expand a knowledge-graph around it. The crawling
procedure is decomposed into sub-tasks, realized through specially designed
prompts that control for both precision (i.e., that no wrong facts are
generated) and recall (i.e., the number of facts generated). We evaluate our
approach on graphs crawled starting from dozens of seed entities, and show it
yields high precision graphs (82-92%), while emitting a reasonable number of
facts per entity.
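The abstract describes the crawl only at a high level. The sketch below illustrates one way such a loop could look: a breadth-first expansion around the seed entity in which the model is prompted for relations, then for objects, and finally for a yes/no verification that acts as a precision filter. The prompt wording, the `query_lm` interface, and the verification step are illustrative assumptions, not the paper's actual sub-task prompts.

```python
# A minimal sketch of a knowledge-graph crawl in the spirit of the abstract.
# All prompts and the `query_lm` callable are assumptions for illustration.
from collections import deque
from typing import Callable

Triple = tuple[str, str, str]


def crawl(
    seed_entity: str,
    query_lm: Callable[[str], list[str]],  # returns one completion per line
    max_entities: int = 100,
) -> set[Triple]:
    """Expand a graph of (subject, relation, object) triples around a seed."""
    graph: set[Triple] = set()
    queue = deque([seed_entity])
    visited = {seed_entity}

    while queue and len(visited) <= max_entities:
        subject = queue.popleft()

        # Sub-task 1: elicit candidate relations for the subject.
        relations = query_lm(
            f"List relations that describe facts about {subject}, one per line:"
        )

        for relation in relations:
            # Sub-task 2: elicit objects completing (subject, relation, ?).
            # Letting the model answer 'none' is one way to trade recall
            # for precision.
            objects = query_lm(
                f"The {relation} of {subject} is (one per line, or 'none'):"
            )
            for obj in objects:
                if obj.strip().lower() == "none":
                    continue

                # Sub-task 3: precision filter -- keep the triple only if
                # the model affirms it when asked directly.
                verdict = query_lm(
                    f"True or false: the {relation} of {subject} is {obj}."
                )
                if verdict and verdict[0].strip().lower().startswith("true"):
                    graph.add((subject, relation, obj.strip()))
                    if obj not in visited:
                        visited.add(obj)
                        queue.append(obj)  # expand around new entities

    return graph
```

In the paper, each sub-task is realized through specially designed prompts and the resulting graphs reach 82-92% precision; here the three `query_lm` calls merely stand in for that decomposition.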
Related papers
- Transparency at the Source: Evaluating and Interpreting Language Models With Access to the True Distribution [4.01799362940916]
We present a setup for training, evaluating and interpreting neural language models that uses artificial, language-like data.
The data is generated using a massive probabilistic grammar, which is itself derived from a large natural language corpus.
With access to the underlying true source, our results show striking differences in learning dynamics between different classes of words.
arXiv Detail & Related papers (2023-10-23T12:03:01Z)
- Knowledge Graph Guided Semantic Evaluation of Language Models For User Trust [7.063958622970576]
This study evaluates the semantics encoded in self-attention transformers by leveraging explicit knowledge graph structures.
The opacity of language models has an immense bearing on societal issues of trust and explainable decision outcomes.
arXiv Detail & Related papers (2023-05-08T18:53:14Z)
- Discovering Latent Knowledge in Language Models Without Supervision [72.95136739040676]
Existing techniques for training language models can be misaligned with the truth.
We propose directly finding latent knowledge inside the internal activations of a language model in a purely unsupervised way.
We show that despite using no supervision and no model outputs, our method can recover diverse knowledge represented in large language models.
arXiv Detail & Related papers (2022-12-07T18:17:56Z)
- A Latent-Variable Model for Intrinsic Probing [93.62808331764072]
We propose a novel latent-variable formulation for constructing intrinsic probes.
We find empirical evidence that pre-trained representations develop a cross-lingually entangled notion of morphosyntax.
arXiv Detail & Related papers (2022-01-20T15:01:12Z)
- Unnatural Language Inference [48.45003475966808]
We find that state-of-the-art NLI models, such as RoBERTa and BART, are invariant to, and sometimes even perform better on, examples with randomly reordered words.
Our findings call into question the idea that our natural language understanding models, and the tasks used for measuring their progress, genuinely require a human-like understanding of syntax.
arXiv Detail & Related papers (2020-12-30T20:40:48Z)
- Do Language Embeddings Capture Scales? [54.1633257459927]
We show that pretrained language models capture a significant amount of information about the scalar magnitudes of objects.
We identify contextual information in pre-training and numeracy as two key factors affecting their performance.
arXiv Detail & Related papers (2020-10-11T21:11:09Z)
- Information-Theoretic Probing for Linguistic Structure [74.04862204427944]
We propose an information-theoretic operationalization of probing as estimating mutual information.
We evaluate on a set of ten typologically diverse languages often underrepresented in NLP research.
arXiv Detail & Related papers (2020-04-07T01:06:36Z)