Personas as a Way to Model Truthfulness in Language Models
- URL: http://arxiv.org/abs/2310.18168v5
- Date: Tue, 6 Feb 2024 09:04:04 GMT
- Title: Personas as a Way to Model Truthfulness in Language Models
- Authors: Nitish Joshi, Javier Rando, Abulhair Saparov, Najoung Kim, He He
- Abstract summary: Large language models (LLMs) are trained on vast amounts of text from the internet.
This paper presents an explanation for why LMs appear to know the truth despite not being trained with truth labels.
- Score: 23.86655844340011
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) are trained on vast amounts of text from the internet, which contains both factual and misleading information about the world. While unintuitive from a classic view of LMs, recent work has shown that the truth value of a statement can be elicited from the model's representations. This paper presents an explanation for why LMs appear to know the truth despite not being trained with truth labels. We hypothesize that the pretraining data is generated by groups of (un)truthful agents whose outputs share common features, forming an (un)truthful persona. By training on this data, LMs can infer and represent these personas in their activation space. This allows the model to separate truth from falsehood and to control the truthfulness of its generations. We show evidence for the persona hypothesis via two observations: (1) we can probe whether a model's answer will be truthful before it is generated; (2) finetuning a model on a set of facts improves its truthfulness on unseen topics. Next, using arithmetic as a synthetic environment, we show that the structure of the pretraining data is crucial for the model to infer the truthful persona. Overall, our findings suggest that models can exploit hierarchical structures in the data to learn abstract concepts like truthfulness.
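As a loose illustration of observation (1), the sketch below trains a linear probe on the hidden state of the final prompt token to predict whether the forthcoming answer will be truthful. This is a minimal sketch, not the authors' setup: the model name, probe layer, and the tiny labeled question set are all assumptions, and a real experiment would use many questions with truthfulness labels obtained from the model's own generations.

```python
# Minimal sketch: probe the hidden state at the end of the prompt to predict
# whether the model's *forthcoming* answer will be truthful.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

model_name = "gpt2"  # assumption: any causal LM with accessible hidden states
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

LAYER = 6  # assumption: a middle layer; the most informative layer varies by model

def prompt_features(question: str) -> list[float]:
    """Hidden state of the last prompt token, taken before any answer is generated."""
    inputs = tok(f"Q: {question}\nA:", return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.hidden_states[LAYER][0, -1].tolist()

# Hypothetical labels: 1 if the model's sampled answer to the question was judged
# truthful in a separate evaluation pass, 0 otherwise.
questions = [
    "What is the capital of France?",
    "Can you see the Great Wall of China from space?",
]
labels = [1, 0]

X = [prompt_features(q) for q in questions]
probe = LogisticRegression(max_iter=1000).fit(X, labels)
# probe.predict_proba(...) now estimates, from the prompt alone, whether the answer
# the model is about to generate will be truthful.
```

Above-chance accuracy for such a probe on held-out questions is what the persona hypothesis predicts: the model's internal state already encodes which persona is "speaking" before any answer tokens are emitted.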
Related papers
- MuLan: A Study of Fact Mutability in Language Models [50.626787909759976]
Trustworthy language models ideally identify mutable facts as such and process them accordingly.
We create MuLan, a benchmark for evaluating the ability of English language models to anticipate the time-contingency of facts.
arXiv Detail & Related papers (2024-04-03T19:47:33Z)
- The Geometry of Truth: Emergent Linear Structure in Large Language Model Representations of True/False Datasets [6.732432949368421]
Large Language Models (LLMs) have impressive capabilities, but are prone to outputting falsehoods.
Recent work has developed techniques for inferring whether an LLM is telling the truth by training probes on the LLM's internal activations.
We present evidence that at sufficient scale, LLMs linearly represent the truth or falsehood of factual statements.
arXiv Detail & Related papers (2023-10-10T17:54:39Z)
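As a rough sketch of the kind of linear structure the paper above reports, the snippet below computes a difference-of-means direction between activations of true and false statements and classifies new activations by their projection onto it. The activations here are random placeholders, and the hidden size and classification rule are assumptions rather than the paper's exact procedure.

```python
# Sketch of a difference-of-means "truth direction". In practice the activation
# arrays would be a chosen layer's hidden states for true and false statements.
import numpy as np

rng = np.random.default_rng(0)
d = 768  # assumption: hidden size of the model being probed
acts_true = rng.normal(loc=0.3, scale=1.0, size=(500, d))   # placeholder activations
acts_false = rng.normal(loc=-0.3, scale=1.0, size=(500, d))  # placeholder activations

# The candidate truth direction is simply the difference of the class means.
direction = acts_true.mean(axis=0) - acts_false.mean(axis=0)
direction /= np.linalg.norm(direction)

# Classify a held-out activation by the sign of its projection onto the direction,
# measured relative to the midpoint between the two class means.
midpoint = (acts_true.mean(axis=0) + acts_false.mean(axis=0)) / 2

def predict_true(act: np.ndarray) -> bool:
    return float((act - midpoint) @ direction) > 0
```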
- Physics of Language Models: Part 3.2, Knowledge Manipulation [51.68385617116854]
This paper investigates four fundamental knowledge manipulation tasks.
We show that language models excel in knowledge retrieval but struggle even in the simplest classification or comparison tasks.
Our findings also apply to modern pretrained language models such as GPT-4.
arXiv Detail & Related papers (2023-09-25T17:50:41Z)
- Deduction under Perturbed Evidence: Probing Student Simulation Capabilities of Large Language Models [27.943334687742244]
We show that even the most advanced GPT models struggle to reason over manipulated facts.
Our findings have practical implications for understanding the performance of LLMs in real-world applications.
arXiv Detail & Related papers (2023-05-23T20:26:03Z)
- Discovering Latent Knowledge in Language Models Without Supervision [72.95136739040676]
Existing techniques for training language models can be misaligned with the truth.
We propose directly finding latent knowledge inside the internal activations of a language model in a purely unsupervised way.
We show that despite using no supervision and no model outputs, our method can recover diverse knowledge represented in large language models.
arXiv Detail & Related papers (2022-12-07T18:17:56Z)
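The unsupervised objective the paper above proposes (Contrast-Consistent Search) can be sketched as follows: a small probe is trained so that its outputs on the two halves of a contrast pair are consistent (they should sum to one) and confident (not both near 0.5). The tensors below are random placeholders for real contrast-pair activations, the probe architecture and hyperparameters are assumptions, and the original method's activation normalization is omitted for brevity.

```python
# Sketch of the Contrast-Consistent Search (CCS) objective on placeholder data.
import torch
import torch.nn as nn

d = 768                      # assumption: hidden size of the probed layer
probe = nn.Sequential(nn.Linear(d, 1), nn.Sigmoid())
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)

acts_pos = torch.randn(256, d)  # placeholder: activations of "x is true"-style prompts
acts_neg = torch.randn(256, d)  # placeholder: activations of "x is false"-style prompts

for _ in range(100):
    p_pos = probe(acts_pos).squeeze(-1)
    p_neg = probe(acts_neg).squeeze(-1)
    consistency = (p_pos - (1 - p_neg)) ** 2        # the two views should agree
    confidence = torch.minimum(p_pos, p_neg) ** 2   # discourage the 0.5/0.5 solution
    loss = (consistency + confidence).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# The learned probe recovers a truth-like direction only up to sign; which pole
# means "true" must be resolved afterwards (e.g., with a handful of known facts).
```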
- Large Language Models with Controllable Working Memory [64.71038763708161]
Large language models (LLMs) have led to a series of breakthroughs in natural language processing (NLP).
What further sets these models apart is the massive amounts of world knowledge they internalize during pretraining.
How the model's world knowledge interacts with the factual information presented in the context remains underexplored.
arXiv Detail & Related papers (2022-11-09T18:58:29Z)
- FaVIQ: FAct Verification from Information-seeking Questions [77.7067957445298]
We construct a large-scale fact verification dataset called FaVIQ using information-seeking questions posed by real users.
Our claims are verified to be natural, contain little lexical bias, and require a complete understanding of the evidence for verification.
arXiv Detail & Related papers (2021-07-05T17:31:44Z)
- Facts as Experts: Adaptable and Interpretable Neural Memory over Symbolic Knowledge [38.48518306055536]
We develop a neural language model that includes an explicit interface between symbolically interpretable factual information and subsymbolic neural knowledge.
We show that this model dramatically improves performance on two knowledge-intensive question-answering tasks.
arXiv Detail & Related papers (2020-07-02T03:05:41Z)
- Leap-Of-Thought: Teaching Pre-Trained Models to Systematically Reason Over Implicit Knowledge [96.92252296244233]
Large pre-trained language models (LMs) acquire some reasoning capacity, but this ability is difficult to control.
We show that LMs can be trained to reliably perform systematic reasoning combining both implicit, pre-trained knowledge and explicit natural language statements.
Our work paves a path towards open-domain systems that constantly improve by interacting with users who can instantly correct a model by adding simple natural language statements.
arXiv Detail & Related papers (2020-06-11T17:02:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.