Simple synthetic data reduces sycophancy in large language models
- URL: http://arxiv.org/abs/2308.03958v2
- Date: Thu, 15 Feb 2024 01:03:13 GMT
- Title: Simple synthetic data reduces sycophancy in large language models
- Authors: Jerry Wei and Da Huang and Yifeng Lu and Denny Zhou and Quoc V. Le
- Abstract summary: We study the prevalence of sycophancy in language models.
Sycophancy is a behavior in which models tailor their responses to follow a human user's view even when that view is not objectively correct.
- Score: 88.4435858554904
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sycophancy is an undesirable behavior where models tailor their responses to
follow a human user's view even when that view is not objectively correct
(e.g., adapting liberal views once a user reveals that they are liberal). In
this paper, we study the prevalence of sycophancy in language models and
propose a simple synthetic-data intervention to reduce this behavior.
First, on a set of three sycophancy tasks (Perez et al., 2022) where models
are asked for an opinion on statements with no correct answers (e.g.,
politics), we observe that both model scaling and instruction tuning
significantly increase sycophancy for PaLM models up to 540B parameters.
Second, we extend sycophancy evaluations to simple addition statements that are
objectively incorrect, finding that despite knowing that these statements are
wrong, language models will still agree with them if the user does as well.
To reduce sycophancy, we present a straightforward synthetic-data
intervention that takes public NLP tasks and encourages models to be robust to
user opinions on these tasks. Adding these data in a lightweight finetuning
step can significantly reduce sycophantic behavior on held-out prompts. Code
for generating synthetic data for intervention can be found at
https://github.com/google/sycophancy-intervention.
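The repository linked in the abstract contains the authors' data-generation code. As a rough illustration of the recipe the abstract describes, the sketch below assembles prompts in which a fictional user voices an opinion about a claim drawn from a public NLP task, while the training target depends only on whether the claim is objectively correct; mixing such examples into a lightweight finetuning step is the intervention. The claim pool, names, and prompt wording here are invented for illustration and are not the exact templates used in the paper or repository.

```python
import random

# Illustrative pool of (claim, label) pairs that would normally be derived from
# a public NLP task (e.g., a sentiment or NLI dataset). The label reflects only
# whether the claim itself is correct.
CLAIMS = [
    ("the sentence 'I loved this movie' expresses positive sentiment", "Agree"),
    ("the sentence 'The plot was dull and predictable' expresses positive sentiment", "Disagree"),
]

NAMES = ["Alex", "Sam", "Jordan"]  # placeholder user names

def make_example(claim: str, label: str) -> dict:
    """Build one synthetic prompt: the user's stated opinion is sampled at
    random, but the target answer is fixed by the claim's true label."""
    name = random.choice(NAMES)
    user_view = random.choice(["agree", "disagree"])  # independent of the label
    prompt = (
        f"Hello, my name is {name}. I {user_view} with the claim that {claim}.\n"
        f"Do you agree or disagree with the following claim? {claim}.\n"
        "Choices:\n (A) Agree\n (B) Disagree\n"
        "Answer:"
    )
    target = "(A) Agree" if label == "Agree" else "(B) Disagree"
    return {"input": prompt, "target": target}

if __name__ == "__main__":
    for claim, label in CLAIMS:
        example = make_example(claim, label)
        print(example["input"], example["target"], sep="\n", end="\n\n")
```

Because the sampled user opinion is independent of the label, roughly half of the generated prompts contain a user view that contradicts the correct answer, so finetuning on the mixture explicitly rewards ignoring the user's stated view when the ground truth is known.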
Related papers
- Chatting Up Attachment: Using LLMs to Predict Adult Bonds [0.0]
We use GPT-4 and Claude 3 Opus to create agents that simulate adults with varying profiles, childhood memories, and attachment styles.
We evaluate our models using a transcript dataset from 9 humans who underwent the same interview protocol, analyzed and labeled by mental health professionals.
Our findings indicate that training the models using only synthetic data achieves performance comparable to training the models on human data.
arXiv Detail & Related papers (2024-08-31T04:29:19Z)
- Self-Consuming Generative Models with Curated Data Provably Optimize Human Preferences [20.629333587044012]
We study the impact of data curation on iterated retraining of generative models.
We prove that, if the data is curated according to a reward model, the expected reward of the iterative retraining procedure is maximized; a toy sketch of such a curated retraining loop follows this entry.
arXiv Detail & Related papers (2024-06-12T21:28:28Z)
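As a toy illustration of the curated self-consuming setup in the entry above, the sketch below lets the current generative model propose several candidates per prompt, keeps only the candidate a reward model prefers (standing in for curated human preferences), and retrains on the curated set before the next round. `generate`, `reward_model`, and `retrain` are placeholders for this sketch, not the paper's formal setting.

```python
from typing import Callable, List

def curated_retraining(
    prompts: List[str],
    generate: Callable[[str, int], List[str]],   # current model: k candidates per prompt
    reward_model: Callable[[str, str], float],   # scores a candidate; proxy for curated preferences
    retrain: Callable[[List[str]], None],        # update the generative model on curated samples
    rounds: int = 3,
    candidates_per_prompt: int = 4,
) -> None:
    """Each round trains the next model only on reward-curated samples of the current one."""
    for _ in range(rounds):
        curated: List[str] = []
        for prompt in prompts:
            candidates = generate(prompt, candidates_per_prompt)
            # Best-of-k curation: keep the candidate the reward model ranks highest.
            curated.append(max(candidates, key=lambda c: reward_model(prompt, c)))
        retrain(curated)
```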
- Limitations of Agents Simulated by Predictive Models [1.6649383443094403]
We outline two structural reasons for why predictive models can fail when turned into agents.
We show that both of those failures are fixed by including a feedback loop from the environment.
Our treatment provides a unifying view of those failure modes, and informs the question of why fine-tuning offline learned policies with online learning makes them more effective.
arXiv Detail & Related papers (2024-02-08T17:08:08Z)
- Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models [115.501751261878]
Fine-tuning language models (LMs) on human-generated data remains a prevalent practice.
We investigate whether we can go beyond human data on tasks where we have access to scalar feedback.
We find that ReST$^{EM}$ scales favorably with model size and significantly surpasses fine-tuning only on human data; a schematic sketch of this self-training loop follows this entry.
arXiv Detail & Related papers (2023-12-11T18:17:43Z)
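The entry above refers to ReST$^{EM}$, an expectation-maximization style self-training loop: sample candidate solutions from the current model, keep the ones a scalar (often binary) reward accepts, and fine-tune on what is kept. The sketch below is a schematic reading of that loop, not the authors' implementation; `sample`, `reward`, and `finetune` are placeholder callables.

```python
from typing import Callable, List, Tuple

def rest_em_loop(
    problems: List[str],
    sample: Callable[[str, int], List[str]],            # current model: k candidate solutions per problem
    reward: Callable[[str, str], float],                # e.g. 1.0 if the final answer checks out, else 0.0
    finetune: Callable[[List[Tuple[str, str]]], None],  # update the model on accepted (problem, solution) pairs
    iterations: int = 3,
    samples_per_problem: int = 8,
) -> None:
    """Schematic self-training: generate, filter by reward, fine-tune, repeat."""
    for _ in range(iterations):
        kept: List[Tuple[str, str]] = []
        for problem in problems:                        # E-step: generate and filter
            for solution in sample(problem, samples_per_problem):
                if reward(problem, solution) > 0.5:
                    kept.append((problem, solution))
        if kept:                                        # M-step: fine-tune on what survived the filter
            finetune(kept)

if __name__ == "__main__":
    # Dummy components so the sketch runs end to end.
    collected = []
    rest_em_loop(
        problems=["2 + 2 = ?"],
        sample=lambda p, k: ["4", "5"] * (k // 2),
        reward=lambda p, s: 1.0 if s == "4" else 0.0,
        finetune=lambda pairs: collected.extend(pairs),
    )
    print(f"{len(collected)} accepted samples across iterations")
```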
- Fine-tuning Language Models for Factuality [96.5203774943198]
Large pre-trained language models (LLMs) have seen widespread use, sometimes even as a replacement for traditional search engines.
Yet language models are prone to making convincing but factually inaccurate claims, often referred to as 'hallucinations'.
In this work, we fine-tune language models to be more factual, without human labeling.
arXiv Detail & Related papers (2023-11-14T18:59:15Z)
- Are aligned neural networks adversarially aligned? [93.91072860401856]
Adversarial users can construct inputs that circumvent attempts at alignment.
We show that existing NLP-based optimization attacks are insufficiently powerful to reliably attack aligned text models.
We conjecture that improved NLP attacks may demonstrate a similar level of adversarial control over text-only models.
arXiv Detail & Related papers (2023-06-26T17:18:44Z)
- Synthetic data, real errors: how (not) to publish and use synthetic data [86.65594304109567]
We show how the generative process affects the downstream ML task.
We introduce Deep Generative Ensemble (DGE) to approximate the posterior distribution over the generative process model parameters.
arXiv Detail & Related papers (2023-05-16T07:30:29Z)
- Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language? [86.60613602337246]
We introduce a leakage-adjusted simulatability (LAS) metric for evaluating NL explanations.
LAS measures how well explanations help an observer predict a model's output, while controlling for how explanations can directly leak the output; a rough computation sketch follows this entry.
We frame explanation generation as a multi-agent game and optimize explanations for simulatability while penalizing label leakage.
arXiv Detail & Related papers (2020-10-08T16:59:07Z)
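As a rough, illustrative reconstruction of the LAS idea from the summary above (not the paper's exact estimator), the function below macro-averages the simulator's accuracy gain from seeing explanations over "leaking" examples (where the explanation alone already reveals the model's output) and non-leaking ones, so explanations cannot score well merely by restating the label.

```python
import numpy as np

def las_score(sim_with_expl, sim_input_only, sim_expl_only):
    """Rough LAS-style score from three 0/1 arrays: whether a simulator recovered
    the task model's prediction from (input + explanation), input alone, and
    explanation alone, respectively."""
    sim_with_expl = np.asarray(sim_with_expl, dtype=float)
    sim_input_only = np.asarray(sim_input_only, dtype=float)
    leaked = np.asarray(sim_expl_only, dtype=float) == 1.0  # explanation alone reveals the output

    gains = sim_with_expl - sim_input_only  # per-example benefit of seeing the explanation
    group_means = [gains[mask].mean() for mask in (leaked, ~leaked) if mask.any()]
    return float(np.mean(group_means))      # macro-average over leaking / non-leaking groups

# Tiny example: explanations help on non-leaking examples and leak on one example.
print(las_score([1, 1, 1, 0], [0, 1, 0, 0], [1, 0, 0, 0]))
```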
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.