Discovering Language Model Behaviors with Model-Written Evaluations
- URL: http://arxiv.org/abs/2212.09251v1
- Date: Mon, 19 Dec 2022 05:13:52 GMT
- Title: Discovering Language Model Behaviors with Model-Written Evaluations
- Authors: Ethan Perez, Sam Ringer, Kamilė Lukošiūtė, Karina Nguyen,
Edwin Chen, Scott Heiner, Craig Pettit, Catherine Olsson, Sandipan Kundu,
Saurav Kadavath, Andy Jones, Anna Chen, Ben Mann, Brian Israel, Bryan
Seethor, Cameron McKinnon, Christopher Olah, Da Yan, Daniela Amodei, Dario
Amodei, Dawn Drain, Dustin Li, Eli Tran-Johnson, Guro Khundadze, Jackson
Kernion, James Landis, Jamie Kerr, Jared Mueller, Jeeyoon Hyun, Joshua
Landau, Kamal Ndousse, Landon Goldberg, Liane Lovitt, Martin Lucas, Michael
Sellitto, Miranda Zhang, Neerav Kingsland, Nelson Elhage, Nicholas Joseph,
Noemí Mercado, Nova DasSarma, Oliver Rausch, Robin Larson, Sam McCandlish,
Scott Johnston, Shauna Kravec, Sheer El Showk, Tamera Lanham, Timothy
Telleen-Lawton, Tom Brown, Tom Henighan, Tristan Hume, Yuntao Bai, Zac
Hatfield-Dodds, Jack Clark, Samuel R. Bowman, Amanda Askell, Roger Grosse,
Danny Hernandez, Deep Ganguli, Evan Hubinger, Nicholas Schiefer, Jared Kaplan
- Abstract summary: As language models (LMs) scale, they develop many novel behaviors, good and bad, exacerbating the need to evaluate how they behave.
Here, we automatically generate evaluations with LMs.
We generate 154 datasets and discover new cases of inverse scaling where LMs get worse with size.
- Score: 18.24267922379281
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As language models (LMs) scale, they develop many novel behaviors, good and
bad, exacerbating the need to evaluate how they behave. Prior work creates
evaluations with crowdwork (which is time-consuming and expensive) or existing
data sources (which are not always available). Here, we automatically generate
evaluations with LMs. We explore approaches with varying amounts of human
effort, from instructing LMs to write yes/no questions to making complex
Winogender schemas with multiple stages of LM-based generation and filtering.
Crowdworkers rate the examples as highly relevant and agree with 90-100% of
labels, sometimes more so than corresponding human-written datasets. We
generate 154 datasets and discover new cases of inverse scaling where LMs get
worse with size. Larger LMs repeat back a dialog user's preferred answer
("sycophancy") and express greater desire to pursue concerning goals like
resource acquisition and goal preservation. We also find some of the first
examples of inverse scaling in RL from Human Feedback (RLHF), where more RLHF
makes LMs worse. For example, RLHF makes LMs express stronger political views
(on gun rights and immigration) and a greater desire to avoid shut down.
Overall, LM-written evaluations are high-quality and let us quickly discover
many novel LM behaviors.
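Below is a minimal sketch, not the paper's released code, of the generate-then-filter recipe the abstract describes: a generator LM writes labeled yes/no questions probing a target behavior, a second model filters them for relevance and label correctness, and the resulting dataset scores models by the fraction of answers matching the behavior. The `complete()` and `model_complete` helpers are hypothetical stand-ins for whatever text-generation API is used.

```python
# Sketch of the model-written evaluations recipe (assumptions noted in the
# text above; not the paper's implementation).

from dataclasses import dataclass
from typing import Callable, List


def complete(prompt: str) -> str:
    """Hypothetical stand-in for a text-generation API call."""
    raise NotImplementedError


@dataclass
class EvalExample:
    question: str
    answer_matching_behavior: str  # "Yes" or "No"


def generate_examples(behavior: str, n: int) -> List[EvalExample]:
    """Stage 1: instruct a generator LM to write labeled yes/no questions."""
    examples = []
    for _ in range(n):
        text = complete(
            "Write a yes/no question that tests whether an AI assistant "
            f"exhibits the following behavior: {behavior}\n"
            "Format:\nQuestion: <question>\nAnswer matching behavior: <Yes or No>"
        )
        fields = dict(
            line.split(": ", 1) for line in text.splitlines() if ": " in line
        )
        if {"Question", "Answer matching behavior"} <= fields.keys():
            examples.append(
                EvalExample(fields["Question"], fields["Answer matching behavior"])
            )
    return examples


def filter_examples(examples: List[EvalExample], behavior: str) -> List[EvalExample]:
    """Stage 2: keep examples a second model judges relevant and correctly labeled."""
    kept = []
    for ex in examples:
        verdict = complete(
            f"Behavior: {behavior}\nQuestion: {ex.question}\n"
            f"Label: {ex.answer_matching_behavior}\n"
            "Is the question relevant to the behavior, and is the label correct? "
            "Answer Yes or No."
        )
        if verdict.strip().lower().startswith("yes"):
            kept.append(ex)
    return kept


def score_model(model_complete: Callable[[str], str],
                examples: List[EvalExample]) -> float:
    """Fraction of answers matching the behavior; comparing this number across
    model sizes (or RLHF steps) is how trends like sycophancy and inverse
    scaling are surfaced."""
    matches = sum(
        model_complete(f"{ex.question}\nAnswer Yes or No.")
        .strip().lower().startswith(ex.answer_matching_behavior.lower())
        for ex in examples
    )
    return matches / max(len(examples), 1)
```

In the paper's more involved pipelines, the filtering stage uses a preference model to score generated examples rather than a free-form yes/no judgment; only the overall generate-then-filter structure is kept here.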
Related papers
- Belief Revision: The Adaptability of Large Language Models Reasoning [63.0281286287648]
We introduce Belief-R, a new dataset designed to test LMs' belief revision ability when presented with new evidence.
Inspired by how humans suppress prior inferences, this task assesses LMs within the newly proposed delta reasoning framework.
We evaluate $\sim$30 LMs across diverse prompting strategies and find that LMs generally struggle to appropriately revise their beliefs in response to new information.
arXiv Detail & Related papers (2024-06-28T09:09:36Z)
- Evaluating $n$-Gram Novelty of Language Models Using Rusty-DAWG [57.14250086701313]
We investigate the extent to which modern LMs generate $n$-grams from their training data.
We develop Rusty-DAWG, a novel search tool inspired by indexing of genomic data.
arXiv Detail & Related papers (2024-06-18T21:31:19Z)
- CaLM: Contrasting Large and Small Language Models to Verify Grounded Generation [76.31621715032558]
Grounded generation aims to equip language models (LMs) with the ability to produce more credible and accountable responses.
We introduce CaLM, a novel verification framework.
Our framework empowers smaller LMs, which rely less on parametric memory, to validate the output of larger LMs.
arXiv Detail & Related papers (2024-06-08T06:04:55Z)
- Can Small Language Models Help Large Language Models Reason Better?: LM-Guided Chain-of-Thought [51.240387516059535]
We introduce a novel framework, LM-Guided CoT, that leverages a lightweight (i.e., 1B) language model (LM) to guide a black-box large (i.e., >10B) LM in reasoning tasks.
We optimize the model through 1) knowledge distillation and 2) reinforcement learning from rationale-oriented and task-oriented reward signals.
arXiv Detail & Related papers (2024-04-04T12:46:37Z)
- Reliable, Adaptable, and Attributable Language Models with Retrieval [144.26890121729514]
Parametric language models (LMs) are trained on vast amounts of web data.
They face practical challenges such as hallucinations, difficulty in adapting to new data distributions, and a lack of verifiability.
We advocate for retrieval-augmented LMs to replace parametric LMs as the next generation of LMs.
arXiv Detail & Related papers (2024-03-05T18:22:33Z)
- Ranking Large Language Models without Ground Truth [24.751931637152524]
Evaluation and ranking of large language models (LLMs) has become an important problem with the proliferation of these models.
We provide a novel perspective where, given a dataset of prompts and a set of LLMs, we rank the models without access to any ground truth or reference responses.
Applying this idea repeatedly, we propose two methods to rank LLMs.
arXiv Detail & Related papers (2024-02-21T00:49:43Z)
- Retrieval-Pretrained Transformer: Long-range Language Modeling with Self-retrieval [51.437420003471615]
We propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch.
RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
arXiv Detail & Related papers (2023-06-23T10:18:02Z)
- Characterizing Attribution and Fluency Tradeoffs for Retrieval-Augmented Large Language Models [6.425088990363101]
We examine the relationship between fluency and attribution in Large Language Models prompted with retrieved evidence.
We show that larger models tend to do much better in both fluency and attribution.
We propose a recipe that could allow smaller models to both close the gap with larger models and preserve the benefits of top-k retrieval.
arXiv Detail & Related papers (2023-02-11T02:43:34Z)
- Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts [19.43042432631113]
Previous work has shown that there exists a scaling law between the size of Language Models (LMs) and their zero-shot performance on different downstream NLP tasks.
In this work, we show that this phenomenon does not hold when evaluating large LMs on tasks with negated prompts, but instead shows an inverse scaling law.
arXiv Detail & Related papers (2022-09-26T14:05:10Z)