Task Ambiguity in Humans and Language Models
- URL: http://arxiv.org/abs/2212.10711v1
- Date: Tue, 20 Dec 2022 18:35:33 GMT
- Title: Task Ambiguity in Humans and Language Models
- Authors: Alex Tamkin, Kunal Handa, Avash Shrestha, Noah Goodman
- Abstract summary: We propose AmbiBench, a new benchmark of ambiguously-specified classification tasks.
We evaluate humans and models on AmbiBench by seeing how well they identify the intended task.
We show how to dramatically improve the accuracy of language models trained without large-scale human feedback by finetuning on a small number of ambiguous in-context examples.
- Score: 7.033374427612259
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Language models have recently achieved strong performance across a wide range
of NLP benchmarks. However, unlike benchmarks, real world tasks are often
poorly specified, and agents must deduce the user's intended behavior from a
combination of context, instructions, and examples. We investigate how both
humans and models behave in the face of such task ambiguity by proposing
AmbiBench, a new benchmark of six ambiguously-specified classification tasks.
We evaluate humans and models on AmbiBench by seeing how well they identify the
intended task using 1) instructions with varying degrees of ambiguity, and 2)
different numbers of labeled examples. We find that the combination of model
scaling (to 175B parameters) and training with human feedback data enables
models to approach or exceed the accuracy of human participants across tasks,
but that either one alone is not sufficient. In addition, we show how to
dramatically improve the accuracy of language models trained without
large-scale human feedback training by finetuning on a small number of
ambiguous in-context examples, providing a promising direction for teaching
models to generalize well in the face of ambiguity.
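Below is a minimal, illustrative sketch (not the authors' released code) of how an AmbiBench-style evaluation could be set up: a classification task is specified through an instruction of varying ambiguity plus a handful of labeled in-context examples, and a model is scored on whether it recovers the intended labeling rule. The example task (labeling sentences whose subject is an animal), the helper names, and the stand-in model are assumptions for illustration only.

```python
# Illustrative sketch of an ambiguously-specified classification prompt and a
# simple accuracy metric, in the spirit of the AmbiBench setup described above.
# Task, helper names, and the stand-in model are hypothetical.

from typing import Callable, List, Tuple


def build_prompt(instruction: str,
                 examples: List[Tuple[str, str]],
                 query: str) -> str:
    """Compose an instruction, labeled in-context examples, and a query."""
    shots = "\n".join(f"Input: {x}\nLabel: {y}" for x, y in examples)
    return f"{instruction}\n\n{shots}\n\nInput: {query}\nLabel:"


def accuracy(model: Callable[[str], str],
             prompts: List[str],
             gold: List[str]) -> float:
    """Fraction of prompts for which the model's label matches the gold label."""
    preds = [model(p).strip() for p in prompts]
    return sum(p == g for p, g in zip(preds, gold)) / len(gold)


if __name__ == "__main__":
    # Ambiguous instruction: it does not say which feature determines the label.
    ambiguous = "Output 'X' if the sentence fits the rule, otherwise output 'Y'."
    # Disambiguated instruction: the salient feature is stated explicitly.
    explicit = "Output 'X' if the sentence mentions an animal, otherwise output 'Y'."

    examples = [("The dog slept outdoors.", "X"),
                ("The lawyer slept indoors.", "Y")]
    queries = ["The cow ran through the field.", "The teacher ran home."]
    gold = ["X", "Y"]

    # Stand-in model; replace with a real language-model call to run the evaluation.
    dummy_model = lambda prompt: "X"

    prompts = [build_prompt(ambiguous, examples, q) for q in queries]
    print("accuracy under ambiguous instruction:", accuracy(dummy_model, prompts, gold))
```

Comparing accuracy under the ambiguous versus the explicit instruction, and while varying the number of labeled examples, mirrors the two axes of the evaluation described in the abstract.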