Can Large Language Models Truly Understand Prompts? A Case Study with
Negated Prompts
- URL: http://arxiv.org/abs/2209.12711v1
- Date: Mon, 26 Sep 2022 14:05:10 GMT
- Title: Can Large Language Models Truly Understand Prompts? A Case Study with
Negated Prompts
- Authors: Joel Jang, Seonghyeon Ye, Minjoon Seo
- Abstract summary: Previous work has shown that there exists a scaling law between the size of Language Models (LMs) and their zero-shot performance on different downstream NLP tasks.
In this work, we show that this phenomenon does not hold when evaluating large LMs on tasks with negated prompts; instead, performance follows an inverse scaling law.
- Score: 19.43042432631113
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Previous work has shown that there exists a scaling law between the size of
Language Models (LMs) and their zero-shot performance on different downstream
NLP tasks. In this work, we show that this phenomenon does not hold when
evaluating large LMs on tasks with negated prompts; instead, performance
follows an inverse scaling law. We evaluate 9 different tasks with negated prompts on (1)
pretrained LMs (OPT & GPT-3) of varying sizes (125M - 175B), (2) LMs further
pretrained to generalize to novel prompts (InstructGPT), (3) LMs provided with
few-shot examples, and (4) LMs fine-tuned specifically on negated prompts; all
LM types perform worse on negated prompts as they scale, and a huge gap
remains between LM and human performance when comparing the average score on
original and negated prompts. By highlighting a critical limitation of
existing LMs and methods, we urge the community to develop new approaches for
building LMs that actually follow the given instructions. We provide the code
and the datasets to explore negated prompts at
https://github.com/joeljang/negated-prompts-for-llms
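The core measurement is simple: score each answer option under the original and the negated instruction and pick the argmax. Below is a minimal sketch of that protocol; it is not the authors' released code (see the GitHub link above), and the model choice, prompt wording, and toy task are illustrative assumptions.

```python
# Minimal sketch: compare zero-shot predictions under original vs. negated
# instructions by scoring each answer option's log-likelihood.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m").eval()

def option_logprob(prompt: str, option: str) -> float:
    """Sum of log-probs the LM assigns to `option`, conditioned on `prompt`."""
    # Assumes the prompt's tokens are a prefix of the full sequence's tokens,
    # which holds for OPT when the option starts with a space.
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + option, return_tensors="pt").input_ids
    with torch.no_grad():
        log_probs = torch.log_softmax(model(full_ids).logits[0, :-1], dim=-1)
    return sum(log_probs[i, full_ids[0, i + 1]].item()
               for i in range(prompt_len - 1, full_ids.shape[1] - 1))

question = "Question: What is the capital of France? Answer:"
options = [" Paris", " Berlin"]
for instruction in ("Answer correctly. ", "Do NOT answer correctly. "):
    scores = {o: option_logprob(instruction + question, o) for o in options}
    print(repr(instruction), "->", max(scores, key=scores.get))
```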
Related papers
- Can Small Language Models Help Large Language Models Reason Better?: LM-Guided Chain-of-Thought [51.240387516059535]
We introduce a novel framework, LM-Guided CoT, that leverages a lightweight (i.e., 1B) language model (LM) for guiding a black-box large (i.e., >10B) LM in reasoning tasks.
We optimize the model through 1) knowledge distillation and 2) reinforcement learning from rationale-oriented and task-oriented reward signals.
arXiv Detail & Related papers (2024-04-04T12:46:37Z)
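A rough sketch of the decomposition this framework uses: a small LM drafts the rationale and a larger, frozen LM answers conditioned on it. The model names and prompt templates below are stand-in assumptions, and the paper's knowledge distillation and RL tuning of the guide LM are omitted.

```python
# Small guide LM writes the chain of thought; large solver LM answers.
from transformers import pipeline

guide_lm = pipeline("text-generation", model="gpt2")         # stand-in "1B" guide
solver_lm = pipeline("text-generation", model="gpt2-large")  # stand-in ">10B" solver

def lm_guided_answer(question: str) -> str:
    rationale = guide_lm(
        f"Question: {question}\nLet's think step by step:",
        max_new_tokens=64, do_sample=False,
    )[0]["generated_text"]  # includes the prompt plus the generated rationale
    return solver_lm(
        f"{rationale}\nTherefore, the answer is:",
        max_new_tokens=16, do_sample=False,
    )[0]["generated_text"]
```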
- BAGEL: Bootstrapping Agents by Guiding Exploration with Language [19.08719671800276]
This work presents BAGEL, a method for bootstrapping language model (LM) agents without human supervision.
We use BAGEL demonstrations to adapt a zero-shot LM agent at test time via in-context learning over retrieved demonstrations.
We find absolute improvements of 2-13% on ToolQA and MiniWob++, with up to a 13x reduction in execution failures.
arXiv Detail & Related papers (2024-03-12T23:59:15Z)
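A minimal sketch of the test-time adaptation step: retrieve the most similar demonstrations and prepend them as in-context examples. The retrieval metric and hand-written demonstrations below are simplifying assumptions; BAGEL itself synthesizes the demonstrations without human supervision.

```python
# Retrieve nearest demonstrations and assemble an in-context prompt.
from difflib import SequenceMatcher

demonstrations = [
    ("look up the weather in Paris", "call weather_api(city='Paris')"),
    ("find flights to Tokyo", "call flight_search(dest='Tokyo')"),
]  # (instruction, trajectory) pairs the agent would bootstrap for itself

def build_prompt(task: str, k: int = 1) -> str:
    ranked = sorted(
        demonstrations,
        key=lambda d: SequenceMatcher(None, task, d[0]).ratio(),
        reverse=True,
    )
    shots = "\n".join(f"Task: {t}\nActions: {a}" for t, a in ranked[:k])
    return f"{shots}\nTask: {task}\nActions:"

print(build_prompt("look up the weather in Berlin"))
```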
- MemoryPrompt: A Light Wrapper to Improve Context Tracking in Pre-trained Language Models [10.783764497590473]
Transformer-based language models (LMs) track contextual information through large, hard-coded input windows.
We introduce MemoryPrompt, a leaner approach in which the LM is complemented by a small auxiliary recurrent network that passes information to the LM by prefixing its regular input with a sequence of vectors.
Tested on a task designed to probe an LM's ability to keep track of multiple fact updates, a MemoryPrompt-augmented LM outperforms much larger LMs that have access to the full input history.
arXiv Detail & Related papers (2024-02-23T11:30:39Z)
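A toy PyTorch sketch of the auxiliary-network idea: a small recurrent network summarizes the history into a few vectors that are prepended to the LM's input embeddings as soft prompts. The dimensions and wiring are illustrative assumptions, not the paper's exact architecture.

```python
# A small GRU compresses the dialogue history into n_prefix soft-prompt vectors.
import torch
import torch.nn as nn

class MemoryPrompter(nn.Module):
    def __init__(self, d_model: int = 768, n_prefix: int = 4):
        super().__init__()
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)
        self.to_prefix = nn.Linear(d_model, n_prefix * d_model)
        self.n_prefix, self.d_model = n_prefix, d_model

    def forward(self, history_embeds: torch.Tensor) -> torch.Tensor:
        # history_embeds: (batch, seq, d_model) embeddings of past turns
        _, h = self.rnn(history_embeds)        # h: (1, batch, d_model)
        prefix = self.to_prefix(h[-1])         # (batch, n_prefix * d_model)
        return prefix.view(-1, self.n_prefix, self.d_model)

# The result would be concatenated in front of the LM's token embeddings
# (e.g., via `inputs_embeds` in transformers).
prefix = MemoryPrompter()(torch.randn(2, 10, 768))
print(prefix.shape)  # torch.Size([2, 4, 768])
```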
- Small Language Model Can Self-correct [42.76612128849389]
We introduce Intrinsic Self-Correction (ISC) in generative language models, aiming to correct the initial output of LMs in a self-triggered manner.
We conduct experiments using LMs with parameter sizes ranging from 6 billion to 13 billion on two tasks: commonsense reasoning and factual knowledge reasoning.
arXiv Detail & Related papers (2024-01-14T14:29:07Z)
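A minimal sketch of a self-triggered correction loop in this spirit: the same LM drafts an answer, verifies it, and rewrites it if its own verification fails. The `generate` helper and prompts are placeholders, not the paper's method.

```python
# Draft -> self-verify -> revise loop, all driven by the same LM.
def generate(prompt: str) -> str:
    """Placeholder for a call to the underlying LM (e.g., a 6B-13B model)."""
    raise NotImplementedError

def self_correct(question: str, max_rounds: int = 2) -> str:
    answer = generate(f"Q: {question}\nA:")
    for _ in range(max_rounds):
        verdict = generate(
            f"Q: {question}\nProposed answer: {answer}\n"
            "Is this answer correct? Reply yes or no:"
        )
        if verdict.strip().lower().startswith("yes"):
            break
        answer = generate(
            f"Q: {question}\nThe previous answer was wrong: {answer}\n"
            "Corrected answer:"
        )
    return answer
```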
- AlignedCoT: Prompting Large Language Models via Native-Speaking Demonstrations [52.43593893122206]
AlignedCoT is an in-context learning technique for invoking Large Language Models.
It achieves consistent and correct step-wise prompts in zero-shot scenarios.
We conduct experiments on mathematical reasoning and commonsense reasoning.
arXiv Detail & Related papers (2023-11-22T17:24:21Z)
- MAF: Multi-Aspect Feedback for Improving Reasoning in Large Language Models [64.70153487607172]
Language Models (LMs) have shown impressive performance in various natural language tasks.
When it comes to natural language reasoning, LMs still face challenges such as hallucination, generating incorrect intermediate reasoning steps, and making mathematical errors.
Recent research has focused on enhancing LMs through self-improvement using feedback.
In this work, we propose Multi-Aspect Feedback, an iterative refinement framework that integrates multiple feedback modules, including frozen LMs and external tools, each focusing on a specific error category.
arXiv Detail & Related papers (2023-10-19T02:32:39Z)
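A hedged sketch of the iterative multi-aspect loop: each feedback module critiques one error category and the solver revises on the collected critiques. The module set and the `llm` helper are illustrative assumptions, not MAF's exact modules.

```python
# Collect per-aspect feedback, then revise the draft on all of it at once.
from typing import Callable

def llm(prompt: str) -> str:
    raise NotImplementedError  # call a frozen LM or external tool here

FEEDBACK_MODULES: dict[str, Callable[[str, str], str]] = {
    "math": lambda q, a: llm(f"Check the arithmetic in:\n{a}\nFeedback:"),
    "facts": lambda q, a: llm(f"Check factual claims in:\n{a}\nFeedback:"),
}

def refine(question: str, rounds: int = 2) -> str:
    answer = llm(f"Q: {question}\nA:")
    for _ in range(rounds):
        feedback = "\n".join(
            f"[{name}] {mod(question, answer)}"
            for name, mod in FEEDBACK_MODULES.items()
        )
        answer = llm(
            f"Q: {question}\nDraft: {answer}\nFeedback:\n{feedback}\nRevised answer:"
        )
    return answer
```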
- LeTI: Learning to Generate from Textual Interactions [60.425769582343506]
We explore LMs' potential to learn from textual interactions (LETI) that not only check their correctness with binary labels but also pinpoint and explain errors in their outputs through textual feedback.
Our focus is the code generation task, where the model produces code based on natural language instructions.
LETI iteratively fine-tunes the model, using the LM objective, on a concatenation of natural language instructions, LM-generated programs, and textual feedback.
arXiv Detail & Related papers (2023-05-17T15:53:31Z)
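A small sketch of how one LETI-style training sequence might be assembled, assuming the Python interpreter's error message serves as the textual feedback; the field names and formatting are illustrative, not the paper's exact schema.

```python
# Concatenate instruction + generated program + textual feedback into one
# training sequence for LM-objective fine-tuning.
def build_leti_example(instruction: str, program: str) -> str:
    try:
        exec(compile(program, "<generated>", "exec"), {})
        feedback = "Feedback: the program ran without errors."
    except Exception as err:  # textual feedback beyond a binary label
        feedback = f"Feedback: {type(err).__name__}: {err}"
    return f"Instruction: {instruction}\nProgram:\n{program}\n{feedback}"

print(build_leti_example("print the squares of 1..3",
                         "for i in range(1, 4): print(i ** 2)"))
```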
- PromptBoosting: Black-Box Text Classification with Ten Forward Passes [61.38341243907045]
We describe PromptBoosting, a query-efficient procedure for building a text classifier from a neural language model (LM) without access to the LM's parameters, gradients, or hidden representations.
Experiments show that PromptBoosting achieves state-of-the-art performance in multiple black-box few-shot classification tasks, and matches or outperforms full fine-tuning in both few-shot and standard learning paradigms, while training 10x faster than existing black-box methods.
arXiv Detail & Related papers (2022-12-19T06:04:54Z)
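A simplified sketch of the core idea: treat each prompt as a weak classifier over the LM's output distribution and combine them with AdaBoost-style weights. The `lm_label` helper stands in for one black-box forward pass and is an assumption, not PromptBoosting's exact verbalizer pipeline.

```python
# AdaBoost over prompt-based weak learners; one forward pass per prompt/example.
import math

def lm_label(prompt: str, text: str) -> int:
    raise NotImplementedError  # map the LM's verbalizer logits to {0, 1}

def adaboost(prompts, data, labels):
    w = [1.0 / len(data)] * len(data)
    ensemble = []
    for p in prompts:  # e.g., ten prompts -> ten forward passes per example
        preds = [lm_label(p, x) for x in data]
        err = sum(wi for wi, pr, y in zip(w, preds, labels) if pr != y)
        err = min(max(err, 1e-6), 1 - 1e-6)       # avoid division by zero
        alpha = 0.5 * math.log((1 - err) / err)   # weak learner's vote weight
        ensemble.append((p, alpha))
        # Up-weight misclassified examples, down-weight correct ones.
        w = [wi * math.exp(alpha if pr != y else -alpha)
             for wi, pr, y in zip(w, preds, labels)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble
```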
- Discovering Language Model Behaviors with Model-Written Evaluations [18.24267922379281]
As language models (LMs) scale, they develop many novel behaviors, good and bad, exacerbating the need to evaluate how they behave.
Here, we automatically generate evaluations with LMs.
We generate 154 datasets and discover new cases of inverse scaling where LMs get worse with size.
arXiv Detail & Related papers (2022-12-19T05:13:52Z)
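A minimal sketch of a generate-then-filter recipe in this spirit: one LM writes candidate evaluation items and a second pass keeps only the ones judged on-target. The prompts and the `llm` helper are placeholder assumptions, not the paper's pipeline.

```python
# Generate candidate eval items with an LM, then filter with a second LM pass.
def llm(prompt: str) -> str:
    raise NotImplementedError

def generate_eval_items(behavior: str, n: int = 100) -> list[str]:
    items = []
    for _ in range(n):
        item = llm(f"Write a yes/no question that tests whether a model {behavior}:")
        verdict = llm(
            f"Does this question test whether a model {behavior}? {item}\nYes or no:"
        )
        if verdict.strip().lower().startswith("yes"):
            items.append(item)
    return items
```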
- PAL: Program-aided Language Models [112.94785609781503]
We present Program-Aided Language models (PAL) to understand natural language problems.
PAL offloads the solution step to a programmatic runtime such as a Python interpreter.
We set new state-of-the-art results in all 12 benchmarks.
arXiv Detail & Related papers (2022-11-18T18:56:13Z)
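A minimal sketch of the program-aided pattern: the LM writes Python for a word problem and the interpreter, not the LM, computes the answer. The generated program below is hard-coded for illustration; in PAL it would come from a few-shot-prompted LLM.

```python
# Execute the LM-generated program and read off the `answer` variable.
def run_program(program: str) -> object:
    scope: dict = {}
    exec(program, scope)  # offload the solution step to the Python runtime
    return scope["answer"]

# What an LM might generate for: "Roger has 5 balls and buys 2 cans of
# 3 balls each. How many balls does he have now?"
generated = """
balls = 5
balls += 2 * 3
answer = balls
"""
print(run_program(generated))  # 11
```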
This list is automatically generated from the titles and abstracts of the papers in this site.