In-Context Principle Learning from Mistakes
- URL: http://arxiv.org/abs/2402.05403v2
- Date: Fri, 9 Feb 2024 19:09:52 GMT
- Title: In-Context Principle Learning from Mistakes
- Authors: Tianjun Zhang, Aman Madaan, Luyu Gao, Steven Zheng, Swaroop Mishra,
Yiming Yang, Niket Tandon, Uri Alon
- Abstract summary: In-context learning (ICL) has been the standard method of adapting LLMs to downstream tasks, by learning from a few input-output examples.
We revisit this paradigm, by learning more from the few given input-output examples.
- Score: 75.66979331850364
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In-context learning (ICL, also known as few-shot prompting) has been the
standard method of adapting LLMs to downstream tasks, by learning from a few
input-output examples. Nonetheless, all ICL-based approaches only learn from
correct input-output pairs. In this paper, we revisit this paradigm, by
learning more from the few given input-output examples. We introduce Learning
Principles (LEAP): First, we intentionally induce the model to make mistakes on
these few examples; then we reflect on these mistakes, and learn explicit
task-specific "principles" from them, which help solve similar problems and
avoid common mistakes; finally, we prompt the model to answer unseen test
questions using the original few-shot examples and these learned general
principles. We evaluate LEAP on a wide range of benchmarks, including multi-hop
question answering (Hotpot QA), textual QA (DROP), Big-Bench Hard reasoning,
and math problems (GSM8K and MATH); in all these benchmarks, LEAP improves the
strongest available LLMs such as GPT-3.5-turbo, GPT-4, GPT-4 turbo and
Claude-2.1. For example, LEAP improves over the standard few-shot prompting
using GPT-4 by 7.5% in DROP, and by 3.3% in HotpotQA. Importantly, LEAP does
not require any more input or examples than the standard few-shot prompting
settings.
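To make the LEAP recipe concrete, here is a minimal sketch of the three steps described in the abstract (induce mistakes, distill principles, answer with the original examples plus the principles), assuming an OpenAI-style chat client; the prompt wording, model name, and helper functions are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of the LEAP loop, assuming an OpenAI-style chat client.
# Prompt wording, model name, and helpers are illustrative assumptions,
# not the authors' released implementation.
from openai import OpenAI

client = OpenAI()   # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4"     # any strong chat model

def chat(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def learn_principles(few_shot: list[tuple[str, str]]) -> str:
    """Steps 1-2: induce mistakes on the few-shot examples, then reflect on them."""
    principles = []
    for question, gold in few_shot:
        # Step 1: sample an answer without showing the gold solution;
        # on hard examples this frequently produces a mistake.
        attempt = chat(f"Solve step by step:\n{question}")
        # Step 2: contrast the attempt with the gold answer and extract
        # an explicit, task-specific principle.
        principles.append(chat(
            "A question, a model's attempt, and the correct answer follow.\n"
            f"Question: {question}\nAttempt: {attempt}\nCorrect answer: {gold}\n"
            "State one general principle that would avoid this kind of mistake."
        ))
    return "\n".join(f"- {p}" for p in principles)

def answer_with_leap(few_shot, principles: str, test_question: str) -> str:
    """Step 3: answer unseen questions with the original examples plus the principles."""
    examples = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in few_shot)
    return chat(f"Principles:\n{principles}\n\n{examples}\n\nQ: {test_question}\nA:")
```

As the abstract notes, no extra input is needed: the principles are distilled from the same few-shot examples that are later placed in the prompt.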
Related papers
- Reasoning on Efficient Knowledge Paths: Knowledge Graph Guides Large Language Model for Domain Question Answering [18.94220625114711]
Large language models (LLMs) perform surprisingly well and outperform human experts on many tasks.
This paper integrates and optimizes a pipeline for selecting reasoning paths from knowledge graphs based on LLMs.
We also propose a simple and effective subgraph retrieval method based on chain of thought (CoT) and PageRank.
arXiv Detail & Related papers (2024-04-16T08:28:16Z) - Large Language Models are Contrastive Reasoners [8.427805316635318]
We show how contrastive prompting significantly improves the ability of large language models to perform complex reasoning.
Experiments on various large language models show that zero-shot contrastive prompting improves performance on a range of arithmetic, commonsense, and symbolic reasoning tasks.
Our method not only surpasses zero-shot CoT and few-shot CoT in most arithmetic and commonsense reasoning tasks, but can also be seamlessly integrated with existing prompting methods.
arXiv Detail & Related papers (2024-03-13T03:15:05Z) - The Earth is Flat? Unveiling Factual Errors in Large Language Models [89.94270049334479]
Large Language Models (LLMs) like ChatGPT are used in various applications due to their extensive knowledge from pre-training and fine-tuning.
Despite this, they are prone to generating factual and commonsense errors, raising concerns in critical areas like healthcare, journalism, and education.
We introduce a novel, automatic testing framework, FactChecker, aimed at uncovering factual inaccuracies in LLMs.
arXiv Detail & Related papers (2024-01-01T14:02:27Z) - PPTC Benchmark: Evaluating Large Language Models for PowerPoint Task Completion [96.47420221442397]
We introduce the PowerPoint Task Completion benchmark to assess the ability of Large Language Models to finish multi-turn, multi-modal instructions.
We also propose the PPTX-Match Evaluation System that evaluates if LLMs finish the instruction based on the prediction file rather than the label API sequence.
The results show that GPT-4 outperforms other LLMs with 75.1% accuracy in single-turn dialogue testing but faces challenges in completing entire sessions, achieving just 6% session accuracy.
arXiv Detail & Related papers (2023-11-03T08:06:35Z) - Large Language Model-Aware In-Context Learning for Code Generation [75.68709482932903]
Large language models (LLMs) have shown impressive in-context learning (ICL) ability in code generation.
We propose a novel learning-based selection approach named LAIL (LLM-Aware In-context Learning) for code generation.
arXiv Detail & Related papers (2023-10-15T06:12:58Z) - Large Language Models can Learn Rules [106.40747309894236]
We present Hypotheses-to-Theories (HtT), a framework that learns a rule library for reasoning with large language models (LLMs).
Experiments on relational reasoning, numerical reasoning and concept learning problems show that HtT improves existing prompting methods.
The learned rules are also transferable to different models and to different forms of the same problem.
arXiv Detail & Related papers (2023-10-10T23:07:01Z) - Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following [44.701091969256055]
We present our finding that prepending a Task-Agnostic Prefix Prompt (TAPP) to the input improves the instruction-following ability of various Large Language Models (LLMs) during inference.
We observe that both base LLMs (i.e. not fine-tuned to follow instructions) and instruction-tuned models benefit from TAPP, yielding average improvements of 34.58% and 12.26%, respectively.
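As a rough sketch of this prefix-prepending idea (the prefix text below is a hypothetical placeholder, not the prompt used in the paper):

```python
# Sketch of Task-Agnostic Prefix Prompting: one fixed prefix is prepended to
# every instruction before it is sent to the LLM. The prefix wording below is
# a hypothetical placeholder, not the actual TAPP from the paper.
TAPP = (
    "Below is an instruction, optionally paired with an input. "
    "Write a response that completes the request as accurately as possible.\n\n"
)

def apply_tapp(instruction: str, model_input: str = "") -> str:
    """Build the prompt sent to a base or instruction-tuned LLM."""
    prompt = f"{TAPP}Instruction: {instruction}\n"
    if model_input:
        prompt += f"Input: {model_input}\n"
    return prompt + "Response:"
```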
arXiv Detail & Related papers (2023-02-28T16:06:35Z) - PAL: Program-aided Language Models [112.94785609781503]
We present Program-Aided Language models (PAL), which use an LLM to read natural language problems and generate programs as intermediate reasoning steps.
PAL offloads the solution step to a programmatic runtime such as a Python interpreter.
We set new state-of-the-art results in all 12 benchmarks.
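A minimal sketch of this program-aided pattern, assuming the LLM is prompted to emit Python whose execution produces the final answer (the chat callable and prompt wording are illustrative assumptions):

```python
# Sketch of program-aided reasoning: the model writes Python that computes the
# answer, and the Python runtime (exec here) performs the actual solution step.
# The chat callable and prompt wording are illustrative assumptions.
def solve_with_program(question: str, chat) -> str:
    # Ask the model for plain Python (no prose) that stores the result in `answer`.
    prompt = (
        "Write only Python code (no explanation) that computes the answer to the "
        f"question and stores it in a variable named answer.\n\nQuestion: {question}\n"
    )
    code = chat(prompt)
    namespace: dict = {}
    exec(code, namespace)   # offload the solution step to the Python interpreter
    return str(namespace["answer"])

# Example with a stub standing in for the LLM call:
fake_llm = lambda _prompt: "answer = (23 - 20) + 6"
print(solve_with_program("Roger has 23 apples, eats 20, and buys 6 more. How many does he have?", fake_llm))
# prints: 9
```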
arXiv Detail & Related papers (2022-11-18T18:56:13Z) - Large Language Models are Zero-Shot Reasoners [28.6899375595088]
Chain of thought (CoT) prompting is a technique for eliciting complex multi-step reasoning through step-by-step answer examples.
We show that LLMs are decent zero-shot reasoners by simply adding "Let's think step by step" before each answer.
Experimental results demonstrate that our Zero-shot-CoT, using the same single prompt template, significantly outperforms zero-shot LLM performances.
arXiv Detail & Related papers (2022-05-24T09:22:26Z)
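To make the zero-shot recipe in the last entry concrete, here is a minimal sketch of its two-stage prompting (reasoning extraction, then answer extraction); the chat callable and the answer-extraction wording are assumptions:

```python
# Sketch of Zero-shot-CoT: stage 1 elicits a rationale with the fixed trigger
# "Let's think step by step."; stage 2 extracts the final answer from it.
# The chat callable and the extraction phrase are illustrative assumptions.
def zero_shot_cot(question: str, chat) -> str:
    # Stage 1: reasoning extraction with the single trigger sentence.
    reasoning = chat(f"Q: {question}\nA: Let's think step by step.")
    # Stage 2: answer extraction from the generated rationale.
    return chat(
        f"Q: {question}\nA: Let's think step by step. {reasoning}\n"
        "Therefore, the answer is"
    )
```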
This list is automatically generated from the titles and abstracts of the papers listed on this site.