Large Language Models as Analogical Reasoners
- URL: http://arxiv.org/abs/2310.01714v3
- Date: Sat, 9 Mar 2024 05:54:39 GMT
- Title: Large Language Models as Analogical Reasoners
- Authors: Michihiro Yasunaga, Xinyun Chen, Yujia Li, Panupong Pasupat, Jure
Leskovec, Percy Liang, Ed H. Chi, Denny Zhou
- Abstract summary: Chain-of-thought (CoT) prompting for language models demonstrates impressive performance across reasoning tasks.
We introduce a new prompting approach, analogical prompting, designed to automatically guide the reasoning process of large language models.
- Score: 155.9617224350088
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Chain-of-thought (CoT) prompting for language models demonstrates impressive
performance across reasoning tasks, but typically needs labeled exemplars of
the reasoning process. In this work, we introduce a new prompting approach,
analogical prompting, designed to automatically guide the reasoning process of
large language models. Inspired by analogical reasoning, a cognitive process in
which humans draw from relevant past experiences to tackle new problems, our
approach prompts language models to self-generate relevant exemplars or
knowledge in the context, before proceeding to solve the given problem. This
method presents several advantages: it obviates the need for labeling or
retrieving exemplars, offering generality and convenience; it can also tailor
the generated exemplars and knowledge to each problem, offering adaptability.
Experimental results show that our approach outperforms 0-shot CoT and manual
few-shot CoT in a variety of reasoning tasks, including math problem solving in
GSM8K and MATH, code generation in Codeforces, and other reasoning tasks in
BIG-Bench.
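For concreteness, here is a minimal sketch of the self-generate-then-solve prompt the abstract describes, assuming only a generic text-completion function `llm` (a hypothetical stand-in for any model API); the instruction wording is an approximation, not the paper's exact template:

```python
from typing import Callable

def analogical_prompt(problem: str) -> str:
    """Build an analogical-prompting query: ask the model to first
    recall relevant exemplars, then solve the target problem.
    The instruction wording approximates the paper's description."""
    return (
        f"# Problem:\n{problem}\n\n"
        "# Instructions:\n"
        "1. Recall three relevant and distinct problems. For each,\n"
        "   describe the problem and explain its solution.\n"
        "2. Then solve the initial problem, reasoning step by step.\n"
    )

def solve(problem: str, llm: Callable[[str], str]) -> str:
    # `llm` is a hypothetical text-completion stand-in; a single call
    # both self-generates exemplars and solves, so no labeled or
    # retrieved demonstrations are needed.
    return llm(analogical_prompt(problem))
```

Because the exemplars are generated in-context for each problem, no labeled demonstrations or retrieval corpus are required.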
Related papers
- Pattern-Aware Chain-of-Thought Prompting in Large Language Models [26.641713417293538]
Chain-of-thought (CoT) prompting can guide language models to engage in complex multi-step reasoning.
We show that the underlying reasoning patterns play a more crucial role in such tasks.
We propose Pattern-Aware CoT, a prompting method that considers the diversity of demonstration patterns.
arXiv Detail & Related papers (2024-04-23T07:50:00Z)
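As a rough illustration of the pattern-diversity idea above, a hedged sketch; `pattern_of` is a hypothetical feature extractor (e.g., number of reasoning steps), not the paper's actual definition of a pattern:

```python
from typing import Callable, Dict, List

def pick_pattern_diverse_demos(
    demos: List[str],
    pattern_of: Callable[[str], str],
    k: int,
) -> List[str]:
    """Group candidate demonstrations by a reasoning-pattern key and
    draw from distinct groups, so the prompt covers diverse patterns.
    This is a simplifying sketch of the summary above, not the
    paper's algorithm."""
    by_pattern: Dict[str, List[str]] = {}
    for d in demos:
        by_pattern.setdefault(pattern_of(d), []).append(d)
    picked: List[str] = []
    # Round-robin over patterns until k demonstrations are chosen.
    while len(picked) < k and any(by_pattern.values()):
        for group in by_pattern.values():
            if group and len(picked) < k:
                picked.append(group.pop(0))
    return picked
```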
- Boosting of Thoughts: Trial-and-Error Problem Solving with Large Language Models [48.43678591317425]
Boosting of Thoughts (BoT) is an automated prompting framework for problem solving with Large Language Models.
We show that BoT consistently achieves higher or comparable problem-solving rates than other advanced prompting approaches.
arXiv Detail & Related papers (2024-02-17T00:13:36Z)
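A loose sketch of the trial-and-error loop suggested above, assuming a generic `llm` completion function; the real BoT framework explores and aggregates trees of thoughts, so this linear critique-and-retry loop and its prompt wording are simplifying assumptions:

```python
from typing import Callable

def boosting_of_thoughts(
    problem: str,
    llm: Callable[[str], str],
    iterations: int = 3,
) -> str:
    """Try a solution, ask the model to critique it, and fold the
    feedback into the next attempt's prompt."""
    experience = ""
    answer = ""
    for _ in range(iterations):
        answer = llm(
            f"Problem: {problem}\n"
            f"Lessons from earlier attempts:\n{experience}\n"
            "Solve the problem step by step."
        )
        # Self-critique becomes accumulated 'experience' for the next try.
        experience += llm(
            f"Problem: {problem}\nAttempt: {answer}\n"
            "Point out any errors in this attempt and advise how to fix them."
        ) + "\n"
    return answer
```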
- OLaLa: Ontology Matching with Large Language Models [2.211868306499727]
Ontology Matching is a challenging task where information in natural language is one of the most important signals to process.
With the rise of Large Language Models, it is possible to incorporate this knowledge in a better way into the matching pipeline.
We show that with only a handful of examples and a well-designed prompt, it is possible to achieve results on par with supervised matching systems.
arXiv Detail & Related papers (2023-11-07T09:34:20Z)
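As an illustration of that few-shot framing, a hypothetical matching prompt; OLaLa's actual prompt design and candidate-generation pipeline are more involved, and all names and wording here are illustrative:

```python
from typing import List

def ontology_match_prompt(source: str, candidates: List[str],
                          worked_examples: str) -> str:
    """A handful of worked matches followed by the query concept and
    candidate targets; only the few-shot framing is taken from the
    summary above."""
    listing = "\n".join(f"- {c}" for c in candidates)
    return (
        f"{worked_examples}\n\n"
        f"Source concept: {source}\n"
        f"Candidate target concepts:\n{listing}\n"
        "Which candidate, if any, denotes the same concept as the "
        "source? Answer with one candidate or 'none'."
    )
```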
- RetICL: Sequential Retrieval of In-Context Examples with Reinforcement Learning [53.52699766206808]
We propose Retrieval for In-Context Learning (RetICL), a learnable method for modeling and optimally selecting examples sequentially for in-context learning.
We evaluate RetICL on math word problem solving and scientific question answering tasks and show that it consistently outperforms or matches learnable baselines.
arXiv Detail & Related papers (2023-05-23T20:15:56Z)
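A greedy sketch of sequential example selection, where each pick is conditioned on the examples already chosen; RetICL itself learns the selection policy with reinforcement learning, so `score` below is a hypothetical stand-in for that learned model:

```python
from typing import Callable, List

def select_examples_sequentially(
    question: str,
    pool: List[str],
    k: int,
    score: Callable[[str, List[str], str], float],
) -> List[str]:
    """Pick k in-context examples one at a time, scoring each
    candidate against the question and the examples already chosen."""
    chosen: List[str] = []
    remaining = list(pool)
    for _ in range(min(k, len(remaining))):
        best = max(remaining, key=lambda ex: score(question, chosen, ex))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```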
- Self-Polish: Enhance Reasoning in Large Language Models via Problem Refinement [50.62461749446111]
Self-Polish (SP) is a novel method that facilitates the model's reasoning by guiding it to progressively refine the given problems to be more comprehensible and solvable.
SP is orthogonal to all other prompting methods on the answer/reasoning side, such as CoT, allowing for seamless integration with state-of-the-art techniques for further improvement.
arXiv Detail & Related papers (2023-05-23T19:58:30Z)
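A minimal sketch of progressive problem refinement, assuming a generic `llm` function; the rewrite instruction and fixed round count are assumptions, and the paper's actual refinement and stopping criteria may differ:

```python
from typing import Callable

def self_polish(problem: str, llm: Callable[[str], str],
                rounds: int = 3) -> str:
    """Repeatedly ask the model to rewrite the problem more clearly,
    then solve the refined version."""
    refined = problem
    for _ in range(rounds):
        refined = llm(
            "Rewrite the following problem so it is clearer, more "
            "complete, and easier to solve, without changing its "
            f"answer:\n{refined}"
        )
    # Solve the refined problem (here with a zero-shot CoT trigger).
    return llm(f"{refined}\nLet's think step by step.")
```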
- Synthetic Prompting: Generating Chain-of-Thought Demonstrations for Large Language Models [121.54462976635743]
Large language models can perform various reasoning tasks by using chain-of-thought prompting, which guides them to find answers through step-by-step demonstrations.
We introduce Synthetic prompting, a method that leverages a few handcrafted examples to prompt the model to generate more examples by itself.
We evaluate our method on numerical, symbolic, and algorithmic reasoning tasks, and show that it outperforms existing prompting techniques.
arXiv Detail & Related papers (2023-02-01T17:33:12Z)
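A minimal sketch of the seed-then-generate idea, assuming a generic `llm` function; the actual method alternates question synthesis with answer generation and filters candidates for quality, a selection step omitted here:

```python
from typing import Callable, List

def synthesize_demonstrations(
    seeds: List[str],
    llm: Callable[[str], str],
    n_new: int,
) -> List[str]:
    """Show a few handcrafted worked examples and ask the model to
    produce new ones in the same format; the prompt wording is an
    assumption."""
    seed_block = "\n\n".join(seeds)
    demos = list(seeds)
    for _ in range(n_new):
        demos.append(llm(
            f"{seed_block}\n\nWrite one more example in the same "
            "format: a new problem followed by its step-by-step "
            "solution and final answer."
        ))
    return demos
```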
- Text and Patterns: For Effective Chain of Thought, It Takes Two to Tango [11.344587937052697]
This work takes preliminary steps toward a deeper understanding of reasoning mechanisms in large language models.
Our work centers around querying the model while controlling for all but one of the components in a prompt: symbols, patterns, and text.
We posit that text imbues patterns with commonsense knowledge and meaning.
arXiv Detail & Related papers (2022-09-16T02:54:00Z)
- Chain of Thought Prompting Elicits Reasoning in Large Language Models [56.811278668446825]
This paper explores the ability of language models to generate a coherent chain of thought.
Experiments show that inducing a chain of thought via prompting can enable sufficiently large language models to better perform reasoning tasks.
arXiv Detail & Related papers (2022-01-28T02:33:07Z)
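For reference, a minimal few-shot CoT prompt in the same stand-in style; the worked example below is the well-known tennis-ball demonstration from this line of work, and real prompts typically chain several such exemplars:

```python
from typing import Callable

# One handcrafted worked example; production prompts use several.
COT_DEMO = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls "
    "each. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n"
)

def cot_answer(question: str, llm: Callable[[str], str]) -> str:
    # Prepending a worked reasoning chain nudges the model to emit
    # its own chain before the final answer. `llm` is a hypothetical
    # text-completion stand-in.
    return llm(f"{COT_DEMO}\nQ: {question}\nA:")
```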
- Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm [0.0]
We discuss methods of prompt programming, emphasizing the usefulness of considering prompts through the lens of natural language.
We introduce the idea of a metaprompt that seeds the model to generate its own natural language prompts for a range of tasks.
arXiv Detail & Related papers (2021-02-15T05:27:55Z)
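A minimal sketch of the metaprompt idea, assuming a generic `llm` function; the seed wording is illustrative, not the paper's:

```python
from typing import Callable

def metaprompt_then_solve(task: str, llm: Callable[[str], str]) -> str:
    """First ask the model to write a task-specific prompt, then use
    that generated prompt on the task itself."""
    generated_prompt = llm(
        "Write clear instructions that would help a language model "
        f"perform the following task well:\n{task}"
    )
    return llm(f"{generated_prompt}\n\nTask: {task}")
```

In this two-call pattern, the first completion writes the prompt and the second applies it, matching the seed-the-model-to-generate-its-own-prompts description above.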