Synthetic Prompting: Generating Chain-of-Thought Demonstrations for
Large Language Models
- URL: http://arxiv.org/abs/2302.00618v1
- Date: Wed, 1 Feb 2023 17:33:12 GMT
- Title: Synthetic Prompting: Generating Chain-of-Thought Demonstrations for
Large Language Models
- Authors: Zhihong Shao, Yeyun Gong, Yelong Shen, Minlie Huang, Nan Duan, Weizhu
Chen
- Abstract summary: Large language models can perform various reasoning tasks by using chain-of-thought prompting, which guides them to find answers through step-by-step demonstrations.
We introduce Synthetic prompting, a method that leverages a few handcrafted examples to prompt the model to generate more examples by itself.
We evaluate our method on numerical, symbolic, and algorithmic reasoning tasks, and show that it outperforms existing prompting techniques.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models can perform various reasoning tasks by using
chain-of-thought prompting, which guides them to find answers through
step-by-step demonstrations. However, the quality of the prompts depends on the
demonstrations given to the models, and creating many of them by hand is
costly. We introduce Synthetic prompting, a method that leverages a few
handcrafted examples to prompt the model to generate more examples by itself,
and selects effective demonstrations to elicit better reasoning. Our method
alternates between a backward and forward process to generate new examples. The
backward process generates a question that matches a sampled reasoning chain, so
that the question is solvable and clear. The forward process produces a more
detailed reasoning chain for the question, improving the quality of the
example. We evaluate our method on numerical, symbolic, and algorithmic
reasoning tasks, and show that it outperforms existing prompting techniques.
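To make the backward-forward alternation concrete, below is a minimal sketch of the loop the abstract describes. It assumes a generic `llm(prompt)` completion function and free-form prompt templates; none of these names or templates come from the paper itself.

```python
import random

def llm(prompt: str) -> str:
    """Placeholder for a large-language-model completion call (assumed)."""
    raise NotImplementedError

def synthesize_examples(seed_examples, num_new):
    """Grow a pool of (question, reasoning chain) demonstrations from a few
    handcrafted seed examples, alternating backward and forward steps."""
    pool = list(seed_examples)
    for _ in range(num_new):
        # Backward process: sample a reasoning chain from the pool and ask
        # the model for a question that matches it, so the synthesized
        # question is solvable and clearly stated.
        _, chain = random.choice(pool)
        question = llm(
            "Write a clear, self-contained question whose step-by-step "
            f"solution is:\n{chain}\nQuestion:"
        )
        # Forward process: re-solve the synthesized question to obtain a
        # more detailed reasoning chain, improving the example's quality.
        detailed_chain = llm(
            "Answer the question with detailed step-by-step reasoning.\n"
            f"Question: {question}\nReasoning:"
        )
        pool.append((question, detailed_chain))
    return pool
```

The paper then selects effective demonstrations from the synthesized pool to build the final prompt; that selection step is omitted from this sketch.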
Related papers
- Self-Harmonized Chain of Thought [8.540320749424172]
Chain-of-Thought (CoT) prompting reveals that large language models are capable of performing complex reasoning via intermediate steps.
The proposed method, ECHO, demonstrates the best overall performance across three reasoning domains.
arXiv Detail & Related papers (2024-09-06T06:57:04Z)
- Pattern-Aware Chain-of-Thought Prompting in Large Language Models [26.641713417293538]
Chain-of-thought (CoT) prompting can guide language models to engage in complex multi-step reasoning.
We show that the underlying reasoning patterns play a more crucial role in such tasks.
We propose Pattern-Aware CoT, a prompting method that considers the diversity of demonstration patterns.
arXiv Detail & Related papers (2024-04-23T07:50:00Z)
- Soft-Prompting with Graph-of-Thought for Multi-modal Representation Learning [45.517215214938844]
The chain-of-thought technique has been well received in multi-modal tasks.
We propose a novel Aggregation-Graph-of-Thought (AGoT) mechanism for soft-prompt tuning in multi-modal representation learning.
arXiv Detail & Related papers (2024-04-06T07:39:44Z)
- Large Language Models as Analogical Reasoners [155.9617224350088]
Chain-of-thought (CoT) prompting for language models demonstrates impressive performance across reasoning tasks.
We introduce a new prompting approach, analogical prompting, designed to automatically guide the reasoning process of large language models.
arXiv Detail & Related papers (2023-10-03T00:57:26Z)
- RetICL: Sequential Retrieval of In-Context Examples with Reinforcement Learning [53.52699766206808]
We propose Retrieval for In-Context Learning (RetICL), a learnable method for modeling and optimally selecting examples sequentially for in-context learning.
We evaluate RetICL on math word problem solving and scientific question answering tasks and show that it consistently outperforms or matches learnable baselines.
arXiv Detail & Related papers (2023-05-23T20:15:56Z)
- Enhancing Chain-of-Thoughts Prompting with Iterative Bootstrapping in Large Language Models [81.01397924280612]
Large language models (LLMs) can achieve highly effective performance on various reasoning tasks by incorporating step-by-step chain-of-thought (CoT) prompting as demonstrations.
We introduce Iter-CoT (Iterative bootstrapping in Chain-of-Thoughts Prompting), an iterative bootstrapping approach for selecting exemplars and generating reasoning chains.
arXiv Detail & Related papers (2023-04-23T13:54:39Z)
- Automatic Chain of Thought Prompting in Large Language Models [20.54898481696753]
Large language models (LLMs) can perform complex reasoning by generating intermediate reasoning steps.
"Let's think step by step" prompt generates reasoning chains for demonstrations one by one.
We propose an automatic CoT prompting method: Auto-CoT.
arXiv Detail & Related papers (2022-10-07T12:28:21Z)
- Learning to Reason With Relational Abstractions [65.89553417442049]
We study how to build stronger reasoning capability in language models using the idea of relational abstractions.
We find that models supplied with such sequences as prompts can solve tasks with significantly higher accuracy.
arXiv Detail & Related papers (2022-10-06T00:27:50Z)
- Complexity-Based Prompting for Multi-Step Reasoning [72.0057198610614]
We study the task of prompting large-scale language models to perform multi-step reasoning.
A central question is which reasoning examples make the most effective prompts.
We propose complexity-based prompting, a simple and effective example selection scheme for multi-step reasoning (a minimal sketch follows this list).
arXiv Detail & Related papers (2022-10-03T05:33:27Z)
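As a concrete illustration of the last entry above, complexity-based prompting prefers demonstrations whose reasoning chains contain the most steps. A minimal sketch, assuming each chain is a newline-separated string of reasoning steps; the function name and data layout are illustrative, not taken from the paper:

```python
def select_by_complexity(examples, k):
    """Return the k (question, chain) demonstrations whose chains have the
    most reasoning steps, a simple proxy for reasoning complexity."""
    def num_steps(chain: str) -> int:
        # Count non-empty lines; each line is assumed to be one step.
        return sum(1 for line in chain.splitlines() if line.strip())
    return sorted(examples, key=lambda ex: num_steps(ex[1]), reverse=True)[:k]
```

For example, `select_by_complexity(pool, k=8)` would keep the eight most complex demonstrations for the prompt.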