Generalizable Chain-of-Thought Prompting in Mixed-task Scenarios with
Large Language Models
- URL: http://arxiv.org/abs/2310.06692v3
- Date: Tue, 20 Feb 2024 15:27:20 GMT
- Title: Generalizable Chain-of-Thought Prompting in Mixed-task Scenarios with
Large Language Models
- Authors: Anni Zou, Zhuosheng Zhang, Hai Zhao, Xiangru Tang
- Abstract summary: Large language models (LLMs) have unveiled remarkable reasoning capabilities by exploiting chain-of-thought (CoT) prompting.
We propose GeM-CoT, a Generalizable CoT prompting mechanism in Mixed-task scenarios where the type of input questions is unknown.
With this technical design, GeM-CoT simultaneously enjoys superior generalization capabilities and remarkable performance on 10 public reasoning tasks and 23 BBH tasks.
- Score: 68.05046964022844
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) have unveiled remarkable reasoning capabilities
by exploiting chain-of-thought (CoT) prompting, which generates intermediate
reasoning chains to serve as the rationale for deriving the answer. However,
current CoT methods either simply employ general prompts such as "Let's think
step by step", or rely heavily on pre-defined task-specific demonstrations to
attain preferable performance, thereby engendering an inescapable gap between
performance and generalization. To bridge this gap, we propose GeM-CoT, a
Generalizable CoT prompting mechanism in Mixed-task scenarios where the type of
input questions is unknown. GeM-CoT first categorizes the question type and
subsequently samples or constructs demonstrations from the corresponding data
pool in an automatic manner. With this technical design, GeM-CoT
simultaneously enjoys superior generalization capabilities and remarkable
performance on 10 public reasoning tasks and 23 BBH tasks.
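As a concrete illustration of the pipeline described above, here is a minimal sketch of the route-then-demonstrate idea: classify the incoming question, answer with few-shot CoT when a matching demonstration pool has enough data, and otherwise fall back to the general zero-shot prompt while growing the pool. The pool layout and the helpers (`classify_type`, `llm`) are illustrative assumptions, not the authors' implementation.

```python
import random

# Minimal sketch of the mixed-task CoT pipeline from the abstract:
# route the question to a type, few-shot with pooled demonstrations on
# a match, otherwise zero-shot CoT and grow the pool. Pool layout and
# helper names are illustrative assumptions, not the paper's code.

DEMO_POOLS: dict[str, list[tuple[str, str, str]]] = {
    "arithmetic": [],    # (question, rationale, answer) triples
    "commonsense": [],
}

def classify_type(question: str) -> str | None:
    """Stand-in for the routing step: map a question of unknown type
    to a known pool, or return None when nothing matches."""
    if any(ch.isdigit() for ch in question):
        return "arithmetic"
    return None

def answer(question: str, llm, k: int = 4) -> str:
    qtype = classify_type(question)
    pool = DEMO_POOLS.get(qtype, [])
    if len(pool) >= k:
        # Matched a known type with enough data: build a few-shot CoT
        # prompt from sampled demonstrations.
        demos = random.sample(pool, k)
        prompt = "\n\n".join(
            f"Q: {q}\nA: {r} The answer is {a}." for q, r, a in demos
        ) + f"\n\nQ: {question}\nA:"
        return llm(prompt)
    # Unknown or data-poor type: use the general zero-shot CoT prompt,
    # then cache the rationale so the pool can grow over time.
    rationale = llm(f"Q: {question}\nA: Let's think step by step.")
    if qtype is not None:
        DEMO_POOLS[qtype].append((question, rationale, ""))
    return rationale
```

The actual routing and demonstration construction in GeM-CoT are more elaborate, but this control flow shows why the method can handle mixed-task inputs without knowing question types in advance.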
Related papers
- Instance-adaptive Zero-shot Chain-of-Thought Prompting [32.700073951068575]
Zero-shot Chain-of-Thought (CoT) prompting emerges as a simple and effective strategy for enhancing the performance of large language models (LLMs) in real-world reasoning tasks.
This work introduces an instance-adaptive prompting algorithm as an alternative zero-shot CoT reasoning scheme by adaptively differentiating good and bad prompts.
arXiv Detail & Related papers (2024-09-30T16:00:34Z)
- Chain of Thoughtlessness? An Analysis of CoT in Planning [17.329365493094542]
Large language model (LLM) performance on reasoning problems typically does not generalize out of distribution.
This paper presents a case study of chain of thought on problems from Blocksworld, a classical planning domain.
We find meaningful performance improvements from chain of thought prompts only when those prompts are exceedingly specific to their problem class.
arXiv Detail & Related papers (2024-05-08T02:48:28Z)
- Pattern-Aware Chain-of-Thought Prompting in Large Language Models [26.641713417293538]
Chain-of-thought (CoT) prompting can guide language models to engage in complex multi-step reasoning.
We show that the underlying reasoning patterns play a more crucial role in such tasks.
We propose Pattern-Aware CoT, a prompting method that considers the diversity of demonstration patterns.
arXiv Detail & Related papers (2024-04-23T07:50:00Z)
- Large Language Models as Analogical Reasoners [155.9617224350088]
Chain-of-thought (CoT) prompting for language models demonstrates impressive performance across reasoning tasks.
We introduce a new prompting approach, analogical prompting, designed to automatically guide the reasoning process of large language models; a prompt-template sketch appears after this list.
arXiv Detail & Related papers (2023-10-03T00:57:26Z)
- Analyzing Chain-of-Thought Prompting in Large Language Models via Gradient-based Feature Attributions [10.621564997491808]
Chain-of-thought (CoT) prompting has been shown to empirically improve the accuracy of large language models.
We investigate whether CoT prompting affects the relative importance LLMs assign to particular input tokens.
Our results indicate that while CoT prompting does not increase the magnitude of saliency scores attributed to semantically relevant tokens in the prompt, it increases the robustness of saliency scores to question perturbations and variations in model output.
arXiv Detail & Related papers (2023-07-25T08:51:30Z)
- Self-regulating Prompts: Foundational Model Adaptation without Forgetting [112.66832145320434]
We introduce a self-regularization framework for prompting called PromptSRC.
PromptSRC guides the prompts to optimize for both task-specific and task-agnostic general representations.
arXiv Detail & Related papers (2023-07-13T17:59:35Z)
- Enhancing Chain-of-Thoughts Prompting with Iterative Bootstrapping in Large Language Models [81.01397924280612]
Large language models (LLMs) can achieve highly effective performance on various reasoning tasks by incorporating step-by-step chain-of-thought (CoT) prompting as demonstrations.
We introduce Iter-CoT (Iterative bootstrapping in Chain-of-Thoughts Prompting), an iterative bootstrapping approach for selecting exemplars and generating reasoning chains.
arXiv Detail & Related papers (2023-04-23T13:54:39Z)
- Improving Task Generalization via Unified Schema Prompt [87.31158568180514]
Unified Prompt is a flexible prompting method that automatically customizes the learnable prompts for each task according to the task's input schema.
It models the knowledge shared between tasks while preserving the characteristics of each task's schema.
The framework achieves strong zero-shot and few-shot performance on 16 unseen downstream tasks from 8 task types.
arXiv Detail & Related papers (2022-08-05T15:26:36Z)
- Reasoning over Hybrid Chain for Table-and-Text Open Domain QA [69.8436986668218]
We propose a ChAin-centric Reasoning and Pre-training framework (CARP).
CARP utilizes a hybrid chain to model the explicit intermediate reasoning process across table and text for question answering.
We also propose a novel chain-centric pre-training method to enhance the pre-trained model's ability to identify the cross-modality reasoning process.
arXiv Detail & Related papers (2022-01-15T16:11:55Z)
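As referenced in the analogical prompting entry above, here is a minimal prompt-template sketch of the self-generated-exemplar idea. The wording is an illustrative paraphrase of the technique, not the paper's exact prompt, and `llm` stands for any prompt-in, text-out completion function.

```python
# Sketch of analogical prompting: instead of supplying hand-picked
# demonstrations, the model is asked to recall relevant exemplars
# itself before solving the target problem. The template wording is a
# paraphrase of the idea, not the paper's exact prompt.

ANALOGICAL_TEMPLATE = """\
Problem: {question}

Instructions:
1. Recall three relevant and distinct example problems. For each one,
   state the problem and walk through its solution.
2. Then solve the initial problem, reasoning step by step.
"""

def analogical_answer(question: str, llm) -> str:
    """Answer `question` via self-generated exemplars."""
    return llm(ANALOGICAL_TEMPLATE.format(question=question))
```

Relative to fixed demonstration pools, the appeal is that no per-task exemplars need to be curated; the trade-off is that exemplar quality now depends entirely on the model itself.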