Beyond Chain-of-Thought: A Survey of Chain-of-X Paradigms for LLMs
- URL: http://arxiv.org/abs/2404.15676v1
- Date: Wed, 24 Apr 2024 06:12:00 GMT
- Title: Beyond Chain-of-Thought: A Survey of Chain-of-X Paradigms for LLMs
- Authors: Yu Xia, Rui Wang, Xu Liu, Mingyan Li, Tong Yu, Xiang Chen, Julian McAuley, Shuai Li
- Abstract summary: Chain-of-Thought (CoT) has been a widely adopted prompting method, eliciting impressive reasoning abilities of Large Language Models (LLMs).
Inspired by the sequential thought structure of CoT, a number of Chain-of-X (CoX) methods have been developed to address various challenges across diverse domains and tasks involving LLMs.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Chain-of-Thought (CoT) has been a widely adopted prompting method, eliciting impressive reasoning abilities of Large Language Models (LLMs). Inspired by the sequential thought structure of CoT, a number of Chain-of-X (CoX) methods have been developed to address various challenges across diverse domains and tasks involving LLMs. In this paper, we provide a comprehensive survey of Chain-of-X methods for LLMs in different contexts. Specifically, we categorize them by taxonomies of nodes, i.e., the X in CoX, and application tasks. We also discuss the findings and implications of existing CoX methods, as well as potential future directions. Our survey aims to serve as a detailed and up-to-date resource for researchers seeking to apply the idea of CoT to broader scenarios.
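The survey builds on the core idea of CoT prompting: instead of asking a model for an answer directly, the prompt elicits intermediate reasoning steps before the final answer. A minimal sketch of the two prompt styles (model/API calls omitted; the question text and "Let's think step by step" cue are illustrative, not taken from the paper):

```python
# Contrast a direct prompt with a zero-shot Chain-of-Thought prompt.
# Only the prompt strings are constructed here; no model is called.

QUESTION = "A pen costs $2 and a notebook costs $3. What do 2 pens and 3 notebooks cost?"

def direct_prompt(question: str) -> str:
    """Standard prompting: ask for the answer alone."""
    return f"Q: {question}\nA:"

def cot_prompt(question: str) -> str:
    """CoT prompting: append a cue that elicits step-by-step reasoning."""
    return f"Q: {question}\nA: Let's think step by step."

print(direct_prompt(QUESTION))
print(cot_prompt(QUESTION))
```

Chain-of-X methods generalize this pattern by replacing the chained thoughts (the X) with other node types, such as feedback, subtasks, or retrieved documents.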
Related papers
- ChainLM: Empowering Large Language Models with Improved Chain-of-Thought Prompting
Chain-of-Thought (CoT) prompting can enhance the reasoning capabilities of large language models (LLMs). However, existing CoT approaches usually focus on simpler reasoning tasks, resulting in low-quality and inconsistent CoT prompts.
We introduce CoTGenius, a novel framework designed for the automatic generation of superior CoT prompts.
arXiv Detail & Related papers (2024-03-21T11:34:26Z) - ERA-CoT: Improving Chain-of-Thought through Entity Relationship Analysis
Large language models (LLMs) have achieved commendable accomplishments in various natural language processing tasks. However, they still face challenges arising from implicit relationships that demand multi-step reasoning.
We propose a novel approach ERA-CoT, which aids LLMs in understanding context by capturing relationships between entities.
arXiv Detail & Related papers (2024-03-11T17:18:53Z) - Benchmarking Large Language Models on Controllable Generation under Diversified Instructions
Large language models (LLMs) have exhibited impressive instruction-following capabilities.
It is still unclear whether and to what extent they can respond to explicit constraints that might be entailed in various instructions.
We propose a new benchmark CoDI-Eval to evaluate LLMs' responses to instructions with various constraints.
arXiv Detail & Related papers (2024-01-01T07:35:31Z) - Plan, Verify and Switch: Integrated Reasoning with Diverse X-of-Thoughts [65.15322403136238]
We propose XoT, an integrated problem-solving framework that prompts LLMs with diverse reasoning thoughts. For each question, XoT always begins by selecting the most suitable method and then executes each method iteratively.
Within each iteration, XoT actively checks the validity of the generated answer and incorporates the feedback from external executors.
arXiv Detail & Related papers (2023-10-23T07:02:20Z) - CoF-CoT: Enhancing Large Language Models with Coarse-to-Fine Chain-of-Thought Prompting for Multi-domain NLU Tasks
Chain-of-Thought prompting is popular in reasoning tasks, but its application to Natural Language Understanding (NLU) is under-explored.
Motivated by the multi-step reasoning of Large Language Models (LLMs), we propose the Coarse-to-Fine Chain-of-Thought (CoF-CoT) approach.
arXiv Detail & Related papers (2023-10-23T06:54:51Z) - Towards Better Chain-of-Thought Prompting Strategies: A Survey
Chain-of-Thought (CoT) shows impressive strength when used as a prompting strategy for large language models (LLMs). In recent years, the prominent effect of CoT prompting has attracted emerging research.
This survey could provide an overall reference on related research.
arXiv Detail & Related papers (2023-10-08T01:16:55Z) - Recursion of Thought: A Divide-and-Conquer Approach to Multi-Context Reasoning with Language Models
We propose a new inference framework called Recursion of Thought (RoT).
RoT introduces several special tokens that the models can output to trigger context-related operations.
Experiments with multiple architectures including GPT-3 show that RoT dramatically improves LMs' inference capability to solve problems.
arXiv Detail & Related papers (2023-06-12T06:34:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.