The Art of SOCRATIC QUESTIONING: Recursive Thinking with Large Language
Models
- URL: http://arxiv.org/abs/2305.14999v2
- Date: Thu, 2 Nov 2023 02:40:48 GMT
- Title: The Art of SOCRATIC QUESTIONING: Recursive Thinking with Large Language
Models
- Authors: Jingyuan Qi, Zhiyang Xu, Ying Shen, Minqian Liu, Di Jin, Qifan Wang,
Lifu Huang
- Abstract summary: Chain-of-Thought (CoT) prompting enables large language models to solve complex reasoning problems by generating intermediate steps.
We propose SOCRATIC QUESTIONING, a divide-and-conquer style algorithm that mimics the recursive thinking process.
- Score: 45.01562498702836
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Chain-of-Thought (CoT) prompting enables large language models to solve
complex reasoning problems by generating intermediate steps. However, confined
by its inherent single-pass and sequential generation process, CoT heavily
relies on the initial decisions, causing errors in early steps to accumulate
and impact the final answers. In contrast, humans adopt recursive thinking when
tackling complex reasoning problems, i.e., iteratively breaking the original
problem into approachable sub-problems and aggregating their answers to resolve
the original one. Inspired by the human cognitive process, we propose SOCRATIC
QUESTIONING, a divide-and-conquer style algorithm that mimics the recursive
thinking process. Specifically, SOCRATIC QUESTIONING leverages large language
models to raise and answer sub-questions until collecting enough information to
tackle the original question. Unlike CoT, SOCRATIC QUESTIONING explicitly
navigates the thinking space, stimulates effective recursive thinking, and is
more robust to errors in the thinking process. Extensive experiments on
several complex reasoning tasks, including MMLU, MATH, LogiQA, and visual
question answering, demonstrate significant performance improvements over
state-of-the-art prompting methods such as CoT and Tree-of-Thought. The
qualitative analysis clearly shows that the intermediate reasoning steps
elicited by SOCRATIC QUESTIONING resemble how humans recursively think
through complex reasoning problems.
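
To make the recursive procedure described in the abstract concrete, below is a minimal Python sketch of a divide-and-conquer loop in that spirit. It is not the authors' implementation: the helper llm is a hypothetical placeholder for a call to any large language model, and the prompts, the answerability check, and the recursion depth limit are illustrative assumptions.

def llm(prompt):
    """Hypothetical placeholder for a call to a large language model."""
    raise NotImplementedError("wire this to an LLM API of your choice")


def socratic_questioning(question, hints=None, max_depth=3):
    """Recursively raise and answer sub-questions until enough information
    has been collected to answer the original question."""
    hints = hints or []

    # Base case: the recursion budget is exhausted, or the model judges the
    # question answerable with the hints collected so far.
    if max_depth == 0 or llm(
        "Can the question be answered directly given these hints?\n"
        f"Question: {question}\nHints: {hints}\nReply yes or no."
    ).strip().lower().startswith("yes"):
        return llm(f"Question: {question}\nHints: {hints}\nAnswer:")

    # Recursive case: ask the model to propose sub-questions, solve each one
    # recursively, and add the sub-answers to the hint pool.
    sub_questions = llm(f"List the sub-questions needed to answer: {question}")
    for sub in filter(None, (s.strip() for s in sub_questions.splitlines())):
        hints.append(f"{sub} -> {socratic_questioning(sub, None, max_depth - 1)}")

    return llm(f"Question: {question}\nHints: {hints}\nAnswer:")

The explicit hint pool is what distinguishes this style of navigating the thinking space from a single-pass CoT trace: an erroneous early step only affects one branch of the recursion rather than everything generated after it.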
Related papers
- Supervised Chain of Thought [5.389461633686935]
Chain of Thought (CoT) prompting offers a promising approach to solving complex reasoning tasks.
The one-prompt-for-all approach poses significant challenges for models to generate the correct reasoning steps.
We show how task-specific supervision is essential for navigating the prompt space accurately and achieving optimal performance.
arXiv Detail & Related papers (2024-10-18T06:25:27Z) - Advancing Algorithmic Approaches to Probabilistic Argumentation under the Constellation Approach [0.0]
We develop an algorithm for the complex task of computing the probability of a set of arguments being a complete extension.
An experimental evaluation shows the promise of our approach.
arXiv Detail & Related papers (2024-07-06T12:08:38Z) - Chain of Thoughtlessness? An Analysis of CoT in Planning [17.329365493094542]
Large language model (LLM) performance on reasoning problems typically does not generalize out of distribution.
This paper presents a case study of chain of thought on problems from Blocksworld, a classical planning domain.
We find meaningful performance improvements from chain of thought prompts when those prompts are exceedingly specific to their problem class.
arXiv Detail & Related papers (2024-05-08T02:48:28Z) - Boosting of Thoughts: Trial-and-Error Problem Solving with Large
Language Models [48.43678591317425]
Boosting of Thoughts (BoT) is an automated prompting framework for problem solving with Large Language Models.
We show that BoT consistently achieves higher or comparable problem-solving rates than other advanced prompting approaches.
arXiv Detail & Related papers (2024-02-17T00:13:36Z) - Recursion of Thought: A Divide-and-Conquer Approach to Multi-Context
Reasoning with Language Models [58.41943058963672]
We propose a new inference framework called Recursion of Thought (RoT).
RoT introduces several special tokens that the models can output to trigger context-related operations.
Experiments with multiple architectures including GPT-3 show that RoT dramatically improves LMs' inference capability to solve problems.
arXiv Detail & Related papers (2023-06-12T06:34:16Z) - Self-Polish: Enhance Reasoning in Large Language Models via Problem Refinement [50.62461749446111]
Self-Polish (SP) is a novel method that facilitates the model's reasoning by guiding it to progressively refine the given problems to be more comprehensible and solvable.
SP is orthogonal to all other prompting methods on the answer/reasoning side, such as CoT, allowing for seamless integration with state-of-the-art techniques for further improvement.
arXiv Detail & Related papers (2023-05-23T19:58:30Z) - Chaining Simultaneous Thoughts for Numerical Reasoning [92.2007997126144]
Numerical reasoning over text should be an essential skill of AI systems.
Previous work has focused on modeling the structure of equations and has proposed various structured decoders.
We propose CANTOR, a numerical reasoner that models reasoning steps using a directed acyclic graph.
arXiv Detail & Related papers (2022-11-29T18:52:06Z) - End-to-end Algorithm Synthesis with Recurrent Networks: Logical
Extrapolation Without Overthinking [52.05847268235338]
We show how machine learning systems can perform logical extrapolation without overthinking problems.
We propose a recall architecture that keeps an explicit copy of the problem instance in memory so that it cannot be forgotten.
We also employ a progressive training routine that prevents the model from learning behaviors that are specific to iteration number and instead pushes it to learn behaviors that can be repeated indefinitely (a minimal sketch of the recall idea follows this list).
arXiv Detail & Related papers (2022-02-11T18:43:28Z)
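
For the last entry above, the following is a minimal PyTorch sketch of the recall idea: a fresh copy of the problem instance is concatenated to the features at every recurrent iteration so that it cannot be forgotten. The convolutional layers, channel widths, and two-channel output head are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn


class RecallRecurrentNet(nn.Module):
    """Recurrent network whose recurrent block always sees the raw input."""

    def __init__(self, in_channels=3, width=64):
        super().__init__()
        self.encode = nn.Conv2d(in_channels, width, 3, padding=1)
        # The recurrent block receives the current features concatenated with
        # a fresh copy of the input, hence width + in_channels input channels.
        self.recur = nn.Sequential(
            nn.Conv2d(width + in_channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(width, 2, 3, padding=1)

    def forward(self, x, iterations=20):
        h = self.encode(x)
        for _ in range(iterations):  # more iterations can be run at test time
            # "Recall": re-attach the original problem instance at each step.
            h = self.recur(torch.cat([h, x], dim=1))
        return self.head(h)

The iterations argument is the knob that the entry's progressive training routine targets: the aim is a recurrent block whose behavior does not depend on how many times it has already been applied, so it can be unrolled for longer at test time.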