Tab-CoT: Zero-shot Tabular Chain of Thought
- URL: http://arxiv.org/abs/2305.17812v1
- Date: Sun, 28 May 2023 20:49:52 GMT
- Title: Tab-CoT: Zero-shot Tabular Chain of Thought
- Authors: Ziqi Jin and Wei Lu
- Abstract summary: We propose Tab-CoT, which allows the complex reasoning process to be explicitly modelled in a highly structured manner.
Despite its simplicity, we show that our approach is capable of performing reasoning across multiple dimensions.
We demonstrate our approach's strong zero-shot and few-shot capabilities through extensive experiments on a range of reasoning tasks.
- Score: 7.558415495951758
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The chain-of-thought (CoT) prompting methods were successful in various
natural language processing (NLP) tasks thanks to their ability to unveil the
underlying complex reasoning processes. Such reasoning processes typically
exhibit implicitly structured steps. Recent efforts also started investigating
methods to encourage more explicitly structured reasoning procedures to be
captured. In this work, we propose Tab-CoT, a novel tabular-format CoT
prompting method, which allows the complex reasoning process to be explicitly
modelled in a highly structured manner. Despite its simplicity, we show that
our approach is capable of performing reasoning across multiple dimensions
(i.e., both rows and columns). We demonstrate our approach's strong zero-shot
and few-shot capabilities through extensive experiments on a range of reasoning
tasks.
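To make the idea concrete, the sketch below is a minimal, hedged illustration of a zero-shot Tab-CoT pipeline: instead of a free-text "Let's think step by step" trigger, a table header is appended after the question so the model fills in its reasoning as table rows, and a second prompt then reads off the final answer. The column header "|step|subquestion|process|result|" follows the scheme described in the paper, but the exact header string, the helper names (build_tab_cot_prompt, extract_answer, tab_cot), and the llm callable are illustrative assumptions, not the authors' released code.

```python
from typing import Callable

# Table header that triggers tabular reasoning. The paper describes a
# |step|subquestion|process|result| scheme; treat the exact string as an
# illustrative assumption rather than the verbatim prompt.
TABLE_HEADER = "|step|subquestion|process|result|"


def build_tab_cot_prompt(question: str) -> str:
    """Stage 1: ask the model to reason in table rows under the header."""
    return f"Q: {question}\nA:\n{TABLE_HEADER}\n"


def extract_answer(question: str, table: str, llm: Callable[[str], str]) -> str:
    """Stage 2: feed the generated table back and ask for the final answer."""
    prompt = (
        f"Q: {question}\nA:\n{TABLE_HEADER}\n{table}\n"
        "Therefore, the answer is"
    )
    return llm(prompt).strip()


def tab_cot(question: str, llm: Callable[[str], str]) -> str:
    """Run the full zero-shot Tab-CoT pipeline with any text-completion callable."""
    table = llm(build_tab_cot_prompt(question))
    return extract_answer(question, table, llm)


if __name__ == "__main__":
    # Dummy "LLM" so the sketch runs end to end without an API key.
    def fake_llm(prompt: str) -> str:
        if "Therefore, the answer is" in prompt:
            return " 8"
        return "|1|How many apples remain?|5 + 3|8|"

    print(tab_cot("Tom has 5 apples and buys 3 more. How many does he have?", fake_llm))
```

Because the reasoning is emitted as rows under fixed columns, the same generation can be read along rows (step by step) or along columns (e.g., all intermediate results), which is the multi-dimensional structure the abstract refers to.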
Related papers
- Stepwise Perplexity-Guided Refinement for Efficient Chain-of-Thought Reasoning in Large Language Models [56.37421741507468]
Chain-of-Thought (CoT) reasoning has significantly enhanced the performance of large language models (LLMs).
We propose a method to identify critical reasoning steps using perplexity as a measure of their importance.
arXiv Detail & Related papers (2025-02-18T20:04:51Z)
- Watch Your Steps: Observable and Modular Chains of Thought [36.79118554877861]
We propose a variant of chain of thought (CoT) prompting called Program Trace Prompting.
It makes explanations more observable while preserving the power, generality and flexibility of CoT.
Program Trace Prompting is applicable to many tasks, achieving strong results on the 23 diverse tasks in the BIG-Bench Hard benchmark.
arXiv Detail & Related papers (2024-09-17T23:47:20Z)
- Pattern-Aware Chain-of-Thought Prompting in Large Language Models [26.641713417293538]
Chain-of-thought (CoT) prompting can guide language models to engage in complex multi-step reasoning.
We show that the underlying reasoning patterns play a more crucial role in such tasks.
We propose Pattern-Aware CoT, a prompting method that considers the diversity of demonstration patterns.
arXiv Detail & Related papers (2024-04-23T07:50:00Z)
- Chain-of-Thought Reasoning Without Prompting [40.92854235219315]
CoT reasoning paths can be elicited from pre-trained language models by simply altering the decoding process.
The presence of a CoT in the decoding path correlates with a higher confidence in the model's decoded answer.
arXiv Detail & Related papers (2024-02-15T18:55:41Z)
- Large Language Models as an Indirect Reasoner: Contrapositive and Contradiction for Automated Reasoning [74.90592233107712]
We propose a Direct-Indirect Reasoning (DIR) method, which considers Direct Reasoning (DR) and Indirect Reasoning (IR) as multiple parallel reasoning paths that are merged to derive the final answer.
Our DIR method is simple yet effective and can be straightforwardly integrated with existing variants of CoT methods.
arXiv Detail & Related papers (2024-02-06T03:41:12Z)
- Parrot Mind: Towards Explaining the Complex Task Reasoning of Pretrained Large Language Models with Template-Content Structure [66.33623392497599]
We show that a structure called template-content structure (T-C structure) can reduce the possible space from exponential level to linear level.
We demonstrate that models can achieve task composition, further reducing the space needed to learn from linear to logarithmic.
arXiv Detail & Related papers (2023-10-09T06:57:45Z)
- Recursion of Thought: A Divide-and-Conquer Approach to Multi-Context Reasoning with Language Models [58.41943058963672]
We propose a new inference framework called Recursion of Thought (RoT).
RoT introduces several special tokens that the models can output to trigger context-related operations.
Experiments with multiple architectures including GPT-3 show that RoT dramatically improves LMs' inference capability to solve problems.
arXiv Detail & Related papers (2023-06-12T06:34:16Z)
- Towards Understanding Chain-of-Thought Prompting: An Empirical Study of What Matters [82.84696222087396]
Chain-of-Thought (CoT) prompting can dramatically improve the multi-step reasoning abilities of large language models (LLMs).
We show that CoT reasoning is possible even with invalid demonstrations.
arXiv Detail & Related papers (2022-12-20T05:20:54Z)
- Complexity-Based Prompting for Multi-Step Reasoning [72.0057198610614]
We study the task of prompting large-scale language models to perform multi-step reasoning.
A central question is which reasoning examples make the most effective prompts.
We propose complexity-based prompting, a simple and effective example selection scheme for multi-step reasoning.
arXiv Detail & Related papers (2022-10-03T05:33:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the listed information and is not responsible for any consequences of its use.