Everything of Thoughts: Defying the Law of Penrose Triangle for Thought
Generation
- URL: http://arxiv.org/abs/2311.04254v3
- Date: Fri, 23 Feb 2024 15:09:58 GMT
- Title: Everything of Thoughts: Defying the Law of Penrose Triangle for Thought
Generation
- Authors: Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang,
Si Qin, Saravan Rajmohan, Qingwei Lin and Dongmei Zhang
- Abstract summary: We introduce a novel thought prompting approach called "Everything of Thoughts" (XoT) to defy the law of the "Penrose triangle" of existing thought paradigms.
XoT leverages pretrained reinforcement learning and Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge into thoughts.
We evaluate XoT on several challenging multi-solution problem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube.
- Score: 42.472954457731355
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advancements in Large Language Models (LLMs) have revolutionized
decision-making by breaking down complex problems into more manageable language
sequences referred to as "thoughts". An effective thought design should
consider three key perspectives: performance, efficiency, and flexibility.
However, existing thought paradigms can exhibit at most two of these
attributes. To address this limitation, we introduce a novel thought prompting
approach called "Everything of Thoughts" (XoT) to defy the law of the "Penrose
triangle" of existing thought paradigms. XoT leverages pretrained reinforcement learning and
Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge into
thoughts, thereby enhancing LLMs' capabilities and enabling them to generalize
to unseen problems efficiently. Through the utilization of the MCTS-LLM
collaborative thought revision framework, this approach autonomously produces
high-quality comprehensive cognitive mappings with minimal LLM interactions.
Additionally, XoT empowers LLMs to engage in unconstrained thinking, allowing
for flexible cognitive mappings for problems with multiple solutions. We
evaluate XoT on several challenging multi-solution problem-solving tasks,
including Game of 24, 8-Puzzle, and Pocket Cube. Our results demonstrate that
XoT significantly outperforms existing approaches. Notably, XoT can yield
multiple solutions with just one LLM call, showcasing its remarkable
proficiency in addressing complex problems across diverse domains.
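The abstract describes using MCTS to search over thought trajectories. Below is a minimal, self-contained sketch of plain MCTS applied to Game of 24, one of the paper's evaluation tasks. It omits the pretrained policy/value networks and the MCTS-LLM revision framework that XoT actually uses, relying on random rollouts instead; all function and variable names are illustrative.

```python
import math
import random
from itertools import combinations

OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b,
       "/": lambda a, b: a / b if b else None}

def successors(state):
    """Unique next states: pick two numbers, combine them with one operation."""
    out = {}
    for (i, a), (j, b) in combinations(enumerate(state), 2):
        rest = [x for k, x in enumerate(state) if k not in (i, j)]
        for sym, op in OPS.items():
            for x, y in ((a, b), (b, a)):
                v = op(x, y)
                if v is not None:
                    out[f"{x}{sym}{y}"] = rest + [v]
    return list(out.items())

class Node:
    def __init__(self, state, parent=None, move=""):
        self.state, self.parent, self.move = state, parent, move
        self.children, self.visits, self.value = [], 0, 0.0

def uct(node, c=1.4):
    """Upper-confidence bound balancing exploitation and exploration."""
    return (node.value / node.visits
            + c * math.sqrt(math.log(node.parent.visits) / node.visits))

def rollout(state):
    """Random playout; reward 1 if the remaining number equals 24."""
    while len(state) > 1:
        _, state = random.choice(successors(state))
    return 1.0 if abs(state[0] - 24) < 1e-6 else 0.0

def mcts(numbers, iters=2000):
    root = Node(numbers)
    for _ in range(iters):
        node = root
        # Selection: descend by UCT while the node is fully expanded.
        while node.children and len(node.children) == len(successors(node.state)):
            node = max(node.children, key=uct)
        # Expansion: add one untried child, if any moves remain.
        if len(node.state) > 1:
            tried = {c.move for c in node.children}
            moves = [m for m in successors(node.state) if m[0] not in tried]
            if moves:
                move, nxt = random.choice(moves)
                child = Node(nxt, node, move)
                node.children.append(child)
                node = child
        # Simulation + backpropagation.
        reward = rollout(node.state)
        while node:
            node.visits += 1
            node.value += reward
            node = node.parent
    # The most-visited line of play is the extracted "thought" trajectory.
    path, node = [], root
    while node.children:
        node = max(node.children, key=lambda c: c.visits)
        path.append(node.move)
    return path
```

Note that the returned path is only a best effort: if the search budget is too small, the most-visited trajectory may not actually reach 24. In XoT this search is guided by a learned policy/value model rather than random rollouts, which is what makes single-call, multi-solution generation feasible.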
Related papers
- Thoughts Are All Over the Place: On the Underthinking of o1-Like LLMs [86.79757571440082]
Large language models (LLMs) such as OpenAI's o1 have demonstrated remarkable abilities in complex reasoning tasks.
We identify a phenomenon we term underthinking, where o1-like LLMs frequently switch between different reasoning thoughts.
We propose a decoding strategy with thought switching penalty TIP that discourages premature transitions between thoughts.
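The TIP idea above operates at decoding time. The sketch below illustrates one plausible reading: subtract a penalty from the logits of "thought switch" marker tokens while the current thought is still short. The marker set, the penalty strength `alpha`, and the length threshold `beta` are all assumptions for illustration, not the paper's actual values.

```python
# Hypothetical surface markers that signal a switch to a new line of reasoning.
SWITCH_MARKERS = {"Wait", "Alternatively", "Instead"}

def apply_tip(logits, vocab, tokens_in_thought, alpha=3.0, beta=300):
    """Return a copy of `logits` with `alpha` subtracted from thought-switch
    markers whenever the current thought is shorter than `beta` tokens,
    discouraging premature transitions between thoughts."""
    out = list(logits)
    if tokens_in_thought < beta:
        for i, tok in enumerate(vocab):
            if tok in SWITCH_MARKERS:
                out[i] -= alpha
    return out
```

Once the current thought exceeds `beta` tokens, the penalty is lifted and the model is free to switch, so the mechanism delays rather than forbids transitions.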
arXiv Detail & Related papers (2025-01-30T18:58:18Z)
- Imagine while Reasoning in Space: Multimodal Visualization-of-Thought [70.74453180101365]
Chain-of-Thought (CoT) prompting has proven highly effective for enhancing complex reasoning in Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs).
We propose a new reasoning paradigm, Multimodal Visualization-of-Thought (MVoT).
It enables visual thinking in MLLMs by generating image visualizations of their reasoning traces.
arXiv Detail & Related papers (2025-01-13T18:23:57Z)
- Thinking with Many Minds: Using Large Language Models for Multi-Perspective Problem-Solving [2.1175632266708733]
Complex problem-solving requires the capacity to entertain multiple perspectives while preserving their distinctiveness.
We propose synthetic deliberation, a method that simulates discourse between agents embodying diverse perspectives.
This approach shows promise for strategic planning, policymaking, and conflict resolution.
arXiv Detail & Related papers (2025-01-04T18:04:47Z)
- BloomWise: Enhancing Problem-Solving capabilities of Large Language Models using Bloom's-Taxonomy-Inspired Prompts [59.83547898874152]
We introduce BloomWise, a new prompting technique, inspired by Bloom's taxonomy, to improve the performance of Large Language Models (LLMs).
The LLM decides, through self-evaluation, whether more sophisticated cognitive skills need to be employed.
In extensive experiments across four popular math reasoning datasets, we demonstrate the effectiveness of our proposed approach.
arXiv Detail & Related papers (2024-10-05T09:27:52Z)
- Boosting of Thoughts: Trial-and-Error Problem Solving with Large Language Models [43.09706839884221]
Boosting of Thoughts (BoT) is an automated prompting framework for problem solving with Large Language Models.
We show that BoT consistently achieves higher or comparable problem-solving rates than other advanced prompting approaches.
arXiv Detail & Related papers (2024-02-17T00:13:36Z)
- MacGyver: Are Large Language Models Creative Problem Solvers? [87.70522322728581]
We explore the creative problem-solving capabilities of modern LLMs in a novel constrained setting.
We create MACGYVER, an automatically generated dataset consisting of over 1,600 real-world problems.
We present our collection to both LLMs and humans to compare and contrast their problem-solving abilities.
arXiv Detail & Related papers (2023-11-16T08:52:27Z)
- LatEval: An Interactive LLMs Evaluation Benchmark with Incomplete Information from Lateral Thinking Puzzles [22.119796373133298]
We propose a novel evaluation benchmark, LatEval, which assesses the model's lateral thinking within an interactive framework.
In our benchmark, we challenge LLMs on two aspects: the quality of the questions they pose and their capability to integrate information for problem-solving.
For example, even the most advanced model, GPT-4, shows some lateral-thinking ability, yet a noticeable gap remains compared to humans.
arXiv Detail & Related papers (2023-08-21T16:49:40Z)
- Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate [85.3444184685235]
We propose a Multi-Agent Debate (MAD) framework, in which multiple agents express their arguments in a "tit for tat" fashion and a judge manages the debate process to obtain a final solution.
Our framework encourages divergent thinking in LLMs which would be helpful for tasks that require deep levels of contemplation.
arXiv Detail & Related papers (2023-05-30T15:25:45Z)
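The MAD entry above describes a tit-for-tat exchange moderated by a judge. A minimal sketch of that control loop follows; the `affirmative`, `negative`, and `judge` callables stand in for actual LLM calls, and the transcript format is an illustrative assumption.

```python
from typing import Callable, List

def multi_agent_debate(question: str,
                       affirmative: Callable[[str], str],
                       negative: Callable[[str], str],
                       judge: Callable[[str], str],
                       rounds: int = 3) -> str:
    """Tit-for-tat debate: each round, the two agents answer in turn while
    seeing the full transcript so far; the judge then issues the final answer."""
    transcript: List[str] = [f"Question: {question}"]
    for _ in range(rounds):
        transcript.append("Affirmative: " + affirmative("\n".join(transcript)))
        transcript.append("Negative: " + negative("\n".join(transcript)))
    return judge("\n".join(transcript))
```

Because each agent conditions on the full transcript, disagreement from the opposing side is visible at every turn, which is what the framework relies on to push the agents toward divergent lines of reasoning.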
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.