Probabilistic Tree-of-thought Reasoning for Answering
Knowledge-intensive Complex Questions
- URL: http://arxiv.org/abs/2311.13982v1
- Date: Thu, 23 Nov 2023 12:52:37 GMT
- Title: Probabilistic Tree-of-thought Reasoning for Answering
Knowledge-intensive Complex Questions
- Authors: Shulin Cao, Jiajie Zhang, Jiaxin Shi, Xin Lv, Zijun Yao, Qi Tian,
Juanzi Li, Lei Hou
- Abstract summary: Large language models (LLMs) are capable of answering knowledge-intensive complex questions with chain-of-thought (CoT) reasoning.
Recent works turn to retrieving external knowledge to augment CoT reasoning.
We propose a novel approach: Probabilistic Tree-of-thought Reasoning (ProbTree).
- Score: 93.40614719648386
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) are capable of answering knowledge-intensive
complex questions with chain-of-thought (CoT) reasoning. However, they tend to
generate factually incorrect reasoning steps when the required knowledge is not
available or up-to-date in models' parameters. Recent works turn to retrieving
external knowledge to augment CoT reasoning. Despite being promising, these
chain-based methods suffer from two problems: 1) negative retrieval, where
unnecessary or incorrect retrieval may mislead the reasoning; and 2) limited
sight: because these methods cannot look backward or forward, a local error in
one step propagates along the chain.
In this paper, we propose a novel approach: Probabilistic Tree-of-thought
Reasoning (ProbTree). First, LLMs translate a complex question into a query
tree, in which each non-root node denotes a sub-question of its parent node.
Then, probabilistic reasoning is conducted over the tree by solving questions
from leaf to root, considering the confidence of both question decomposition
and answering. During reasoning, for leaf nodes, LLMs choose the more confident
answer from Closed-book QA that employs parametric knowledge and Open-book QA
that employs retrieved external knowledge, thus eliminating the negative
retrieval problem. For non-leaf nodes, the hierarchical structure gives LLMs a
broader view: they can reason globally over the information from child nodes
and thus recover from local errors. Experiments on three complex QA datasets
under the open-domain setting show that our approach significantly outperforms
SOTA methods, demonstrating the effectiveness of probabilistic
tree-of-thought reasoning.
Related papers
- Tree of Reviews: A Tree-based Dynamic Iterative Retrieval Framework for Multi-hop Question Answering [0.18849131083278733]
We propose a dynamic retrieval framework called Tree of Reviews (ToR) for multi-hop question answering.
ToR achieves state-of-the-art performance in both retrieval and response generation.
arXiv Detail & Related papers (2024-04-22T09:25:05Z) - keqing: knowledge-based question answering is a nature chain-of-thought
mentor of LLM [27.76205400533089]
Large language models (LLMs) have exhibited remarkable performance on various natural language processing (NLP) tasks, especially for question answering.
We present a novel framework to assist LLMs, such as ChatGPT, to retrieve question-related structured information on the knowledge graph.
The experimental results on KBQA datasets show that Keqing can achieve competitive performance and illustrate the logic of answering each question.
arXiv Detail & Related papers (2023-12-31T08:39:04Z) - CLadder: Assessing Causal Reasoning in Language Models [82.8719238178569]
We investigate whether large language models (LLMs) can coherently reason about causality.
We propose a new NLP task, causal inference in natural language, inspired by the "causal inference engine" postulated by Judea Pearl et al.
arXiv Detail & Related papers (2023-12-07T15:12:12Z) - Open-Set Knowledge-Based Visual Question Answering with Inference Paths [79.55742631375063]
The purpose of Knowledge-Based Visual Question Answering (KB-VQA) is to provide a correct answer to the question with the aid of external knowledge bases.
We propose a new retriever-ranker paradigm of KB-VQA, Graph pATH rankER (GATHER for brevity).
Specifically, it contains graph constructing, pruning, and path-level ranking, which not only retrieves accurate answers but also provides inference paths that explain the reasoning process.
arXiv Detail & Related papers (2023-10-12T09:12:50Z) - Reasoning over Hierarchical Question Decomposition Tree for Explainable
Question Answering [83.74210749046551]
We propose to leverage question decomposition for heterogeneous knowledge integration.
We propose a novel two-stage XQA framework, Reasoning over Hierarchical Question Decomposition Tree (RoHT).
Experiments on complex QA datasets KQA Pro and Musique show that our framework outperforms SOTA methods significantly.
arXiv Detail & Related papers (2023-05-24T11:45:59Z) - Search-in-the-Chain: Interactively Enhancing Large Language Models with
Search for Knowledge-intensive Tasks [121.74957524305283]
This paper proposes a novel framework named Search-in-the-Chain (SearChain) for the interaction between information retrieval (IR) and large language models (LLMs).
Experiments show that SearChain outperforms state-of-the-art baselines on complex knowledge-intensive tasks.
arXiv Detail & Related papers (2023-04-28T10:15:25Z) - RLET: A Reinforcement Learning Based Approach for Explainable QA with
Entailment Trees [47.745218107037786]
We propose RLET, a Reinforcement Learning based Entailment Tree generation framework.
RLET iteratively performs single step reasoning with sentence selection and deduction generation modules.
Experiments on three settings of the EntailmentBank dataset demonstrate the strength of using an RL framework.
arXiv Detail & Related papers (2022-10-31T06:45:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.