Reasoning on Graphs: Faithful and Interpretable Large Language Model Reasoning
- URL: http://arxiv.org/abs/2310.01061v2
- Date: Sat, 24 Feb 2024 03:03:12 GMT
- Title: Reasoning on Graphs: Faithful and Interpretable Large Language Model Reasoning
- Authors: Linhao Luo, Yuan-Fang Li, Gholamreza Haffari, Shirui Pan
- Abstract summary: Large language models (LLMs) have demonstrated impressive reasoning abilities in complex tasks.
They lack up-to-date knowledge and experience hallucinations during reasoning.
Knowledge graphs (KGs) offer a reliable source of knowledge for reasoning.
- Score: 104.92384929827776
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) have demonstrated impressive reasoning abilities
in complex tasks. However, they lack up-to-date knowledge and experience
hallucinations during reasoning, which can lead to incorrect reasoning
processes and diminish their performance and trustworthiness. Knowledge graphs
(KGs), which capture vast amounts of facts in a structured format, offer a
reliable source of knowledge for reasoning. Nevertheless, existing KG-based LLM
reasoning methods only treat KGs as factual knowledge bases and overlook the
importance of their structural information for reasoning. In this paper, we
propose a novel method called reasoning on graphs (RoG) that synergizes LLMs
with KGs to enable faithful and interpretable reasoning. Specifically, we
present a planning-retrieval-reasoning framework, where RoG first generates
relation paths grounded by KGs as faithful plans. These plans are then used to
retrieve valid reasoning paths from the KGs for LLMs to conduct faithful
reasoning. Furthermore, RoG not only distills knowledge from KGs to improve the
reasoning ability of LLMs through training but also allows seamless integration
with any arbitrary LLMs during inference. Extensive experiments on two
benchmark KGQA datasets demonstrate that RoG achieves state-of-the-art
performance on KG reasoning tasks and generates faithful and interpretable
reasoning results.
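
The abstract describes a planning-retrieval-reasoning loop: an LLM first proposes relation paths as plans, the plans are grounded against the KG to obtain valid reasoning paths, and the grounded paths are handed to an LLM for answering. Below is a minimal sketch of how such a loop could be wired together, assuming a toy in-memory triple store and hypothetical `plan_llm` / `reason_llm` callables; the names and data are illustrative assumptions, not the authors' implementation.

```python
from collections import defaultdict

# Toy KG as (head, relation, tail) triples; a real benchmark KG (e.g. Freebase) is far larger.
TRIPLES = [
    ("Joe Biden", "born_in", "Scranton"),
    ("Scranton", "located_in", "Pennsylvania"),
    ("Joe Biden", "spouse", "Jill Biden"),
]

def build_index(triples):
    """Index outgoing edges by (head, relation) so plans can be grounded quickly."""
    index = defaultdict(list)
    for head, rel, tail in triples:
        index[(head, rel)].append(tail)
    return index

def ground_plan(index, entity, relation_path):
    """Walk a relation path (a 'plan', e.g. ['born_in', 'located_in']) from an entity
    and return every reasoning path that actually exists in the KG."""
    paths = [[entity]]
    for rel in relation_path:
        paths = [p + [rel, tail] for p in paths for tail in index.get((p[-1], rel), [])]
    return paths

def answer(question, topic_entity, plan_llm, reason_llm, index):
    # 1. Planning: a (hypothetical) planning LLM proposes relation paths as faithful plans.
    plans = plan_llm(question)                      # e.g. [["born_in", "located_in"]]
    # 2. Retrieval: ground each plan in the KG to obtain valid reasoning paths.
    grounded = [p for plan in plans for p in ground_plan(index, topic_entity, plan)]
    # 3. Reasoning: an answering LLM reads the grounded paths and produces the final answer.
    return reason_llm(question, grounded)
```

With the toy triples above, a plan such as ["born_in", "located_in"] grounds to the path Joe Biden -> born_in -> Scranton -> located_in -> Pennsylvania, which the answering model can cite as interpretable evidence for a question like "In which state was Joe Biden born?".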
Related papers
- Decoding on Graphs: Faithful and Sound Reasoning on Knowledge Graphs through Generation of Well-Formed Chains [66.55612528039894]
Knowledge Graphs (KGs) can serve as reliable knowledge sources for question answering (QA).
We present DoG (Decoding on Graphs), a novel framework that facilitates a deep synergy between LLMs and KGs.
Experiments across various KGQA tasks with different background KGs demonstrate that DoG achieves superior and robust performance.
arXiv Detail & Related papers (2024-10-24T04:01:40Z)
- Graph-constrained Reasoning: Faithful Reasoning on Knowledge Graphs with Large Language Models [83.28737898989694]
Large language models (LLMs) struggle with faithful reasoning due to knowledge gaps and hallucinations.
We introduce graph-constrained reasoning (GCR), a novel framework that bridges structured knowledge in KGs with unstructured reasoning in LLMs.
GCR achieves state-of-the-art performance and exhibits strong zero-shot generalizability to unseen KGs without additional training.
arXiv Detail & Related papers (2024-10-16T22:55:17Z)
- Causal Reasoning in Large Language Models: A Knowledge Graph Approach [6.5344638992876085]
Large language models (LLMs) typically improve performance either by retrieving semantically similar information or by enhancing their reasoning abilities through structured prompts such as chain-of-thought.
This paper proposes a knowledge graph (KG)-based random-walk reasoning approach that leverages causal relationships.
arXiv Detail & Related papers (2024-10-15T13:24:44Z)
- Explore then Determine: A GNN-LLM Synergy Framework for Reasoning over Knowledge Graph [38.31983923708175]
This paper focuses on the Question Answering over Knowledge Graph (KGQA) task.
It proposes an Explore-then-Determine (EtD) framework that synergizes Large Language Models with graph neural networks (GNNs) for reasoning over KGs.
EtD achieves state-of-the-art performance and generates faithful reasoning results.
arXiv Detail & Related papers (2024-06-03T09:38:28Z)
- Direct Evaluation of Chain-of-Thought in Multi-hop Reasoning with Knowledge Graphs [52.42505579545893]
Large language models (LLMs) demonstrate strong reasoning abilities when prompted to generate chain-of-thought explanations alongside answers.
We propose a novel discriminative and generative CoT evaluation paradigm to assess LLMs' knowledge of reasoning and the accuracy of the generated CoT.
arXiv Detail & Related papers (2024-02-17T05:22:56Z)
- Mitigating Large Language Model Hallucinations via Autonomous Knowledge Graph-based Retrofitting [51.7049140329611]
This paper proposes Knowledge Graph-based Retrofitting (KGR) to mitigate factual hallucination during the reasoning process.
Experiments show that KGR can significantly improve the performance of LLMs on factual QA benchmarks.
arXiv Detail & Related papers (2023-11-22T11:08:38Z)
- ChatRule: Mining Logical Rules with Large Language Models for Knowledge Graph Reasoning [107.61997887260056]
We propose a novel framework, ChatRule, unleashing the power of large language models for mining logical rules over knowledge graphs.
Specifically, the framework is initiated with an LLM-based rule generator, leveraging both the semantic and structural information of KGs.
To refine the generated rules, a rule ranking module estimates the rule quality by incorporating facts from existing KGs. (A minimal sketch of this generate-then-rank loop appears below.)
arXiv Detail & Related papers (2023-09-04T11:38:02Z)
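
As an illustration of the generate-then-rank loop in the ChatRule entry above, here is a minimal sketch that scores candidate rule bodies (which an LLM-based generator would propose) against KG facts using plain rule confidence; the fact set, relation names, and scoring choice are assumptions for illustration, not the paper's exact components.

```python
# Illustrative generate-then-rank sketch: candidate rule bodies would come from an
# LLM-based rule generator; here they are hard-coded.

# Toy fact store: relation name -> set of (head_entity, tail_entity) pairs.
FACTS = {
    "father_of": {("Tom", "Bob"), ("Tom", "Eve")},
    "spouse_of": {("Tom", "Ann"), ("Bob", "Sue")},
    "parent_of": {("Tom", "Bob"), ("Tom", "Eve"), ("Ann", "Bob")},
}

def rule_confidence(body, head, facts):
    """Confidence of the length-1 rule body(X, Y) -> head(X, Y):
    the fraction of body facts that also hold for the head relation."""
    body_pairs = facts.get(body, set())
    head_pairs = facts.get(head, set())
    if not body_pairs:
        return 0.0
    return len(body_pairs & head_pairs) / len(body_pairs)

def rank_rules(candidate_bodies, head, facts):
    """Rank candidate rule bodies for a target head relation by confidence."""
    scored = [(body, rule_confidence(body, head, facts)) for body in candidate_bodies]
    return sorted(scored, key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    print(rank_rules(["father_of", "spouse_of"], "parent_of", FACTS))
    # -> [('father_of', 1.0), ('spouse_of', 0.0)]
```

Here father_of(X, Y) -> parent_of(X, Y) scores 1.0 because every father_of fact also holds for parent_of, while spouse_of scores 0.0; a real ranker would additionally handle multi-hop rule bodies and larger fact sets.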