Right for Right Reasons: Large Language Models for Verifiable
Commonsense Knowledge Graph Question Answering
- URL: http://arxiv.org/abs/2403.01390v1
- Date: Sun, 3 Mar 2024 04:22:13 GMT
- Title: Right for Right Reasons: Large Language Models for Verifiable
Commonsense Knowledge Graph Question Answering
- Authors: Armin Toroghi, Willis Guo, Mohammad Mahdi Abdollah Pour, Scott Sanner
- Abstract summary: Knowledge Graph Question Answering (KGQA) methods seek to answer Natural Language questions using the relational information stored in Knowledge Graphs (KGs).
With the recent advancements of Large Language Models (LLMs) and their remarkable reasoning abilities, there is a growing trend to leverage them for KGQA.
We propose Right for Right Reasons (R3), a commonsense KGQA methodology that allows for a verifiable reasoning procedure.
- Score: 20.1946576623729
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Knowledge Graph Question Answering (KGQA) methods seek to answer Natural
Language questions using the relational information stored in Knowledge Graphs
(KGs). With the recent advancements of Large Language Models (LLMs) and their
remarkable reasoning abilities, there is a growing trend to leverage them for
KGQA. However, existing methodologies have only focused on answering factual
questions, e.g., "In which city was Silvio Berlusconi's first wife born?",
leaving questions involving commonsense reasoning that real-world users may
pose more often, e.g., "Do I need separate visas to see the Venus of Willendorf
and attend the Olympics this summer?" unaddressed. In this work, we first
observe that existing LLM-based methods for KGQA struggle with hallucination on
such questions, especially on queries targeting long-tail entities (e.g.,
non-mainstream and recent entities), thus hindering their applicability in
real-world applications especially since their reasoning processes are not
easily verifiable. In response, we propose Right for Right Reasons (R3), a
commonsense KGQA methodology that allows for a verifiable reasoning procedure
by axiomatically surfacing intrinsic commonsense knowledge of LLMs and
grounding every factual reasoning step on KG triples. Through experimental
evaluations across three different tasks--question answering, claim
verification, and preference matching--our findings showcase R3 as a superior
approach, outperforming existing methodologies and notably reducing instances
of hallucination and reasoning errors.
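The abstract's core idea, accepting a commonsense answer only when every factual reasoning step is backed by a KG triple, can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: the KG is modeled as a set of (subject, relation, object) tuples, and all entity and relation names are hypothetical.

```python
# Hypothetical toy KG as a set of (subject, relation, object) triples.
KG = {
    ("Venus of Willendorf", "located_in", "Vienna"),
    ("Vienna", "country", "Austria"),
    ("Paris", "country", "France"),
}

def grounded(step):
    """A factual reasoning step is accepted only if it matches a KG triple."""
    return step in KG

def verify_chain(steps):
    """Check every factual step against the KG before trusting the
    commonsense conclusion; return (ok, first_failing_step)."""
    for step in steps:
        if not grounded(step):
            return False, step
    return True, None

# Factual sub-claims an LLM might surface for a commonsense question.
chain = [
    ("Venus of Willendorf", "located_in", "Vienna"),
    ("Vienna", "country", "Austria"),
]
ok, bad = verify_chain(chain)
print(ok)  # True: every factual step is backed by a KG triple
```

A step that is absent from the KG (e.g. a hallucinated triple) makes `verify_chain` fail and identifies exactly which claim is ungrounded, which is what makes the reasoning procedure verifiable.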
Related papers
- LinkQ: An LLM-Assisted Visual Interface for Knowledge Graph Question-Answering [1.5238808518078564]
LinkQ is a system that leverages a large language model (LLM) to facilitate knowledge graph (KG) query construction through natural language question-answering.
Our results indicate that practitioners find LinkQ effective for KG question-answering.
arXiv Detail & Related papers (2024-06-07T15:28:31Z)
- FiDeLiS: Faithful Reasoning in Large Language Model for Knowledge Graph Question Answering [46.41364317172677]
We propose a retrieval-exploration interactive method, FiDeLiS, to handle intermediate steps of reasoning grounded by external knowledge graphs.
We incorporate the logic and common-sense reasoning of LLMs into the knowledge retrieval process, which provides more accurate recalling performance.
arXiv Detail & Related papers (2024-05-22T17:56:53Z)
- Generate-on-Graph: Treat LLM as both Agent and KG in Incomplete Knowledge Graph Question Answering [90.30473970040362]
We propose a training-free method called Generate-on-Graph (GoG) that can generate new factual triples while exploring Knowledge Graphs (KGs).
Specifically, we propose a selecting-generating-answering framework, which not only treats the LLM as an agent to explore KGs, but also treats it as a KG to generate new facts based on the explored subgraph.
arXiv Detail & Related papers (2024-04-23T04:47:22Z)
- Logic Query of Thoughts: Guiding Large Language Models to Answer Complex Logic Queries with Knowledge Graphs [102.37496443389203]
'Logic-Query-of-Thoughts' (LGOT) is the first of its kind to combine knowledge graph reasoning and large language models.
Our experimental findings demonstrate substantial performance enhancements, with up to 20% improvement over ChatGPT.
arXiv Detail & Related papers (2024-03-17T17:01:45Z)
- Reasoning on Graphs: Faithful and Interpretable Large Language Model Reasoning [104.92384929827776]
Large language models (LLMs) have demonstrated impressive reasoning abilities in complex tasks.
However, they lack up-to-date knowledge and are prone to hallucinations during reasoning.
Knowledge graphs (KGs) offer a reliable source of knowledge for reasoning.
arXiv Detail & Related papers (2023-10-02T10:14:43Z)
- Graph Reasoning for Question Answering with Triplet Retrieval [33.454090126152714]
We propose a simple yet effective method to retrieve the most relevant triplets from knowledge graphs (KGs).
Our method can outperform the state-of-the-art by up to 4.6% absolute accuracy.
arXiv Detail & Related papers (2023-05-30T04:46:28Z)
- Multi-hop Commonsense Knowledge Injection Framework for Zero-Shot Commonsense Question Answering [6.086719709100659]
We propose a novel multi-hop commonsense knowledge injection framework.
Our framework achieves state-of-the-art performance on five commonsense question answering benchmarks.
arXiv Detail & Related papers (2023-05-10T07:13:47Z)
- WikiWhy: Answering and Explaining Cause-and-Effect Questions [62.60993594814305]
We introduce WikiWhy, a QA dataset built around explaining why an answer is true in natural language.
WikiWhy contains over 9,000 "why" question-answer-rationale triples, grounded on Wikipedia facts across a diverse set of topics.
GPT-3 baselines achieve only 38.7% human-evaluated correctness in the end-to-end answer & explain condition.
arXiv Detail & Related papers (2022-10-21T17:59:03Z)
- GreaseLM: Graph REASoning Enhanced Language Models for Question Answering [159.9645181522436]
GreaseLM is a new model that fuses encoded representations from pretrained LMs and graph neural networks over multiple layers of modality interaction operations.
We show that GreaseLM can more reliably answer questions that require reasoning over both situational constraints and structured knowledge, even outperforming models 8x larger.
arXiv Detail & Related papers (2022-01-21T19:00:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.