CR-LT-KGQA: A Knowledge Graph Question Answering Dataset Requiring Commonsense Reasoning and Long-Tail Knowledge
- URL: http://arxiv.org/abs/2403.01395v1
- Date: Sun, 3 Mar 2024 04:47:01 GMT
- Title: CR-LT-KGQA: A Knowledge Graph Question Answering Dataset Requiring Commonsense Reasoning and Long-Tail Knowledge
- Authors: Willis Guo, Armin Toroghi, Scott Sanner
- Abstract summary: We create a novel Commonsense Reasoning (CR) and Long-Tail (LT) KGQA dataset with two subtasks -- question answering and claim verification.
While existing KGQA methods are not applicable due to their lack of commonsense inference support, baseline evaluation of LLMs on CR-LT KGQA demonstrates a high rate of hallucination.
- Score: 21.73770363188049
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Knowledge graph question answering (KGQA) is a well-established field that
seeks to provide factual answers to natural language (NL) questions by
leveraging knowledge graphs (KGs). However, existing KGQA datasets suffer from
two significant limitations: (1) no existing KGQA dataset requires commonsense
reasoning to arrive at an answer and (2) existing KGQA datasets focus on
popular entities for which large language models (LLMs) can directly answer
without hallucinating and without leveraging the KG. In this work, we seek a
novel KGQA dataset that supports commonsense reasoning and focuses on long-tail
entities (e.g., non-mainstream and recent entities) where LLMs frequently
hallucinate, thus creating the need for novel methodologies that leverage the
KG for factual and attributable commonsense inference. We create a novel
Commonsense Reasoning (CR) and Long-Tail (LT) KGQA dataset with two subtasks --
question answering and claim verification -- that address both limitations (1)
and (2). We construct CR-LT-KGQA by building extensions to existing reasoning
datasets StrategyQA and CREAK over Wikidata. While existing KGQA methods are
not applicable due to their lack of commonsense inference support, baseline
evaluation of LLMs on CR-LT KGQA demonstrates a high rate of hallucination.
Thus, CR-LT KGQA poses significant challenges for hallucination-prone LLMs,
hence paving the way for future commonsense KGQA research to provide accurate
and factual answers for long-tail entities in the era of LLMs.
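To make the long-tail setting concrete, the sketch below retrieves grounding facts for an entity from the public Wikidata Query Service, the kind of attributable evidence a KG-grounded method could hand to an LLM for question answering or claim verification. This is a minimal illustration, not the paper's pipeline: the endpoint usage is standard WDQS, but the placeholder QID, the result limit, and the idea of prompting over the returned triples are assumptions of this sketch.

```python
import requests

# Public Wikidata SPARQL endpoint (CR-LT-KGQA is built over Wikidata).
WDQS = "https://query.wikidata.org/sparql"

def fetch_facts(qid: str, limit: int = 25) -> list[tuple[str, str]]:
    """Return (property label, value label) pairs for a Wikidata entity."""
    query = f"""
    SELECT ?propLabel ?valueLabel WHERE {{
      wd:{qid} ?claim ?value .
      ?prop wikibase:directClaim ?claim .
      SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
    }} LIMIT {limit}
    """
    resp = requests.get(
        WDQS,
        params={"query": query, "format": "json"},
        headers={"User-Agent": "cr-lt-kgqa-sketch/0.1 (illustrative example)"},
    )
    resp.raise_for_status()
    return [
        (b["propLabel"]["value"], b["valueLabel"]["value"])
        for b in resp.json()["results"]["bindings"]
    ]

if __name__ == "__main__":
    # Q42 (Douglas Adams) is a placeholder QID; a long-tail query would
    # target a non-mainstream or recent entity instead.
    for prop, value in fetch_facts("Q42"):
        print(f"{prop}: {value}")
```

Presenting retrieved triples like these as evidence, rather than relying on the LLM's parametric memory, is the attributable-inference direction the dataset is meant to encourage.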
Related papers
- Decoding on Graphs: Faithful and Sound Reasoning on Knowledge Graphs through Generation of Well-Formed Chains [66.55612528039894]
Knowledge Graphs (KGs) can serve as reliable knowledge sources for question answering (QA).
We present DoG (Decoding on Graphs), a novel framework that facilitates a deep synergy between LLMs and KGs.
Experiments across various KGQA tasks with different background KGs demonstrate that DoG achieves superior and robust performance.
arXiv Detail & Related papers (2024-10-24T04:01:40Z) - LinkQ: An LLM-Assisted Visual Interface for Knowledge Graph Question-Answering [1.5238808518078564]
LinkQ is a system that leverages a large language model (LLM) to facilitate knowledge graph (KG) query construction through natural language question-answering.
Our results indicate that practitioners find LinkQ effective for KG question-answering.
arXiv Detail & Related papers (2024-06-07T15:28:31Z) - Retrieval-Augmented Language Model for Extreme Multi-Label Knowledge Graph Link Prediction [2.6749568255705656]
Extrapolation in large language models (LLMs) for open-ended inquiry encounters two pivotal issues.
Existing works attempt to tackle the problem by augmenting the input of a smaller language model with information from a knowledge graph.
We propose a new task, the extreme multi-label KG link prediction task, to enable a model to perform extrapolation with multiple responses.
arXiv Detail & Related papers (2024-05-21T10:10:56Z) - Generate-on-Graph: Treat LLM as both Agent and KG in Incomplete Knowledge Graph Question Answering [87.67177556994525]
We propose a training-free method called Generate-on-Graph (GoG) to generate new factual triples while exploring Knowledge Graphs (KGs).
GoG performs reasoning through a Thinking-Searching-Generating framework, which treats the LLM as both Agent and KG in incomplete KGQA (IKGQA); a toy sketch of this loop follows below.
arXiv Detail & Related papers (2024-04-23T04:47:22Z)
- Automatic Question-Answer Generation for Long-Tail Knowledge [65.11554185687258]
We propose an automatic approach to generate specialized QA datasets for tail entities.
We conduct extensive experiments by employing pretrained LLMs on our newly generated long-tail QA datasets.
arXiv Detail & Related papers (2024-03-03T03:06:31Z) - DIVKNOWQA: Assessing the Reasoning Ability of LLMs via Open-Domain
Question Answering over Knowledge Base and Text [73.68051228972024]
Large Language Models (LLMs) have exhibited impressive generation capabilities, but they suffer from hallucinations when relying on their internal knowledge.
Retrieval-augmented LLMs have emerged as a potential solution to ground LLMs in external knowledge.
arXiv Detail & Related papers (2023-10-31T04:37:57Z) - UniKGQA: Unified Retrieval and Reasoning for Solving Multi-hop Question
Answering Over Knowledge Graph [89.98762327725112]
Multi-hop Question Answering over Knowledge Graph (KGQA) aims to find the answer entities that are multiple hops away from the topic entities mentioned in a natural language question.
We propose UniKGQA, a novel approach for the multi-hop KGQA task, by unifying retrieval and reasoning in both model architecture and parameter learning; a minimal illustration of the multi-hop setting follows below.
arXiv Detail & Related papers (2022-12-02T04:08:09Z)
- Improving Embedded Knowledge Graph Multi-hop Question Answering by introducing Relational Chain Reasoning [8.05076085499457]
Knowledge Base Question Answering (KBQA) aims to answer user questions from a knowledge base (KB) by identifying the reasoning chain between the topic entity and the answer.
As a complex branch task of KBQA, multi-hop KGQA requires reasoning over multi-hop relational chains preserved in the structured KG.
arXiv Detail & Related papers (2021-10-25T06:53:02Z) - QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question
Answering [122.84513233992422]
We propose a new model, QA-GNN, which addresses the problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs).
We show its improvement over existing LM and LM+KG models, as well as its capability to perform interpretable and structured reasoning.
arXiv Detail & Related papers (2021-04-13T17:32:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.