Do I have the Knowledge to Answer? Investigating Answerability of
Knowledge Base Questions
- URL: http://arxiv.org/abs/2212.10189v2
- Date: Sat, 24 Jun 2023 11:06:55 GMT
- Title: Do I have the Knowledge to Answer? Investigating Answerability of
Knowledge Base Questions
- Authors: Mayur Patidar, Prayushi Faldu, Avinash Singh, Lovekesh Vig, Indrajit
Bhattacharya, Mausam
- Abstract summary: We create GrailQAbility, a new benchmark KBQA dataset with unanswerability.
Experimenting with three state-of-the-art KBQA models, we find that all three models suffer a drop in performance.
This underscores the need for further research in making KBQA systems robust to unanswerability.
- Score: 25.13991044303459
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When answering natural language questions over knowledge bases, missing
facts, incomplete schema and limited scope naturally lead to many questions
being unanswerable. While answerability has been explored in other QA settings,
it has not been studied for QA over knowledge bases (KBQA). We create
GrailQAbility, a new benchmark KBQA dataset with unanswerability, by first
identifying various forms of KB incompleteness that make questions
unanswerable, and then systematically adapting GrailQA (a popular KBQA dataset
with only answerable questions). Experimenting with three state-of-the-art KBQA
models, we find that all three models suffer a drop in performance even after
suitable adaptation for unanswerable questions. In addition, these models often
detect unanswerability for the wrong reasons and find specific forms of
unanswerability particularly difficult to handle. This underscores the need for
further research in making KBQA systems robust to unanswerability.
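To make the dataset construction concrete, the following is a minimal, hypothetical sketch of the adaptation the abstract describes: starting from answerable questions paired with the KB elements their logical forms rely on, facts are dropped to simulate KB incompleteness, and any question whose required elements no longer exist is relabeled as unanswerable. The toy KB, the data structures, and the dropping policy are illustrative assumptions, not the actual GrailQAbility pipeline.

```python
import random

# Toy KB: schema relations plus facts as (subject, relation, object) triples.
schema = {"author_of", "published_in", "capital_of"}
facts = {
    ("tolkien", "author_of", "the_hobbit"),
    ("the_hobbit", "published_in", "1937"),
    ("paris", "capital_of", "france"),
}

# Answerable questions paired with the KB elements their logical forms rely on.
questions = [
    {"q": "Who wrote The Hobbit?",
     "needs_relations": {"author_of"},
     "needs_facts": {("tolkien", "author_of", "the_hobbit")}},
    {"q": "What is the capital of France?",
     "needs_relations": {"capital_of"},
     "needs_facts": {("paris", "capital_of", "france")}},
]

def drop_facts(facts, ratio=0.5, seed=0):
    """Simulate KB incompleteness by randomly removing a fraction of facts."""
    rng = random.Random(seed)
    return {f for f in facts if rng.random() > ratio}

def relabel(questions, kept_facts, kept_schema):
    """A question stays answerable only if every KB element it needs survived."""
    labeled = []
    for item in questions:
        answerable = (item["needs_relations"] <= kept_schema
                      and item["needs_facts"] <= kept_facts)
        labeled.append({**item, "label": "answerable" if answerable else "unanswerable"})
    return labeled

kept_facts = drop_facts(facts)
for item in relabel(questions, kept_facts, schema):
    print(item["q"], "->", item["label"])
```

Dropping schema items (relations or types) or entities instead of facts would use the same relabeling check, which is one way the different forms of KB incompleteness mentioned above could be modeled separately.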
Related papers
- Disentangling Knowledge-based and Visual Reasoning by Question Decomposition in KB-VQA [19.6585442152102]
We study the Knowledge-Based visual question-answering (KB-VQA) problem, in which, given a question, the model needs to ground it in the visual modality to find the answer.
Our study shows that replacing a complex question with several simpler questions helps to extract more relevant information from the image.
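As one concrete reading of the decomposition idea, the hypothetical sketch below replaces a complex KB-VQA question with simpler sub-questions, each routed to the modality (image or knowledge) it needs; the `decompose` heuristic, the stub answerers, and the routing scheme are illustrative assumptions, not the paper's method.

```python
# A hypothetical, rule-based stand-in for learned question decomposition in KB-VQA:
# the complex question becomes a visual sub-question (what is in the image?)
# followed by a knowledge sub-question conditioned on that visual answer.

def decompose(question: str) -> list[dict]:
    """Toy decomposition; a real system would use a trained model or an LLM."""
    return [
        {"sub_question": "What object is shown in the image?", "modality": "visual"},
        {"sub_question": question + " (for the object identified above)", "modality": "knowledge"},
    ]

def run(question, visual_qa, knowledge_qa):
    """Answer sub-questions in order, passing earlier answers along as context."""
    context = []
    for step in decompose(question):
        answer_fn = visual_qa if step["modality"] == "visual" else knowledge_qa
        context.append((step["sub_question"], answer_fn(step["sub_question"], context)))
    return context

# Stub answerers standing in for a VQA model and a KB lookup.
visual_qa = lambda q, ctx: "espresso machine"
knowledge_qa = lambda q, ctx: "Italy"

for sub_q, ans in run("Which country invented the device on the counter?", visual_qa, knowledge_qa):
    print(sub_q, "->", ans)
```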
arXiv Detail & Related papers (2024-06-27T02:19:38Z)
- RetinaQA: A Robust Knowledge Base Question Answering Model for both Answerable and Unanswerable Questions [23.73807255464977]
State-of-the-art Knowledge Base Question Answering (KBQA) models assume all questions to be answerable.
We propose RetinaQA, a new model that unifies two key ideas in a single KBQA architecture.
We show that RetinaQA significantly outperforms adaptations of state-of-the-art KBQA models in handling both answerable and unanswerable questions.
arXiv Detail & Related papers (2024-03-16T08:08:20Z)
- ChatKBQA: A Generate-then-Retrieve Framework for Knowledge Base Question Answering with Fine-tuned Large Language Models [19.85526116658481]
We introduce ChatKBQA, a novel and simple generate-then-retrieve KBQA framework.
Experimental results show that ChatKBQA achieves new state-of-the-art performance on standard KBQA datasets.
This work can also be regarded as a new paradigm for combining LLMs with knowledge graphs for interpretable and knowledge-required question answering.
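The summary only names the generate-then-retrieve idea, so here is one plausible, hypothetical reading of it: a stubbed generator first produces a logical-form skeleton with surface-form placeholders, and a retrieval step then grounds each placeholder against the KB vocabulary. The skeleton format, the `KB_ITEMS` vocabulary, and the string-similarity retriever are illustrative assumptions, not ChatKBQA's actual interface.

```python
import re
from difflib import SequenceMatcher

# KB vocabulary that the retrieval step grounds against (toy examples).
KB_ITEMS = ["the_hobbit", "the_lord_of_the_rings", "paris",
            "book.author", "book.publication_date", "location.capital_of"]

def generate_skeleton(question: str) -> str:
    """Stub standing in for a fine-tuned LLM that emits a logical-form
    skeleton with surface-form placeholders in brackets."""
    return "(JOIN [author] [The Hobbit])"

def retrieve(surface: str) -> str:
    """Ground a placeholder to the most similar KB item; string similarity
    stands in for an embedding-based retriever."""
    return max(KB_ITEMS,
               key=lambda item: SequenceMatcher(None, surface.lower(), item.lower()).ratio())

def ground(skeleton: str) -> str:
    """Replace every bracketed placeholder with a retrieved KB item."""
    return re.sub(r"\[([^\]]+)\]", lambda m: retrieve(m.group(1)), skeleton)

skeleton = generate_skeleton("Who wrote The Hobbit?")
print(skeleton)          # (JOIN [author] [The Hobbit])
print(ground(skeleton))  # e.g. (JOIN book.author the_hobbit)
```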
arXiv Detail & Related papers (2023-10-13T09:45:14Z)
- Open-Set Knowledge-Based Visual Question Answering with Inference Paths [79.55742631375063]
The purpose of Knowledge-Based Visual Question Answering (KB-VQA) is to provide a correct answer to the question with the aid of external knowledge bases.
We propose a new retriever-ranker paradigm of KB-VQA, Graph pATH rankER (GATHER for brevity).
Specifically, it contains graph constructing, pruning, and path-level ranking, which not only retrieves accurate answers but also provides inference paths that explain the reasoning process.
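To make "graph constructing, pruning, and path-level ranking" concrete, here is a minimal hypothetical sketch: candidate paths from the question entity are enumerated over a toy knowledge graph, capped at a hop limit, and ranked with a stand-in scorer so the top path doubles as an inference path explaining the answer. The graph, the scorer, and the pruning rule are illustrative, not the paper's pipeline.

```python
# Hypothetical sketch of path-level retrieval and ranking over a toy knowledge
# graph: enumerate paths from the question entity (graph construction), cap the
# hop count (pruning), and rank paths so the best one doubles as an inference path.

GRAPH = {
    "espresso_machine": [("invented_in", "italy"), ("used_for", "coffee")],
    "italy": [("capital", "rome")],
    "coffee": [("grown_in", "brazil")],
}

def enumerate_paths(start, max_hops=2):
    """Candidate paths of at most max_hops edges, each a list of (relation, node) steps."""
    paths, frontier = [], [[("start", start)]]
    for _ in range(max_hops):
        next_frontier = []
        for path in frontier:
            head = path[-1][1]
            for rel, tail in GRAPH.get(head, []):
                new_path = path + [(rel, tail)]
                next_frontier.append(new_path)
                paths.append(new_path)
        frontier = next_frontier
    return paths

def score(path, question):
    """Stand-in ranker: overlap between path tokens and question tokens.
    A real system would use a trained scoring model."""
    tokens = {t for rel, node in path for t in (rel + " " + node).replace("_", " ").split()}
    q_tokens = set(question.lower().replace("?", "").split())
    return len(tokens & q_tokens)

question = "Which country was the espresso machine invented in?"
best = max(enumerate_paths("espresso_machine"), key=lambda p: score(p, question))
print("inference path:", best)       # explains how the answer was reached
print("predicted answer:", best[-1][1])
```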
arXiv Detail & Related papers (2023-10-12T09:12:50Z)
- SYGMA: System for Generalizable Modular Question Answering Over Knowledge Bases [57.89642289610301]
We present SYGMA, a modular approach facilitating generalizability across multiple knowledge bases and multiple reasoning types.
We demonstrate the effectiveness of our system by evaluating on datasets belonging to two distinct knowledge bases, DBpedia and Wikidata.
arXiv Detail & Related papers (2021-09-28T01:57:56Z)
- Beyond I.I.D.: Three Levels of Generalization for Question Answering on Knowledge Bases [63.43418760818188]
We release a new large-scale, high-quality dataset with 64,331 questions, GrailQA.
We propose a novel BERT-based KBQA model.
The combination of our dataset and model enables us to thoroughly examine and demonstrate, for the first time, the key role of pre-trained contextual embeddings like BERT in the generalization of KBQA.
arXiv Detail & Related papers (2020-11-16T06:36:26Z)
- Summary-Oriented Question Generation for Informational Queries [23.72999724312676]
We aim to produce self-explanatory questions that focus on main document topics and are answerable with variable-length passages as appropriate.
Our model shows SOTA performance of SQ generation on the NQ dataset (20.1 BLEU-4).
We further apply our model on out-of-domain news articles, evaluating with a QA system due to the lack of gold questions and demonstrate that our model produces better SQs for news articles -- with further confirmation via a human evaluation.
arXiv Detail & Related papers (2020-10-19T17:30:08Z)
- A Survey on Complex Question Answering over Knowledge Base: Recent Advances and Challenges [71.4531144086568]
Question Answering (QA) over Knowledge Base (KB) aims to automatically answer natural language questions.
Researchers have shifted their attention from simple questions to complex questions, which require more KB triples and constraint inference.
arXiv Detail & Related papers (2020-07-26T07:13:32Z)
- KQA Pro: A Dataset with Explicit Compositional Programs for Complex Question Answering over Knowledge Base [67.87878113432723]
We introduce KQA Pro, a dataset for Complex KBQA including 120K diverse natural language questions.
For each question, we provide the corresponding KoPL program and SPARQL query, so that KQA Pro serves for both KBQA and semantic parsing tasks.
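To make the dual annotation concrete, below is a hypothetical example of what pairing a question with both a KoPL-style step-wise program and a SPARQL query might look like. The function names, predicates, and entity identifiers are illustrative assumptions, not entries copied from the actual dataset.

```python
# Hypothetical illustration of pairing one question with both a step-wise
# program and a SPARQL query; operator names, predicates, and entity IDs are
# made up for this sketch and are not copied from the dataset.
example = {
    "question": "Which film directed by Christopher Nolan was released first?",
    "program": [
        {"function": "Find",          "inputs": ["Christopher Nolan"]},
        {"function": "Relate",        "inputs": ["directed_by", "backward"]},
        {"function": "FilterConcept", "inputs": ["film"]},
        {"function": "SelectAmong",   "inputs": ["release_date", "smallest"]},
    ],
    "sparql": """
        SELECT ?film WHERE {
            ?film <pred:instance_of>  <ent:film> .
            ?film <pred:directed_by>  <ent:Christopher_Nolan> .
            ?film <pred:release_date> ?date .
        }
        ORDER BY ASC(?date) LIMIT 1
    """,
}

# The program can be executed step by step over the KB by an interpreter, while
# the SPARQL string can be sent directly to a triple store; both are semantic
# parses of the same question.
for step in example["program"]:
    print(step["function"], step["inputs"])
```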
arXiv Detail & Related papers (2020-07-08T03:28:04Z)
- Faithful Embeddings for Knowledge Base Queries [97.5904298152163]
The deductive closure of an ideal knowledge base (KB) contains exactly the logical queries that the KB can answer.
In practice, KBs are both incomplete and over-specified, failing to answer some queries that have real-world answers.
We show that inserting this new QE module into a neural question-answering system leads to substantial improvements over the state-of-the-art.
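As a small worked example of the first two sentences above: the deductive closure below is computed for a toy KB with one transitive relation, and a query with a real-world answer still fails because a supporting fact is missing. The facts, the rule, and the closure routine are illustrative; they are not the paper's query-embedding (QE) approach.

```python
# Toy illustration of the deductive-closure point: an ideal KB answers exactly
# the queries entailed by its facts, but an incomplete KB misses some queries
# that do have real-world answers. The single transitive relation is an
# assumption made for this sketch.
facts = {
    ("louvre", "located_in", "paris"),
    ("paris", "located_in", "france"),
    # Missing fact in this incomplete KB: ("france", "located_in", "europe")
}

def deductive_closure(facts):
    """Saturate the facts under transitivity of located_in."""
    closed = set(facts)
    changed = True
    while changed:
        changed = False
        for (a, _, b) in list(closed):
            for (c, _, d) in list(closed):
                if b == c and (a, "located_in", d) not in closed:
                    closed.add((a, "located_in", d))
                    changed = True
    return closed

closure = deductive_closure(facts)

def can_answer(subj, obj, kb):
    return (subj, "located_in", obj) in kb

# Entailed by the KB even though never stated explicitly:
print(can_answer("louvre", "france", closure))   # True
# True in the real world, but not derivable from the incomplete KB:
print(can_answer("louvre", "europe", closure))   # False
```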
arXiv Detail & Related papers (2020-04-07T19:25:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.