Targeted Extraction of Temporal Facts from Textual Resources for
Improved Temporal Question Answering over Knowledge Bases
- URL: http://arxiv.org/abs/2203.11054v1
- Date: Mon, 21 Mar 2022 15:26:35 GMT
- Title: Targeted Extraction of Temporal Facts from Textual Resources for
Improved Temporal Question Answering over Knowledge Bases
- Authors: Nithish Kannen, Udit Sharma, Sumit Neelam, Dinesh Khandelwal, Shajith
Ikbal, Hima Karanam, L Venkata Subramaniam
- Abstract summary: Knowledge Base Question Answering (KBQA) systems have the goal of answering complex natural language questions by reasoning over relevant facts retrieved from Knowledge Bases (KB).
One of the major challenges faced by these systems is their inability to retrieve all relevant facts due to an incomplete KB and entity/relation linking errors.
We propose a novel approach where a targeted temporal fact extraction technique is used to assist KBQA whenever it fails to retrieve temporal facts from the KB.
- Score: 21.108609901224572
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge Base Question Answering (KBQA) systems have the goal of answering
complex natural language questions by reasoning over relevant facts retrieved
from Knowledge Bases (KB). One of the major challenges faced by these systems
is their inability to retrieve all relevant facts due to factors such as an
incomplete KB and entity/relation linking errors. In this paper, we address
this particular challenge for systems handling a specific category of questions
called temporal questions, where answer derivation involves reasoning over facts
asserting points/intervals of time for various events. We propose a novel
approach where a targeted temporal fact extraction technique is used to assist
KBQA whenever it fails to retrieve temporal facts from the KB. We use
$\lambda$-expressions of the questions to logically represent the component
facts and the reasoning steps needed to derive the answer. This allows us to
spot the facts that were not retrieved from the KB and generate textual
queries to extract them from the textual resources in an open-domain question
answering fashion. We evaluated our approach on a benchmark temporal question
answering dataset, using Wikidata as the KB and Wikipedia as the textual
resource. Experimental results show a significant $\sim$30\% relative
improvement in answer accuracy, demonstrating the effectiveness of our
approach.
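To make the pipeline concrete, below is a minimal, illustrative Python sketch of the idea described in the abstract. It is not the authors' implementation: all names (TemporalFact, TOY_KB, kb_lookup, toy_text_qa, answer_which_came_first) and the toy question are assumptions made purely for illustration. A list of component facts stands in for the $\lambda$-expression decomposition; a fact missing from the toy KB is recovered by issuing a textual query to a stub open-domain QA reader, and a simple date comparison plays the role of the temporal reasoning step.

# Illustrative sketch only; not the authors' code or API.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class TemporalFact:
    """One component fact asserting a point/interval of time for an event."""
    subject: str
    relation: str
    time: Optional[str] = None  # filled in once the fact is retrieved

# Toy stand-in for a KB such as Wikidata; one fact is deliberately missing
# to simulate KB incompleteness.
TOY_KB = {
    ("Barack Obama", "start of presidency"): "2009-01-20",
    # ("Angela Merkel", "start of chancellorship") is absent from the KB.
}

def kb_lookup(fact: TemporalFact) -> Optional[str]:
    """Stand-in for entity/relation linking plus a SPARQL-style KB query."""
    return TOY_KB.get((fact.subject, fact.relation))

def toy_text_qa(query: str) -> Optional[str]:
    """Stand-in for an open-domain QA reader over a textual resource such as Wikipedia."""
    corpus_answers = {
        "When was the start of chancellorship of Angela Merkel?": "2005-11-22",
    }
    return corpus_answers.get(query)

def answer_which_came_first(facts: List[TemporalFact],
                            text_qa: Callable[[str], Optional[str]]) -> Optional[str]:
    """Fill in missing temporal facts via targeted textual extraction,
    then apply a simple 'which came first' comparison as the reasoning step."""
    for fact in facts:
        fact.time = kb_lookup(fact)
        if fact.time is None:  # KB incomplete or linking error: fall back to text
            textual_query = f"When was the {fact.relation} of {fact.subject}?"
            fact.time = text_qa(textual_query)
    if any(f.time is None for f in facts):
        return None  # the question remains unanswerable
    earliest = min(facts, key=lambda f: f.time)  # ISO dates compare lexically
    return earliest.subject

if __name__ == "__main__":
    question_facts = [
        TemporalFact("Barack Obama", "start of presidency"),
        TemporalFact("Angela Merkel", "start of chancellorship"),
    ]
    # Expected output: "Angela Merkel" (the chancellorship began before the presidency)
    print(answer_which_came_first(question_facts, toy_text_qa))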
Related papers
- Open Domain Question Answering with Conflicting Contexts [55.739842087655774]
We find that as much as 25% of unambiguous, open domain questions can lead to conflicting contexts when retrieved using Google Search.
We ask our annotators to provide explanations for their selections of correct answers.
arXiv Detail & Related papers (2024-10-16T07:24:28Z)
- Question Answering in Natural Language: the Special Case of Temporal Expressions [0.0]
Our work aims to leverage a popular approach used for general question answering, answer extraction, in order to find answers to temporal questions within a paragraph.
To train our model, we propose a new dataset, inspired by SQuAD, specifically tailored to provide rich temporal information.
Our evaluation shows that a deep learning model trained to perform pattern matching, often used in general question answering, can be adapted to temporal question answering.
arXiv Detail & Related papers (2023-11-23T16:26:24Z)
- Open-Set Knowledge-Based Visual Question Answering with Inference Paths [79.55742631375063]
The purpose of Knowledge-Based Visual Question Answering (KB-VQA) is to provide a correct answer to the question with the aid of external knowledge bases.
We propose a new retriever-ranker paradigm for KB-VQA, Graph pATH rankER (GATHER for brevity).
Specifically, it comprises graph construction, pruning, and path-level ranking, which not only retrieves accurate answers but also provides inference paths that explain the reasoning process.
arXiv Detail & Related papers (2023-10-12T09:12:50Z)
- Do I have the Knowledge to Answer? Investigating Answerability of Knowledge Base Questions [25.13991044303459]
We create GrailQAbility, a new benchmark KBQA dataset with unanswerability.
Experimenting with three state-of-the-art KBQA models, we find that all three models suffer a drop in performance.
This underscores the need for further research in making KBQA systems robust to unanswerability.
arXiv Detail & Related papers (2022-12-20T12:00:26Z)
- DecAF: Joint Decoding of Answers and Logical Forms for Question Answering over Knowledge Bases [81.19499764899359]
We propose a novel framework DecAF that jointly generates both logical forms and direct answers.
DecAF achieves new state-of-the-art accuracy on WebQSP, FreebaseQA, and GrailQA benchmarks.
arXiv Detail & Related papers (2022-09-30T19:51:52Z)
- Asking the Right Questions in Low Resource Template Extraction [37.77304148934836]
We ask whether end users of TE systems can design these questions, and whether it is beneficial to involve an NLP practitioner in the process.
We propose a novel model to perform TE with prompts, and find it benefits from questions over other styles of prompts.
arXiv Detail & Related papers (2022-05-25T10:39:09Z)
- A Benchmark for Generalizable and Interpretable Temporal Question Answering over Knowledge Bases [67.33560134350427]
TempQA-WD is a benchmark dataset for temporal reasoning.
It is based on Wikidata, which is the most frequently curated, openly available knowledge base.
arXiv Detail & Related papers (2022-01-15T08:49:09Z)
- TempoQR: Temporal Question Reasoning over Knowledge Graphs [11.054877399064804]
This paper puts forth a comprehensive embedding-based framework for answering complex questions over Knowledge Graphs.
Our method, termed temporal question reasoning (TempoQR), exploits TKG embeddings to ground the question to the specific entities and time scope it refers to.
Experiments show that TempoQR improves accuracy by 25--45 percentage points on complex temporal questions over state-of-the-art approaches.
arXiv Detail & Related papers (2021-12-10T23:59:14Z)
- A Survey on Complex Question Answering over Knowledge Base: Recent Advances and Challenges [71.4531144086568]
Question Answering (QA) over Knowledge Base (KB) aims to automatically answer natural language questions.
Researchers have shifted their attention from simple questions to complex questions, which require more KB triples and constraint inference.
arXiv Detail & Related papers (2020-07-26T07:13:32Z)
- Faithful Embeddings for Knowledge Base Queries [97.5904298152163]
The deductive closure of an ideal knowledge base (KB) contains exactly the logical queries that the KB can answer.
In practice, KBs are both incomplete and over-specified, failing to answer some queries that have real-world answers.
We show that inserting this new QE module into a neural question-answering system leads to substantial improvements over the state-of-the-art.
arXiv Detail & Related papers (2020-04-07T19:25:16Z)