Improving Time Sensitivity for Question Answering over Temporal
Knowledge Graphs
- URL: http://arxiv.org/abs/2203.00255v1
- Date: Tue, 1 Mar 2022 06:21:14 GMT
- Title: Improving Time Sensitivity for Question Answering over Temporal
Knowledge Graphs
- Authors: Chao Shang, Guangtao Wang, Peng Qi, Jing Huang
- Abstract summary: We propose a time-sensitive question answering (TSQA) framework to tackle these problems.
TSQA features a timestamp estimation module to infer the unwritten timestamp from the question.
We also employ a time-sensitive KG encoder to inject ordering information into the temporal KG embeddings that TSQA is based on.
- Score: 13.906994055281826
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Question answering over temporal knowledge graphs (KGs) efficiently uses
facts contained in a temporal KG, which records entity relations and when they
occur in time, to answer natural language questions (e.g., "Who was the
president of the US before Obama?"). These questions often involve three
time-related challenges that previous work fails to adequately address: 1)
questions often do not specify exact timestamps of interest (e.g., "Obama"
instead of 2000); 2) subtle lexical differences in time relations (e.g.,
"before" vs "after"); 3) off-the-shelf temporal KG embeddings that previous
work builds on ignore the temporal order of timestamps, which is crucial for
answering temporal-order related questions. In this paper, we propose a
time-sensitive question answering (TSQA) framework to tackle these problems.
TSQA features a timestamp estimation module to infer the unwritten timestamp
from the question. We also employ a time-sensitive KG encoder to inject
ordering information into the temporal KG embeddings that TSQA is based on.
With the help of techniques to reduce the search space for potential answers,
TSQA significantly outperforms the previous state of the art on a new benchmark
for question answering over temporal KGs, especially achieving a 32% (absolute)
error reduction on complex questions that require multiple steps of reasoning
over facts in the temporal KG.
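The abstract describes the architecture only at a high level. As a rough illustration of what "injecting ordering information into the temporal KG embeddings" could look like, the sketch below trains timestamp embeddings with a pairwise margin loss along a learned ordering direction; the module and loss names are hypothetical assumptions, not the authors' implementation of TSQA.

```python
# Minimal sketch (assumption): one way to inject temporal ordering into
# timestamp embeddings via a pairwise margin loss. Illustrative only; not
# the TSQA authors' time-sensitive KG encoder.
import torch
import torch.nn as nn

class TimestampEncoder(nn.Module):
    def __init__(self, num_timestamps: int, dim: int):
        super().__init__()
        self.emb = nn.Embedding(num_timestamps, dim)
        # Learned direction along which later timestamps should score higher.
        self.order_direction = nn.Parameter(torch.randn(dim))

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        return self.emb(t)

    def ordering_loss(self, t_early: torch.Tensor, t_late: torch.Tensor,
                      margin: float = 0.1) -> torch.Tensor:
        # Project both timestamps onto the ordering direction and require the
        # later timestamp to land at least `margin` above the earlier one.
        proj_early = self.emb(t_early) @ self.order_direction
        proj_late = self.emb(t_late) @ self.order_direction
        return torch.relu(margin + proj_early - proj_late).mean()

encoder = TimestampEncoder(num_timestamps=100, dim=64)
t_early = torch.tensor([3, 10, 42])  # indices of earlier timestamps
t_late = torch.tensor([5, 20, 77])   # indices of the corresponding later ones
loss = encoder.ordering_loss(t_early, t_late)
loss.backward()  # gradients push embeddings toward a consistent time order
```

In the paper, this kind of ordering signal is combined with the timestamp estimation module and answer search-space pruning; the loss above only illustrates the ordering idea.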
Related papers
- PAT-Questions: A Self-Updating Benchmark for Present-Anchored Temporal Question-Answering [6.109188517569139]
We introduce the PAT-Questions benchmark, which includes single and multi-hop temporal questions.
The answers in PAT-Questions can be automatically refreshed by re-running SPARQL queries on a knowledge graph, if available.
We evaluate several state-of-the-art LLMs and a SOTA temporal reasoning model (TEMPREASON-T5) on PAT-Questions through direct prompting and retrieval-augmented generation (RAG).
arXiv Detail & Related papers (2024-02-16T19:26:09Z)
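The refresh-by-SPARQL idea described in the entry above can be illustrated with a minimal sketch: re-running a present-anchored query against the public Wikidata endpoint so that the answer always reflects the current state of the KG. The query and helper below are generic, hypothetical examples, not taken from the PAT-Questions benchmark.

```python
# Minimal sketch (assumption): refreshing a present-anchored answer by
# re-running a SPARQL query against Wikidata. The query is a generic
# illustration, not one of the benchmark's actual queries.
import requests

WIKIDATA_ENDPOINT = "https://query.wikidata.org/sparql"

# "Who is the current head of government of the United States (Q30)?"
QUERY = """
SELECT ?personLabel WHERE {
  wd:Q30 wdt:P6 ?person .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""

def refresh_answer(query: str) -> list:
    resp = requests.get(
        WIKIDATA_ENDPOINT,
        params={"query": query, "format": "json"},
        headers={"User-Agent": "pat-questions-refresh-demo/0.1"},
        timeout=30,
    )
    resp.raise_for_status()
    bindings = resp.json()["results"]["bindings"]
    return [b["personLabel"]["value"] for b in bindings]

print(refresh_answer(QUERY))  # the answer reflects the KG's current state
```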
- Joint Multi-Facts Reasoning Network For Complex Temporal Question Answering Over Knowledge Graph [34.44840297353777]
A Temporal Knowledge Graph (TKG) extends a regular knowledge graph by attaching a time scope to each fact.
We propose the Joint Multi-Facts Reasoning Network (JMFRN).
arXiv Detail & Related papers (2024-01-04T11:34:39Z)
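As a reminder of the data model shared by the TKG papers in this list, the sketch below represents temporal facts as quadruple-like records with a time scope and answers the running example question from the main abstract; the class and helper names are illustrative assumptions, not JMFRN's code.

```python
# Minimal sketch (assumption): temporal KG facts as (subject, relation,
# object, time scope) records. Illustrates the data model only; this is
# not JMFRN's implementation.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass(frozen=True)
class TemporalFact:
    subject: str
    relation: str
    obj: str
    start: date               # beginning of the fact's time scope
    end: Optional[date]       # None means the fact still holds

facts = [
    TemporalFact("George W. Bush", "president of", "USA",
                 date(2001, 1, 20), date(2009, 1, 20)),
    TemporalFact("Barack Obama", "president of", "USA",
                 date(2009, 1, 20), date(2017, 1, 20)),
]

def holds_at(fact: TemporalFact, day: date) -> bool:
    # A fact holds on `day` if the day falls inside its time scope.
    return fact.start <= day and (fact.end is None or day < fact.end)

# "Who was the president of the US before Obama?" -- a crude temporal-order
# lookup: find the fact whose scope ends exactly when Obama's begins.
obama = next(f for f in facts if f.subject == "Barack Obama")
print([f.subject for f in facts if f.end == obama.start])  # ['George W. Bush']
```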
- Frame-Subtitle Self-Supervision for Multi-Modal Video Question Answering [73.11017833431313]
Multi-modal video question answering aims to predict the correct answer and localize the temporal boundary relevant to the question.
We devise a weakly supervised question grounding (WSQG) setting, where only QA annotations are used.
We transform the correspondence between frames and subtitles to Frame-Subtitle (FS) self-supervision, which helps to optimize the temporal attention scores.
arXiv Detail & Related papers (2022-09-08T07:20:51Z)
- ForecastTKGQuestions: A Benchmark for Temporal Question Answering and Forecasting over Temporal Knowledge Graphs [28.434829347176233]
Question answering over temporal knowledge graphs (TKGQA) has recently found increasing interest.
TKGQA requires temporal reasoning techniques to extract the relevant information from temporal knowledge bases.
We propose a novel task: forecasting question answering over temporal knowledge graphs.
arXiv Detail & Related papers (2022-08-12T21:02:35Z)
- RealTime QA: What's the Answer Right Now? [137.04039209995932]
We introduce REALTIME QA, a dynamic question answering (QA) platform that announces questions and evaluates systems on a regular basis.
We build strong baseline models upon large pretrained language models, including GPT-3 and T5.
GPT-3 tends to return outdated answers when retrieved documents do not provide sufficient information to find an answer.
arXiv Detail & Related papers (2022-07-27T07:26:01Z)
- TempoQR: Temporal Question Reasoning over Knowledge Graphs [11.054877399064804]
This paper puts forth a comprehensive embedding-based framework for answering complex questions over Knowledge Graphs.
Our method termed temporal question reasoning (TempoQR) exploits TKG embeddings to ground the question to the specific entities and time scope it refers to.
Experiments show that TempoQR improves accuracy by 25-45 percentage points on complex temporal questions over state-of-the-art approaches.
arXiv Detail & Related papers (2021-12-10T23:59:14Z)
- Relation-Guided Pre-Training for Open-Domain Question Answering [67.86958978322188]
We propose a Relation-Guided Pre-Training (RGPT-QA) framework to answer complex open-domain questions.
We show that RGPT-QA achieves 2.2%, 2.4%, and 6.3% absolute improvement in Exact Match accuracy on Natural Questions, TriviaQA, and WebQuestions.
arXiv Detail & Related papers (2021-09-21T17:59:31Z)
- Complex Temporal Question Answering on Knowledge Graphs [22.996399822102575]
This work presents EXAQT, the first end-to-end system for answering complex temporal questions.
It answers natural language questions over knowledge graphs (KGs) in two stages, one geared towards high recall, the other towards precision at top ranks.
We evaluate EXAQT on TimeQuestions, a large dataset of 16k temporal questions compiled from a variety of general purpose KG-QA benchmarks.
arXiv Detail & Related papers (2021-09-18T13:41:43Z)
- A Dataset for Answering Time-Sensitive Questions [88.95075983560331]
Time is an important dimension in our physical world, and many facts evolve over time.
It is therefore important to consider the time dimension and empower existing QA models to reason over time.
Existing QA datasets contain rather few time-sensitive questions, making them unsuitable for diagnosing or benchmarking a model's temporal reasoning capability.
arXiv Detail & Related papers (2021-08-13T16:42:25Z)
- NExT-QA: Next Phase of Question-Answering to Explaining Temporal Actions [80.60423934589515]
We introduce NExT-QA, a rigorously designed video question answering (VideoQA) benchmark.
We set up multi-choice and open-ended QA tasks targeting causal action reasoning, temporal action reasoning, and common scene comprehension.
We find that top-performing methods excel at shallow scene descriptions but are weak in causal and temporal action reasoning.
arXiv Detail & Related papers (2021-05-18T04:56:46Z)
- TORQUE: A Reading Comprehension Dataset of Temporal Ordering Questions [91.85730323228833]
We introduce TORQUE, a new English reading comprehension benchmark built on 3.2k news snippets with 21k human-generated questions querying temporal relationships.
Results show that RoBERTa-large achieves an exact-match score of 51% on the test set of TORQUE, about 30% behind human performance.
arXiv Detail & Related papers (2020-05-01T06:29:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.