Joint Multi-Facts Reasoning Network For Complex Temporal Question
Answering Over Knowledge Graph
- URL: http://arxiv.org/abs/2401.02212v1
- Date: Thu, 4 Jan 2024 11:34:39 GMT
- Title: Joint Multi-Facts Reasoning Network For Complex Temporal Question
Answering Over Knowledge Graph
- Authors: Rikui Huang, Wei Wei, Xiaoye Qu, Wenfeng Xie, Xianling Mao, Dangyang
Chen
- Abstract summary: A Temporal Knowledge Graph (TKG) extends a regular knowledge graph by attaching a time scope to each fact.
We propose the Joint Multi-Facts Reasoning Network (JMFRN).
- Score: 34.44840297353777
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A Temporal Knowledge Graph (TKG) extends a regular knowledge graph
by attaching a time scope to each fact. Existing temporal knowledge graph
question answering (TKGQA) models handle only simple questions, owing to the
prior assumption that each question contains a single temporal fact with
explicit/implicit temporal constraints. Hence, they perform poorly on questions
that involve multiple temporal facts. In this paper, we propose the Joint
Multi-Facts Reasoning Network (JMFRN) to jointly reason over multiple temporal
facts and accurately answer complex temporal questions. Specifically, JMFRN
first retrieves question-related temporal facts from the TKG for each entity of
the given complex question. For joint reasoning, we design two different
attention modules (i.e., entity-aware and time-aware), which are suitable for
universal settings, to aggregate the entity and timestamp information of the
retrieved facts. Moreover, to filter out answers of the incorrect type, we
introduce an additional answer type discrimination task. Extensive experiments
demonstrate that our proposed method significantly outperforms the state of the
art on the well-known complex temporal question benchmark TimeQuestions.
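
The aggregation step described in the abstract can be pictured with a short, hedged sketch. This is not the authors' implementation: the module names (FactAttention, JointFactsSketch), tensor shapes, scaled dot-product scoring, and the two-way answer-type head are illustrative assumptions about how entity-aware and time-aware attention over retrieved facts, plus an answer type discrimination task, might fit together.

```python
# Minimal sketch (assumptions, not the paper's code): two question-conditioned
# attention modules aggregate the entity and timestamp embeddings of retrieved
# facts; an auxiliary head discriminates the answer type (entity vs. timestamp).
import torch
import torch.nn as nn
import torch.nn.functional as F


class FactAttention(nn.Module):
    """Attend over retrieved facts with a question-conditioned query."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj_q = nn.Linear(dim, dim)
        self.proj_k = nn.Linear(dim, dim)

    def forward(self, question: torch.Tensor, facts: torch.Tensor) -> torch.Tensor:
        # question: (batch, dim); facts: (batch, num_facts, dim)
        q = self.proj_q(question).unsqueeze(1)           # (batch, 1, dim)
        k = self.proj_k(facts)                           # (batch, num_facts, dim)
        scores = (q * k).sum(-1) / k.size(-1) ** 0.5     # scaled dot product
        weights = F.softmax(scores, dim=-1)              # (batch, num_facts)
        return (weights.unsqueeze(-1) * facts).sum(1)    # (batch, dim)


class JointFactsSketch(nn.Module):
    """Aggregate entity and timestamp information of retrieved facts separately,
    score candidate answers, and predict the answer type."""

    def __init__(self, dim: int):
        super().__init__()
        self.entity_attn = FactAttention(dim)   # entity-aware module
        self.time_attn = FactAttention(dim)     # time-aware module
        self.type_head = nn.Linear(2 * dim, 2)  # answer type discrimination

    def forward(self, question, fact_entity_emb, fact_time_emb, candidate_emb):
        ent_ctx = self.entity_attn(question, fact_entity_emb)    # (batch, dim)
        time_ctx = self.time_attn(question, fact_time_emb)       # (batch, dim)
        type_logits = self.type_head(torch.cat([ent_ctx, time_ctx], dim=-1))
        # Score candidates against the fused context (dot product, an assumption).
        fused = ent_ctx + time_ctx
        candidate_scores = torch.einsum("bd,bcd->bc", fused, candidate_emb)
        return candidate_scores, type_logits


if __name__ == "__main__":
    batch, num_facts, num_cands, dim = 2, 5, 7, 32
    model = JointFactsSketch(dim)
    scores, type_logits = model(
        torch.randn(batch, dim),
        torch.randn(batch, num_facts, dim),
        torch.randn(batch, num_facts, dim),
        torch.randn(batch, num_cands, dim),
    )
    print(scores.shape, type_logits.shape)  # torch.Size([2, 7]) torch.Size([2, 2])
```

Splitting the aggregation into separate entity-aware and time-aware modules mirrors the paper's description; the shared FactAttention block and the additive fusion of the two contexts are design choices made only for brevity here.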
Related papers
- ComplexTempQA: A Large-Scale Dataset for Complex Temporal Question Answering [24.046966640011124]
ComplexTempQA is a large-scale dataset consisting of over 100 million question-answer pairs.
The dataset covers questions spanning over two decades and offers an unmatched breadth of topics.
arXiv Detail & Related papers (2024-06-07T12:01:59Z) - Self-Improvement Programming for Temporal Knowledge Graph Question Answering [31.33908040172437]
Temporal Knowledge Graph Question Answering (TKGQA) aims to answer questions with temporal intent over Temporal Knowledge Graphs (TKGs).
Existing end-to-end methods implicitly model the time constraints by learning time-aware embeddings of questions and candidate answers.
We introduce a novel self-improvement programming method for TKGQA (Prog-TQA).
arXiv Detail & Related papers (2024-04-02T08:14:27Z) - Question Answering in Natural Language: the Special Case of Temporal
Expressions [0.0]
Our work aims to leverage a popular approach used for general question answering, answer extraction, in order to find answers to temporal questions within a paragraph.
To train our model, we propose a new dataset, inspired by SQuAD, specifically tailored to provide rich temporal information.
Our evaluation shows that a deep learning model trained to perform pattern matching, often used in general question answering, can be adapted to temporal question answering.
arXiv Detail & Related papers (2023-11-23T16:26:24Z) - Towards Robust Temporal Reasoning of Large Language Models via a Multi-Hop QA Dataset and Pseudo-Instruction Tuning [73.51314109184197]
It is crucial for large language models (LLMs) to understand the concept of temporal knowledge.
We propose a complex temporal question-answering dataset Complex-TR that focuses on multi-answer and multi-hop temporal reasoning.
arXiv Detail & Related papers (2023-11-16T11:49:29Z) - Once Upon a Time in Graph: Relative-Time
Pretraining for Complex Temporal Reasoning [96.03608822291136]
We make use of the underlying nature of time, and suggest creating a graph structure based on the relative placements of events along the time axis.
Inspired by the graph view, we propose RemeMo, which explicitly connects all temporally-scoped facts by modeling the time relations between any two sentences.
Experimental results show that RemeMo outperforms the baseline T5 on multiple temporal question answering datasets.
arXiv Detail & Related papers (2023-10-23T08:49:00Z) - Unlocking Temporal Question Answering for Large Language Models with Tailor-Made Reasoning Logic [84.59255070520673]
Large language models (LLMs) face a challenge when engaging in temporal reasoning.
We propose TempLogic, a novel framework designed specifically for temporal question-answering tasks.
arXiv Detail & Related papers (2023-05-24T10:57:53Z) - HiSMatch: Historical Structure Matching based Temporal Knowledge Graph
Reasoning [59.38797474903334]
This paper proposes the Historical Structure Matching (HiSMatch) model.
It applies two structure encoders to capture the semantic information contained in the historical structures of the query and candidate entities.
Experiments on six benchmark datasets demonstrate the significant improvement of the proposed HiSMatch model, with up to 5.6% performance improvement in MRR, compared to the state-of-the-art baselines.
arXiv Detail & Related papers (2022-10-18T09:39:26Z) - TempoQR: Temporal Question Reasoning over Knowledge Graphs [11.054877399064804]
This paper puts forth a comprehensive embedding-based framework for answering complex questions over Knowledge Graphs.
Our method, termed temporal question reasoning (TempoQR), exploits TKG embeddings to ground the question to the specific entities and time scope it refers to.
Experiments show that TempoQR improves accuracy by 25--45 percentage points on complex temporal questions over state-of-the-art approaches.
arXiv Detail & Related papers (2021-12-10T23:59:14Z) - Complex Temporal Question Answering on Knowledge Graphs [22.996399822102575]
This work presents EXAQT, the first end-to-end system for answering complex temporal questions.
It answers natural language questions over knowledge graphs (KGs) in two stages, one geared towards high recall, the other towards precision at top ranks.
We evaluate EXAQT on TimeQuestions, a large dataset of 16k temporal questions compiled from a variety of general purpose KG-QA benchmarks.
arXiv Detail & Related papers (2021-09-18T13:41:43Z) - Semantic Graphs for Generating Deep Questions [98.5161888878238]
We propose a novel framework that first constructs a semantic-level graph for the input document and then encodes the semantic graph by introducing an attention-based GGNN (Att-GGNN).
On the HotpotQA deep-question centric dataset, our model greatly improves performance on questions requiring reasoning over multiple facts, leading to state-of-the-art performance.
arXiv Detail & Related papers (2020-04-27T10:52:52Z)