TimelineKGQA: A Comprehensive Question-Answer Pair Generator for Temporal Knowledge Graphs
- URL: http://arxiv.org/abs/2501.04343v1
- Date: Wed, 08 Jan 2025 08:30:44 GMT
- Title: TimelineKGQA: A Comprehensive Question-Answer Pair Generator for Temporal Knowledge Graphs
- Authors: Qiang Sun, Sirui Li, Du Huynh, Mark Reynolds, Wei Liu
- Abstract summary: We propose a novel categorization framework based on timeline-context relationships. TimelineKGQA is a universal temporal QA generator applicable to any TKGs.
- Score: 11.496509633886161
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Question answering over temporal knowledge graphs (TKGs) is crucial for understanding evolving facts and relationships, yet its development is hindered by limited datasets and difficulties in generating custom QA pairs. We propose a novel categorization framework based on timeline-context relationships, along with TimelineKGQA, a universal temporal QA generator applicable to any TKG. The code is available at https://github.com/PascalSun/TimelineKGQA as an open-source Python package.
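To make the idea of a temporal QA generator concrete, the sketch below derives template-based QA pairs from a single time-scoped fact. This is a minimal, hypothetical illustration assuming a (subject, relation, object, start, end) fact format; it is not the actual TimelineKGQA package API, which also produces multi-fact, timeline-context questions.

```python
from dataclasses import dataclass

@dataclass
class TemporalFact:
    """A time-scoped fact as commonly stored in a TKG."""
    subject: str
    relation: str
    obj: str
    start: str  # ISO date string
    end: str


def generate_qa_pairs(fact: TemporalFact) -> list[tuple[str, str]]:
    """Naive template-based QA generation over one time-scoped fact.

    Illustrative sketch only: real generators vary question phrasing
    and combine multiple facts along the timeline.
    """
    span = f"between {fact.start} and {fact.end}"
    return [
        # Ask for the time scope given the full fact.
        (f"During which period did ({fact.subject}, {fact.relation}, {fact.obj}) hold?",
         f"{fact.start} to {fact.end}"),
        # Ask for the object given subject, relation, and time scope.
        (f"{fact.subject} {fact.relation} whom or what {span}?", fact.obj),
        # Ask for the subject given relation, object, and time scope.
        (f"Who or what {fact.relation} {fact.obj} {span}?", fact.subject),
    ]


fact = TemporalFact("Angela Merkel", "held the office of",
                    "Chancellor of Germany", "2005-11-22", "2021-12-08")
for question, answer in generate_qa_pairs(fact):
    print(question, "->", answer)
```

Each template fixes some components of the quadruple and asks for the rest, which is the basic mechanism behind single-fact temporal questions; timeline-context categories then arise from relating two or more such facts.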
Related papers
- The benefits of query-based KGQA systems for complex and temporal questions in LLM era [55.20230501807337]
Large language models excel at question answering (QA) yet still struggle with multi-hop reasoning and temporal questions. Query-based knowledge graph QA (KGQA) offers a modular alternative by generating executable queries instead of direct answers. We propose a multi-stage query-based framework for WikiData QA that enhances performance on challenging multi-hop and temporal benchmarks.
arXiv Detail & Related papers (2025-07-16T06:41:03Z) - Evaluating List Construction and Temporal Understanding capabilities of Large Language Models [54.39278049092508]
Large Language Models (LLMs) are susceptible to hallucinations and errors, particularly on temporal understanding tasks. We propose the Time referenced List based Question Answering (TLQA) benchmark, which requires structured answers in list format aligned with corresponding time periods. We investigate the temporal understanding and list construction capabilities of state-of-the-art generative models on TLQA in closed-book and open-domain settings.
arXiv Detail & Related papers (2025-06-26T21:40:58Z) - Respecting Temporal-Causal Consistency: Entity-Event Knowledge Graphs for Retrieval-Augmented Generation [69.45495166424642]
We develop ChronoQA, a robust and discriminative QA benchmark that measures temporal, causal, and character consistency understanding in narrative documents. We then introduce Entity-Event RAG (E2RAG), a dual-graph framework that keeps separate entity and event subgraphs linked by a bipartite mapping. Across ChronoQA, our approach outperforms state-of-the-art unstructured and KG-based RAG baselines, with notable gains on causal and character consistency queries.
arXiv Detail & Related papers (2025-06-06T10:07:21Z) - TimeLogic: A Temporal Logic Benchmark for Video QA [64.32208175236323]
We introduce the TimeLogic QA (TLQA) framework to automatically generate temporal logical questions.
We leverage 4 datasets, STAR, Breakfast, AGQA, and CrossTask, and generate 2k and 10k QA pairs for each category.
We assess the VideoQA model's temporal reasoning performance on 16 categories of temporal logic with varying temporal complexity.
arXiv Detail & Related papers (2025-01-13T11:12:59Z) - Self-Improvement Programming for Temporal Knowledge Graph Question Answering [31.33908040172437]
Temporal Knowledge Graph Question Answering (TKGQA) aims to answer questions with temporal intent over Temporal Knowledge Graphs (TKGs).
Existing end-to-end methods implicitly model the time constraints by learning time-aware embeddings of questions and candidate answers.
We introduce a novel self-improvement programming method for TKGQA (Prog-TQA).
arXiv Detail & Related papers (2024-04-02T08:14:27Z) - Automatic Question-Answer Generation for Long-Tail Knowledge [65.11554185687258]
We propose an automatic approach to generate specialized QA datasets for tail entities.
We conduct extensive experiments by employing pretrained LLMs on our newly generated long-tail QA datasets.
arXiv Detail & Related papers (2024-03-03T03:06:31Z) - Joint Multi-Facts Reasoning Network For Complex Temporal Question Answering Over Knowledge Graph [34.44840297353777]
A Temporal Knowledge Graph (TKG) extends a regular knowledge graph by attaching a time scope to each fact.
We propose the Joint Multi-Facts Reasoning Network (JMFRN).
arXiv Detail & Related papers (2024-01-04T11:34:39Z) - Once Upon a Time in Graph: Relative-Time Pretraining for Complex Temporal Reasoning [96.03608822291136]
We exploit the underlying nature of time and create a graph structure based on the relative placements of events along the time axis.
Inspired by the graph view, we propose RemeMo, which explicitly connects all temporally-scoped facts by modeling the time relations between any two sentences.
Experimental results show that RemeMo outperforms the baseline T5 on multiple temporal question answering datasets.
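The relative-time idea above can be made concrete with a small sketch that classifies how two time-scoped facts sit on the timeline relative to each other. This is an illustrative coarse subset of Allen's interval relations (before, after, overlap), not RemeMo's actual pretraining code, which models such relations between sentence pairs.

```python
from datetime import date


def relative_time_relation(span_a: tuple[date, date],
                           span_b: tuple[date, date]) -> str:
    """Classify the relative placement of two closed time spans.

    Returns "before" if span_a ends strictly before span_b starts,
    "after" in the mirrored case, and "overlap" otherwise.
    """
    a_start, a_end = span_a
    b_start, b_end = span_b
    if a_end < b_start:
        return "before"
    if b_end < a_start:
        return "after"
    return "overlap"


# Two historical time scopes (illustrative example facts).
ww2 = (date(1939, 9, 1), date(1945, 9, 2))
cold_war = (date(1947, 3, 12), date(1991, 12, 26))
print(relative_time_relation(ww2, cold_war))  # -> before
```

Labels of this kind, computed for every pair of time-scoped facts, are exactly the supervision signal that a relative-time pretraining objective can exploit.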
arXiv Detail & Related papers (2023-10-23T08:49:00Z) - TwiRGCN: Temporally Weighted Graph Convolution for Question Answering over Temporal Knowledge Graphs [35.50055476282997]
We show how to generalize relational graph convolutional networks (RGCN) for temporal question answering (QA).
We propose a novel, intuitive and interpretable scheme to modulate the messages passed through a KG edge during convolution.
We evaluate the resulting system, which we call TwiRGCN, on TimeQuestions, a recently released, challenging dataset for complex temporal QA.
arXiv Detail & Related papers (2022-10-12T15:03:49Z) - ForecastTKGQuestions: A Benchmark for Temporal Question Answering and Forecasting over Temporal Knowledge Graphs [28.434829347176233]
Question answering over temporal knowledge graphs (TKGQA) has recently attracted increasing interest.
TKGQA requires temporal reasoning techniques to extract the relevant information from temporal knowledge bases.
We propose a novel task: forecasting question answering over temporal knowledge graphs.
arXiv Detail & Related papers (2022-08-12T21:02:35Z) - TempoQR: Temporal Question Reasoning over Knowledge Graphs [11.054877399064804]
This paper puts forth a comprehensive embedding-based framework for answering complex questions over Knowledge Graphs.
Our method termed temporal question reasoning (TempoQR) exploits TKG embeddings to ground the question to the specific entities and time scope it refers to.
Experiments show that TempoQR improves accuracy by 25 to 45 percentage points on complex temporal questions over state-of-the-art approaches.
arXiv Detail & Related papers (2021-12-10T23:59:14Z) - Relation-Guided Pre-Training for Open-Domain Question Answering [67.86958978322188]
We propose a Relation-Guided Pre-Training (RGPT-QA) framework to solve complex open-domain questions.
We show that RGPT-QA achieves absolute improvements of 2.2%, 2.4%, and 6.3% in Exact Match accuracy on Natural Questions, TriviaQA, and WebQuestions, respectively.
arXiv Detail & Related papers (2021-09-21T17:59:31Z) - A Dataset for Answering Time-Sensitive Questions [88.95075983560331]
Time is an important dimension in our physical world, and many facts evolve with respect to time.
It is important to consider the time dimension and empower existing QA models to reason over time.
Existing QA datasets contain rather few time-sensitive questions, and are hence not suitable for diagnosing or benchmarking a model's temporal reasoning capability.
arXiv Detail & Related papers (2021-08-13T16:42:25Z) - Connecting the Dots: A Knowledgeable Path Generator for Commonsense Question Answering [50.72473345911147]
This paper augments a general commonsense QA framework with a knowledgeable path generator.
By extrapolating over existing paths in a KG with a state-of-the-art language model, our generator learns to connect a pair of entities in text with a dynamic, and potentially novel, multi-hop relational path.
arXiv Detail & Related papers (2020-05-02T03:53:21Z) - Semantic Graphs for Generating Deep Questions [98.5161888878238]
We propose a novel framework which first constructs a semantic-level graph for the input document and then encodes the semantic graph by introducing an attention-based GGNN (Att-GGNN).
On the deep-question-centric HotpotQA dataset, our model greatly improves performance on questions requiring reasoning over multiple facts, achieving state-of-the-art performance.
arXiv Detail & Related papers (2020-04-27T10:52:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.