Plan of Knowledge: Retrieval-Augmented Large Language Models for Temporal Knowledge Graph Question Answering
- URL: http://arxiv.org/abs/2511.04072v1
- Date: Thu, 06 Nov 2025 05:24:14 GMT
- Title: Plan of Knowledge: Retrieval-Augmented Large Language Models for Temporal Knowledge Graph Question Answering
- Authors: Xinying Qian, Ying Zhang, Yu Zhao, Baohang Zhou, Xuhui Sui, Xiaojie Yuan
- Abstract summary: Temporal Knowledge Graph Question Answering (TKGQA) aims to answer time-sensitive questions by leveraging factual information from Temporal Knowledge Graphs (TKGs).
- Score: 23.330273675675897
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Temporal Knowledge Graph Question Answering (TKGQA) aims to answer time-sensitive questions by leveraging factual information from Temporal Knowledge Graphs (TKGs). While previous studies have employed pre-trained TKG embeddings or graph neural networks to inject temporal knowledge, they fail to fully capture the complex semantics of time constraints. Recently, Large Language Models (LLMs) have shown remarkable progress, benefiting from their strong semantic understanding and reasoning generalization capabilities. However, their temporal reasoning ability remains limited, and they frequently suffer from hallucination and a lack of knowledge. To address these limitations, we propose the Plan of Knowledge framework with a contrastive temporal retriever, termed PoK. Specifically, the Plan of Knowledge module decomposes a complex temporal question into a sequence of sub-objectives drawn from pre-defined tools, which serve as intermediate guidance for reasoning exploration. In parallel, we construct a Temporal Knowledge Store (TKS) with a contrastive retrieval framework, enabling the model to selectively retrieve semantically and temporally aligned facts from TKGs. By combining structured planning with temporal knowledge retrieval, PoK effectively enhances the interpretability and factual consistency of temporal reasoning. Extensive experiments on four benchmark TKGQA datasets demonstrate that PoK significantly improves the retrieval precision and reasoning accuracy of LLMs, surpassing state-of-the-art TKGQA methods by up to 56.0%.
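To make the described pipeline concrete, the sketch below illustrates the two components in miniature: a planner that maps a temporal question to a sequence of sub-objectives drawn from pre-defined tools, and a toy Temporal Knowledge Store that scores candidate facts by combining semantic and temporal alignment. All identifiers here (TOOLS, TemporalKnowledgeStore, plan_question) and the scoring heuristics are illustrative assumptions, not the paper's actual implementation or API.

```python
# Minimal sketch of (1) plan decomposition over pre-defined tools and
# (2) semantically + temporally aligned fact retrieval.
# All names and heuristics are hypothetical, not taken from the PoK codebase.

from dataclasses import dataclass
from typing import List

# Hypothetical pre-defined tools a planner could choose sub-objectives from.
TOOLS = ["retrieve_facts", "filter_by_time", "rank_by_time", "answer"]

@dataclass
class Fact:
    subject: str
    relation: str
    obj: str
    start: int  # year the fact becomes valid
    end: int    # year the fact stops being valid

class TemporalKnowledgeStore:
    """Toy stand-in for the Temporal Knowledge Store (TKS)."""

    def __init__(self, facts: List[Fact]):
        self.facts = facts

    def semantic_score(self, question: str, fact: Fact) -> float:
        # Placeholder for the contrastive retriever's embedding similarity:
        # a simple token-overlap score keeps the sketch dependency-free.
        q_tokens = set(question.lower().split())
        f_tokens = set(f"{fact.subject} {fact.relation} {fact.obj}".lower().split())
        return len(q_tokens & f_tokens) / max(len(f_tokens), 1)

    def temporal_score(self, target_year: int, fact: Fact) -> float:
        # Highest when the target year falls inside the fact's validity span,
        # decaying with distance otherwise.
        if fact.start <= target_year <= fact.end:
            return 1.0
        gap = min(abs(target_year - fact.start), abs(target_year - fact.end))
        return 1.0 / (1.0 + gap)

    def retrieve(self, question: str, target_year: int, k: int = 3) -> List[Fact]:
        # Combine both signals and return the top-k facts.
        scored = [
            (self.semantic_score(question, f) + self.temporal_score(target_year, f), f)
            for f in self.facts
        ]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [f for _, f in scored[:k]]

def plan_question(question: str) -> List[str]:
    # In the real framework an LLM would produce this plan; the fixed
    # sequence below only illustrates the shape of the output.
    return ["retrieve_facts", "filter_by_time", "answer"]

if __name__ == "__main__":
    tks = TemporalKnowledgeStore([
        Fact("Angela Merkel", "hold position", "Chancellor of Germany", 2005, 2021),
        Fact("Olaf Scholz", "hold position", "Chancellor of Germany", 2021, 2025),
    ])
    question = "Who was Chancellor of Germany in 2010?"
    print("Plan:", plan_question(question))
    print("Top fact:", tks.retrieve(question, target_year=2010, k=1)[0])
```

In the actual framework the plan would be generated by an LLM and the retrieval scores produced by a contrastively trained encoder; the token-overlap and distance heuristics above only stand in for those learned components.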
Related papers
- TKG-Thinker: Towards Dynamic Reasoning over Temporal Knowledge Graphs via Agentic Reinforcement Learning [22.089705008812217]
Temporal knowledge graph question answering (TKGQA) aims to answer time-sensitive questions by leveraging temporal knowledge bases. Current prompting strategies constrain their efficacy in two primary ways. We propose TKG-Thinker, a novel agent equipped with autonomous planning and adaptive retrieval capabilities.
arXiv Detail & Related papers (2026-02-05T16:08:36Z) - Multi-hop Reasoning via Early Knowledge Alignment [68.28168992785896]
Early Knowledge Alignment (EKA) aims to align Large Language Models with contextually relevant retrieved knowledge. EKA significantly improves retrieval precision, reduces cascading errors, and enhances both performance and efficiency. EKA proves effective as a versatile, training-free inference strategy that scales seamlessly to large models.
arXiv Detail & Related papers (2025-12-23T08:14:44Z) - Plan Then Retrieve: Reinforcement Learning-Guided Complex Reasoning over Knowledge Graphs [52.16166558205338]
Graph-RFT is a novel two-stage reinforcement fine-tuning KGQA framework with a 'plan-KGsearch-and-Websearch-during-think' paradigm. It enables LLMs to perform autonomous planning and adaptive retrieval scheduling across KG and web sources under incomplete knowledge conditions.
arXiv Detail & Related papers (2025-10-23T16:04:13Z) - MemoTime: Memory-Augmented Temporal Knowledge Graph Enhanced Large Language Model Reasoning [22.89546852658161]
Temporal Knowledge Graphs offer a reliable source for temporal reasoning. Existing TKG-based LLM reasoning methods still struggle with four major challenges. We propose MemoTime, a memory-augmented temporal knowledge graph framework.
arXiv Detail & Related papers (2025-10-15T14:43:31Z) - It's High Time: A Survey of Temporal Question Answering [17.07150094603319]
Temporal Question Answering (TQA) focuses on answering questions involving temporal constraints or context. This survey reviews recent advances in TQA enabled by neural models and Large Language Models (LLMs), along with benchmark datasets and evaluation strategies designed to test temporal robustness, recency awareness, and generalization.
arXiv Detail & Related papers (2025-05-26T17:21:26Z) - Mixture Policy based Multi-Hop Reasoning over N-tuple Temporal Knowledge Graphs [67.52353093086151]
We introduce a new Reinforcement Learning-based method, named MT-Path, which leverages temporal information to traverse historical n-tuples and construct a temporal reasoning path. Experimental results demonstrate the effectiveness and explainability of MT-Path.
arXiv Detail & Related papers (2025-05-19T07:20:33Z) - Enhancing Temporal Sensitivity and Reasoning for Time-Sensitive Question Answering [23.98067169669452]
Time-Sensitive Question Answering (TSQA) demands the effective utilization of specific temporal contexts.
We propose a novel framework that enhances temporal awareness and reasoning through Temporal Information-Aware Embedding and Granular Contrastive Reinforcement Learning.
arXiv Detail & Related papers (2024-09-25T13:13:21Z) - Large Language Models-guided Dynamic Adaptation for Temporal Knowledge Graph Reasoning [87.10396098919013]
Large Language Models (LLMs) have demonstrated extensive knowledge and remarkable proficiency in temporal reasoning. We propose a Large Language Models-guided Dynamic Adaptation (LLM-DA) method for reasoning on Temporal Knowledge Graphs. LLM-DA harnesses the capabilities of LLMs to analyze historical data and extract temporal logical rules.
arXiv Detail & Related papers (2024-05-23T04:54:37Z) - Two-stage Generative Question Answering on Temporal Knowledge Graph Using Large Language Models [24.417129499480975]
This paper first proposes a novel generative temporal knowledge graph question answering framework, GenTKGQA.
First, we exploit the LLM's intrinsic knowledge to mine temporal constraints and structural links in the questions without extra training.
Next, we design virtual knowledge indicators to fuse the graph neural network signals of the subgraph and the text representations of the LLM in a non-shallow way.
arXiv Detail & Related papers (2024-02-26T13:47:09Z) - Towards Robust Temporal Reasoning of Large Language Models via a Multi-Hop QA Dataset and Pseudo-Instruction Tuning [73.51314109184197]
It is crucial for large language models (LLMs) to understand the concept of temporal knowledge.
We propose a complex temporal question-answering dataset Complex-TR that focuses on multi-answer and multi-hop temporal reasoning.
arXiv Detail & Related papers (2023-11-16T11:49:29Z) - Unlocking Temporal Question Answering for Large Language Models with Tailor-Made Reasoning Logic [84.59255070520673]
Large language models (LLMs) face a challenge when engaging in temporal reasoning.
We propose TempLogic, a novel framework designed specifically for temporal question-answering tasks.
arXiv Detail & Related papers (2023-05-24T10:57:53Z)