Toward Better Temporal Structures for Geopolitical Events Forecasting
- URL: http://arxiv.org/abs/2601.00430v1
- Date: Thu, 01 Jan 2026 18:45:07 GMT
- Title: Toward Better Temporal Structures for Geopolitical Events Forecasting
- Authors: Kian Ahrabian, Eric Boxer, Jay Pujara
- Abstract summary: We study a generalization of HTKGs, Hyper-Relational Temporal Knowledge Generalized Hypergraphs (HTKGHs). We first derive a formalization for HTKGHs, demonstrating their backward compatibility while supporting two complex types of facts commonly found in geopolitical incidents. We then introduce the htkgh-polecat dataset, built upon the global event database POLECAT.
- Score: 13.690434458053765
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Forecasting on geopolitical temporal knowledge graphs (TKGs) through the lens of large language models (LLMs) has recently gained traction. While TKGs and their generalization, hyper-relational temporal knowledge graphs (HTKGs), offer a straightforward structure to represent simple temporal relationships, they lack the expressive power to convey complex facts efficiently. One of the critical limitations of HTKGs is a lack of support for more than two primary entities in temporal facts, which commonly occur in real-world events. To address this limitation, in this work, we study a generalization of HTKGs, Hyper-Relational Temporal Knowledge Generalized Hypergraphs (HTKGHs). We first derive a formalization for HTKGHs, demonstrating their backward compatibility while supporting two complex types of facts commonly found in geopolitical incidents. Then, utilizing this formalization, we introduce the htkgh-polecat dataset, built upon the global event database POLECAT. Finally, we benchmark and analyze popular LLMs on the relation prediction task, providing insights into their adaptability and capabilities in complex forecasting scenarios.
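The core limitation the abstract names — a standard temporal fact caps out at two primary entities — can be pictured with a minimal data-structure sketch. The `Fact` class below is a hypothetical illustration of a generalized hyper-relational temporal fact, not the paper's actual formalization; all names are assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a standard TKG fact is (subject, relation, object,
# timestamp); an HTKG adds qualifier key-value pairs. The generalization
# sketched here further allows an arbitrary set of primary entities, as in
# multi-party geopolitical events.

@dataclass
class Fact:
    relation: str
    primary_entities: tuple[str, ...]   # two or more participants
    timestamp: str
    qualifiers: dict[str, str] = field(default_factory=dict)

# A trilateral consultation cannot be captured by one (s, r, o, t) quadruple:
summit = Fact(
    relation="Consult",
    primary_entities=("Country_A", "Country_B", "Country_C"),
    timestamp="2023-05-01",
    qualifiers={"location": "Geneva", "topic": "trade"},
)

print(len(summit.primary_entities))  # 3 participants in a single fact
```

A binary fact degrades gracefully to a two-element `primary_entities` tuple, which is the backward compatibility the abstract refers to.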
Related papers
- TKG-Thinker: Towards Dynamic Reasoning over Temporal Knowledge Graphs via Agentic Reinforcement Learning [22.089705008812217]
Temporal knowledge graph question answering (TKGQA) aims to answer time-sensitive questions by leveraging temporal knowledge bases. Current prompting strategies constrain their efficacy in two primary ways. We propose TKG-Thinker, a novel agent equipped with autonomous planning and adaptive retrieval capabilities.
arXiv Detail & Related papers (2026-02-05T16:08:36Z) - Not in Sync: Unveiling Temporal Bias in Audio Chat Models [59.146710538620816]
Large Audio Language Models (LALMs) are increasingly applied to audio understanding and multimodal reasoning. We present the first systematic study of temporal bias in LALMs, revealing a key limitation in their timestamp prediction.
arXiv Detail & Related papers (2025-10-14T06:29:40Z) - Respecting Temporal-Causal Consistency: Entity-Event Knowledge Graphs for Retrieval-Augmented Generation [69.45495166424642]
We develop a robust and discriminative QA benchmark to measure temporal, causal, and character consistency understanding in narrative documents. We then introduce Entity-Event RAG (E2RAG), a dual-graph framework that keeps separate entity and event subgraphs linked by a bipartite mapping. Across ChronoQA, our approach outperforms state-of-the-art unstructured and KG-based RAG baselines, with notable gains on causal and character consistency queries.
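The dual-graph idea described here — separate entity and event subgraphs joined by a bipartite mapping — can be sketched with plain dictionaries. This is a simplified, hypothetical illustration of the structure, not E2RAG's actual implementation; the names and edge semantics are assumptions.

```python
# Hypothetical sketch of a dual-graph index: an entity subgraph, an event
# subgraph (e.g. temporal/causal edges), and a bipartite mapping linking
# each event to the entities participating in it.

entity_graph = {"Alice": {"Bob"}, "Bob": {"Alice"}}       # entity-entity edges
event_graph = {"e1": {"e2"}, "e2": set()}                 # event-event edges
participates = {"e1": {"Alice", "Bob"}, "e2": {"Bob"}}    # bipartite mapping

def events_for_entity(entity: str) -> set[str]:
    """Cross from the entity side to the event side via the bipartite mapping."""
    return {ev for ev, ents in participates.items() if entity in ents}

print(sorted(events_for_entity("Bob")))  # ['e1', 'e2']
```

Keeping the two subgraphs separate lets a retriever answer entity-centric and event-centric queries independently, then join results through the mapping.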
arXiv Detail & Related papers (2025-06-06T10:07:21Z) - G2S: A General-to-Specific Learning Framework for Temporal Knowledge Graph Forecasting with Large Language Models [109.00835325199705]
We propose a General-to-Specific learning framework (G2S) to disentangle the learning processes of the above two kinds of knowledge. In the general learning stage, we mask the scenario information in different TKGs and convert it into anonymous temporal structures. In the specific learning stage, we inject the scenario information into the structures via either in-context learning or fine-tuning modes.
arXiv Detail & Related papers (2025-05-31T07:57:19Z) - CognTKE: A Cognitive Temporal Knowledge Extrapolation Framework [28.9250547012577]
Motivated by the Dual Process Theory in cognitive science, we propose a Cognitive Temporal Knowledge Extrapolation framework (CognTKE). CognTKE introduces a novel temporal cognitive relation directed graph (TCR-Digraph) and performs interpretable global shallow reasoning and local deep reasoning over the TCR-Digraph. The experimental results on four benchmark datasets demonstrate that CognTKE achieves significant improvement in accuracy compared to state-of-the-art baselines.
arXiv Detail & Related papers (2024-12-21T09:50:55Z) - UniHR: Hierarchical Representation Learning for Unified Knowledge Graph Link Prediction [59.84402324458322]
Real-world knowledge graphs (KGs) contain not only standard triple-based facts, but also more complex, heterogeneous types of facts. We propose UniHR, a learning framework that unifies hyper-relational KGs, temporal KGs, and nested factual KGs into triple-based representations. Experiments on 9 datasets across 5 types of KGs demonstrate the effectiveness of UniHR and highlight the strong potential of unified representations.
arXiv Detail & Related papers (2024-11-11T14:22:42Z) - Learning Granularity Representation for Temporal Knowledge Graph Completion [2.689675451882683]
Temporal Knowledge Graphs (TKGs) incorporate temporal information to reflect the dynamic structural knowledge and evolutionary patterns of real-world facts.
This paper proposes Learning Granularity Representation (termed $\mathsf{LGRe}$) for TKG completion.
It comprises two main components: Granularity Learning (GRL) and Adaptive Granularity Balancing (AGB).
arXiv Detail & Related papers (2024-08-27T08:19:34Z) - Large Language Models-guided Dynamic Adaptation for Temporal Knowledge Graph Reasoning [87.10396098919013]
Large Language Models (LLMs) have demonstrated extensive knowledge and remarkable proficiency in temporal reasoning. We propose a Large Language Models-guided Dynamic Adaptation (LLM-DA) method for reasoning on Temporal Knowledge Graphs. LLM-DA harnesses the capabilities of LLMs to analyze historical data and extract temporal logical rules.
arXiv Detail & Related papers (2024-05-23T04:54:37Z) - GenTKG: Generative Forecasting on Temporal Knowledge Graph with Large Language Models [35.594662986581746]
Large language models (LLMs) have ignited interest in the temporal knowledge graph (tKG) domain, where conventional embedding-based and rule-based methods dominate.
We propose a novel retrieval-augmented generation framework named GenTKG combining a temporal logical rule-based retrieval strategy and few-shot parameter-efficient instruction tuning.
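The retrieval side of such a framework can be pictured as selecting the most recent historical facts relevant to a query before handing them to an LLM as few-shot context. The snippet below is a simplified, hypothetical sketch of a temporal retrieval strategy, not GenTKG's actual rule-based retriever; the data and function names are assumptions.

```python
# Simplified sketch: retrieve the k most recent facts sharing the query's
# subject or relation, to serve as few-shot context in an LLM prompt.
# (GenTKG additionally learns temporal logical rules; omitted here.)

history = [
    ("A", "Consult", "B", 1),
    ("A", "Threaten", "C", 2),
    ("B", "Consult", "A", 3),
    ("C", "Aid", "A", 4),
]

def retrieve(query_subject: str, query_relation: str, k: int = 2):
    """Return the k most recent facts matching the query subject or relation."""
    relevant = [f for f in history
                if f[0] == query_subject or f[1] == query_relation]
    return sorted(relevant, key=lambda f: f[3], reverse=True)[:k]

context = retrieve("A", "Consult")
print(context)  # [('B', 'Consult', 'A', 3), ('A', 'Threaten', 'C', 2)]
```

Ranking by recency reflects the common inductive bias in TKG forecasting that recent facts are the most predictive of the next timestep.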
Experiments have shown that GenTKG outperforms conventional methods of temporal relational forecasting with low computation resources.
arXiv Detail & Related papers (2023-10-11T18:27:12Z) - Self-Supervised Temporal Graph learning with Temporal and Structural Intensity Alignment [53.72873672076391]
Temporal graph learning aims to generate high-quality representations for graph-based tasks with dynamic information.
We propose a self-supervised method called S2T for temporal graph learning, which extracts both temporal and structural information.
S2T achieves at most 10.13% performance improvement compared with the state-of-the-art competitors on several datasets.
arXiv Detail & Related papers (2023-02-15T06:36:04Z) - T-GAP: Learning to Walk across Time for Temporal Knowledge Graph Completion [13.209193437124881]
Temporal knowledge graphs (TKGs) inherently reflect the transient nature of real-world knowledge, as opposed to static knowledge graphs.
We propose T-GAP, a novel model for TKG completion that maximally utilizes both temporal information and graph structure in its encoder and decoder.
Our experiments demonstrate that T-GAP achieves superior performance against state-of-the-art baselines, and competently generalizes to queries with unseen timestamps.
arXiv Detail & Related papers (2020-12-19T04:45:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.