LLM-Guided Knowledge Distillation for Temporal Knowledge Graph Reasoning
- URL: http://arxiv.org/abs/2602.14428v1
- Date: Mon, 16 Feb 2026 03:27:50 GMT
- Title: LLM-Guided Knowledge Distillation for Temporal Knowledge Graph Reasoning
- Authors: Wang Xing, Wei Song, Siyu Lin, Chen Wu, Man Wang
- Abstract summary: We propose an LLM-assisted distillation framework specifically designed for temporal knowledge graph reasoning. The proposed approach consistently improves link prediction performance over strong distillation baselines. The results highlight the potential of large language models as effective teachers for transferring temporal reasoning capability to resource-efficient TKG systems.
- Score: 8.96967435213864
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Temporal knowledge graphs (TKGs) support reasoning over time-evolving facts, yet state-of-the-art models are often computationally heavy and costly to deploy. Existing compression and distillation techniques are largely designed for static graphs; directly applying them to temporal settings may overlook time-dependent interactions and lead to performance degradation. We propose an LLM-assisted distillation framework specifically designed for temporal knowledge graph reasoning. Beyond a conventional high-capacity temporal teacher, we incorporate a large language model as an auxiliary instructor to provide enriched supervision. The LLM supplies broad background knowledge and temporally informed signals, enabling a lightweight student to better model event dynamics without increasing inference-time complexity. Training is conducted by jointly optimizing supervised and distillation objectives, using a staged alignment strategy to progressively integrate guidance from both teachers. Extensive experiments on multiple public TKG benchmarks with diverse backbone architectures demonstrate that the proposed approach consistently improves link prediction performance over strong distillation baselines, while maintaining a compact and efficient student model. The results highlight the potential of large language models as effective teachers for transferring temporal reasoning capability to resource-efficient TKG systems.
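As a concrete illustration of the training objective described above, the sketch below combines a supervised link-prediction loss with soft-label distillation terms from the two teachers. It is a minimal sketch under stated assumptions, not the authors' released code: the names (joint_loss, temporal_teacher_logits, llm_teacher_logits) and the weights alpha, beta, tau are hypothetical, and the staged alignment strategy is only hinted at via a comment on scheduling the weights.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, tau=2.0):
    """Standard soft-label distillation: KL divergence between the
    temperature-softened teacher and student distributions over
    candidate entities, scaled by tau^2 as in Hinton-style KD."""
    s = F.log_softmax(student_logits / tau, dim=-1)
    t = F.softmax(teacher_logits / tau, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * tau ** 2

def joint_loss(student_logits, labels,
               temporal_teacher_logits, llm_teacher_logits,
               alpha=0.5, beta=0.3, tau=2.0):
    """Supervised link-prediction loss plus distillation terms from a
    high-capacity temporal teacher and an auxiliary LLM instructor.
    A staged alignment could be realized by scheduling alpha and beta
    over training, e.g. emphasizing the temporal teacher early and the
    LLM signal later (hypothetical schedule, not specified in the paper)."""
    ce = F.cross_entropy(student_logits, labels)
    kd_temporal = distillation_loss(student_logits, temporal_teacher_logits, tau)
    kd_llm = distillation_loss(student_logits, llm_teacher_logits, tau)
    return ce + alpha * kd_temporal + beta * kd_llm
```

In practice the LLM instructor's logits over candidate entities might be obtained by prompting the model to score candidates; the abstract does not specify this interface, so the sketch treats both teachers' outputs as given tensors.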
Related papers
- TKG-Thinker: Towards Dynamic Reasoning over Temporal Knowledge Graphs via Agentic Reinforcement Learning [22.089705008812217]
Temporal knowledge graph question answering (TKGQA) aims to answer time-sensitive questions by leveraging temporal knowledge bases. Current prompting strategies constrain the efficacy of such systems in two primary ways. We propose TKG-Thinker, a novel agent equipped with autonomous planning and adaptive retrieval capabilities.
arXiv Detail & Related papers (2026-02-05T16:08:36Z)
- Knowledge Distillation for Temporal Knowledge Graph Reasoning with Large Language Models [8.46502493796591]
Reasoning over temporal knowledge graphs (TKGs) is fundamental to improving the efficiency and reliability of intelligent decision-making systems. Existing TKG reasoning models typically rely on large parameter sizes and intensive computation. We propose a distillation framework specifically tailored for temporal knowledge graph reasoning.
arXiv Detail & Related papers (2026-01-01T04:38:00Z)
- Thinking with Drafts: Speculative Temporal Reasoning for Efficient Long Video Understanding [56.7383554589569]
Long video understanding is essential for human-like intelligence, enabling coherent perception and reasoning over extended temporal contexts. We propose SpecTemp, a reinforcement learning-based Speculative Temporal reasoning framework. We show that SpecTemp not only maintains competitive accuracy but also significantly accelerates inference compared with existing thinking-with-frames methods.
arXiv Detail & Related papers (2025-11-30T09:27:59Z)
- Towards Foundation Model on Temporal Knowledge Graph Reasoning [17.165969719351125]
Temporal Knowledge Graphs (TKGs) store temporal facts as quadruples (s, p, o, t). The new model employs sinusoidal positional encodings to capture fine-grained temporal patterns (see the time-encoding sketch after this list). PostRA demonstrates strong zero-shot performance on unseen temporal knowledge graphs.
arXiv Detail & Related papers (2025-06-04T09:19:49Z)
- StPR: Spatiotemporal Preservation and Routing for Exemplar-Free Video Class-Incremental Learning [79.44594332189018]
Video Class-Incremental Learning (VCIL) seeks to develop models that continuously learn new action categories over time without forgetting previously acquired knowledge. Existing approaches either rely on stored exemplars, raising concerns over memory and privacy, or adapt static image-based methods that neglect temporal modeling. We propose a unified and exemplar-free VCIL framework that explicitly disentangles and preserves spatiotemporal information.
arXiv Detail & Related papers (2025-05-20T06:46:51Z)
- Efficient Multivariate Time Series Forecasting via Calibrated Language Models with Privileged Knowledge Distillation [25.23821206253495]
TimeKD aims to generate high-quality future representations from the proposed cross-modality teacher model. To cultivate an effective student model, we propose an innovative privileged knowledge distillation (PKD) mechanism.
arXiv Detail & Related papers (2025-05-04T14:57:42Z)
- Learning from Stochastic Teacher Representations Using Student-Guided Knowledge Distillation [64.15918654558816]
A self-distillation (SSD) training strategy is introduced to filter and weight teacher representations, so that the student distills only from task-relevant representations. Experimental results on real-world affective computing, wearable/biosignal datasets from the UCR Archive, the HAR dataset, and image classification datasets show that the proposed SSD method can outperform state-of-the-art methods.
arXiv Detail & Related papers (2025-04-19T14:08:56Z)
- Integrate Temporal Graph Learning into LLM-based Temporal Knowledge Graph Model [48.15492235240126]
Temporal Knowledge Graph Forecasting aims to predict future events based on the observed events in history. Existing methods have integrated retrieved historical facts or static graph representations into Large Language Models (LLMs). We propose a novel framework, TGL-LLM, to integrate temporal graph learning into LLM-based temporal knowledge graph models.
arXiv Detail & Related papers (2025-01-21T06:12:49Z)
- Large Language Models-guided Dynamic Adaptation for Temporal Knowledge Graph Reasoning [87.10396098919013]
Large Language Models (LLMs) have demonstrated extensive knowledge and remarkable proficiency in temporal reasoning. We propose a Large Language Models-guided Dynamic Adaptation (LLM-DA) method for reasoning on Temporal Knowledge Graphs. LLM-DA harnesses the capabilities of LLMs to analyze historical data and extract temporal logical rules.
arXiv Detail & Related papers (2024-05-23T04:54:37Z)
- Improving Long-Horizon Imitation Through Instruction Prediction [93.47416552953075]
In this work, we explore the use of an often unused source of auxiliary supervision: language.
Inspired by recent advances in transformer-based models, we train agents with an instruction prediction loss that encourages learning temporally extended representations that operate at a high level of abstraction.
In further analysis we find that instruction modeling is most important for tasks that require complex reasoning, while understandably offering smaller gains in environments that require simple plans.
arXiv Detail & Related papers (2023-06-21T20:47:23Z)
- Temporal Knowledge Graph Reasoning with Low-rank and Model-agnostic Representations [1.8262547855491458]
We introduce Time-LowFER, a family of parameter-efficient and time-aware extensions of the low-rank tensor factorization model LowFER.
Noting several limitations in current approaches to represent time, we propose a cycle-aware time-encoding scheme for time features (see the cycle-encoding sketch after this list).
We implement our methods in a unified temporal knowledge graph embedding framework, focusing on time-sensitive data processing.
arXiv Detail & Related papers (2022-04-10T22:24:11Z)
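The foundation-model entry above represents facts as quadruples (s, p, o, t) and mentions sinusoidal positional encodings for fine-grained temporal patterns. Below is a minimal, generic sketch of such an encoding in the standard transformer style, not PostRA's actual implementation; the function name, the integer timestamp indices, and the dimension of 64 are illustrative assumptions.

```python
import math
import torch

def sinusoidal_time_encoding(timestamps: torch.Tensor, dim: int = 64) -> torch.Tensor:
    """Map integer timestamp indices (e.g., day offsets) to sinusoidal
    vectors, as in transformer positional encodings. Even dimensions use
    sine, odd dimensions use cosine, with geometrically spaced
    frequencies; dim must be even."""
    positions = timestamps.float().unsqueeze(-1)          # (..., 1)
    idx = torch.arange(0, dim, 2, dtype=torch.float32)    # (dim/2,)
    freqs = torch.exp(-math.log(10000.0) * idx / dim)     # (dim/2,)
    angles = positions * freqs                            # (..., dim/2)
    enc = torch.zeros(*timestamps.shape, dim)
    enc[..., 0::2] = torch.sin(angles)
    enc[..., 1::2] = torch.cos(angles)
    return enc

# A TKG fact (s, p, o, t) would pair entity/relation embeddings with this
# time vector, e.g. score(s, p, o, t) = f(e_s, e_p, e_o, sinusoidal_time_encoding(t)).
```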
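The Time-LowFER entry mentions a cycle-aware time-encoding scheme for time features. One common way to make calendar cycles explicit is to place each periodic component of a timestamp on the unit circle, so that the last step of a cycle sits next to the first. The following is a hedged sketch of that general idea, not the paper's exact scheme; the choice of weekly, monthly, and yearly periods is an assumption.

```python
import math
import torch

def cycle_aware_time_features(day_index: torch.Tensor) -> torch.Tensor:
    """Encode each calendar cycle of an integer day index as a point on
    the unit circle, so that e.g. day 6 and day 0 of a week are close.
    The cycles shown (week, month, year) are illustrative; the actual
    features used by Time-LowFER may differ."""
    feats = []
    for period in (7.0, 30.0, 365.0):   # weekly, monthly, yearly cycles
        phase = 2.0 * math.pi * (day_index.float() % period) / period
        feats.append(torch.sin(phase))
        feats.append(torch.cos(phase))
    return torch.stack(feats, dim=-1)   # (..., 6)
```

Such features could then feed a low-rank factorization scorer of the LowFER family; the coupling between the time features and the factorization is left abstract here because the snippet does not specify it.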