Chain-of-History Reasoning for Temporal Knowledge Graph Forecasting
- URL: http://arxiv.org/abs/2402.14382v2
- Date: Fri, 7 Jun 2024 08:15:18 GMT
- Title: Chain-of-History Reasoning for Temporal Knowledge Graph Forecasting
- Authors: Yuwei Xia, Ding Wang, Qiang Liu, Liang Wang, Shu Wu, Xiaoyu Zhang
- Abstract summary: Temporal Knowledge Graph (TKG) forecasting aims to predict future facts based on given histories.
Most recent graph-based models excel at capturing structural information within TKGs but lack semantic comprehension abilities.
We propose Chain-of-History (CoH) reasoning, which explores high-order histories step by step.
- Score: 32.711428457485596
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Temporal Knowledge Graph (TKG) forecasting aims to predict future facts based on given histories. Most recent graph-based models excel at capturing structural information within TKGs but lack semantic comprehension abilities. With the recent surge of LLMs, LLM-based TKG prediction models have emerged. However, existing LLM-based models exhibit three shortcomings: (1) They focus only on first-order history while ignoring high-order historical information, so the information provided to the LLM is extremely limited. (2) LLMs struggle to reason well under heavy historical information loads. (3) For TKG prediction, the temporal reasoning capability of LLMs alone is limited. To address the first two challenges, we propose Chain-of-History (CoH) reasoning, which explores high-order histories step by step, enabling LLMs to make effective use of high-order historical information for TKG prediction. To address the third issue, we design CoH as a plug-and-play module that enhances the performance of graph-based models for TKG prediction. Extensive experiments on three datasets and backbones demonstrate the effectiveness of CoH.
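To make the idea concrete, here is a minimal Python sketch of hop-by-hop history exploration in the spirit of CoH. It assumes a simple in-memory list of (subject, relation, object, timestamp) facts, and the recency-based selection merely stands in for the LLM's relevance judgment; all names (`Fact`, `chain_of_history`, `hops`, `keep`) are illustrative, not from the paper.

```python
from collections import namedtuple

Fact = namedtuple("Fact", "subject relation object timestamp")

def chain_of_history(facts, query_entity, query_time, hops=2, keep=5):
    """Explore histories hop by hop: start from first-order facts about the
    query entity, then expand through the entities those facts introduce."""
    frontier = {query_entity}
    chain = []
    for _ in range(hops):
        # Facts that touch the current frontier and precede the query time.
        candidates = [f for f in facts
                      if f.timestamp < query_time
                      and (f.subject in frontier or f.object in frontier)
                      and f not in chain]
        # Stand-in for the LLM judging which candidates matter; here we
        # keep the most recent ones (an assumption, not the paper's method).
        selected = sorted(candidates, key=lambda f: f.timestamp, reverse=True)[:keep]
        if not selected:
            break
        chain.extend(selected)
        frontier = {e for f in selected for e in (f.subject, f.object)}
    return chain

facts = [
    Fact("A", "meets", "B", 1),
    Fact("B", "visits", "C", 2),
    Fact("C", "sanctions", "D", 3),
]
print(chain_of_history(facts, "A", query_time=4))  # first- and second-order facts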
Related papers
- Ignite Forecasting with SPARK: An Efficient Generative Framework for Refining LLMs in Temporal Knowledge Graph Forecasting [13.402856325579236]
We introduce SPARK, a Sequence-level Proxy framework for refining Large Language Models in TKG forecasting.
Inspired by inference-time algorithms, SPARK offers a cost-effective, plug-and-play solution through two key innovations.
Experiments across diverse datasets validate SPARK's forecasting performance, robust generalization capabilities, and high efficiency.
arXiv Detail & Related papers (2025-03-27T03:02:02Z)
- Integrate Temporal Graph Learning into LLM-based Temporal Knowledge Graph Model [48.15492235240126]
Temporal Knowledge Graph Forecasting aims to predict future events based on the observed events in history.
Existing methods have integrated retrieved historical facts or static graph representations into Large Language Models (LLMs).
We propose a novel framework, TGL-LLM, to integrate temporal graph learning into an LLM-based temporal knowledge graph model.
arXiv Detail & Related papers (2025-01-21T06:12:49Z)
- Is Large Language Model Good at Triple Set Prediction? An Empirical Study [12.094218772036596]
The proposed evaluation framework consists of LLM-based rule mining and LLM-based triple set prediction.
The experimental results indicate that when LLMs are required to adhere to a large amount of factual knowledge to predict missing triples, significant hallucination occurs, leading to a noticeable decline in performance.
arXiv Detail & Related papers (2024-12-24T14:03:07Z)
- Predicting Emergent Capabilities by Finetuning [98.9684114851891]
We find that finetuning language models can shift the point in scaling at which emergence occurs towards less capable models.
We validate this approach using four standard NLP benchmarks.
We find that, in some cases, we can accurately predict whether a capability will emerge in models trained with up to 4x more compute.
arXiv Detail & Related papers (2024-11-25T01:48:09Z)
- Beyond Right and Wrong: Mitigating Cold Start in Knowledge Tracing Using Large Language Model and Option Weight [0.14999444543328289]
Knowledge Tracing (KT) is vital in educational data mining, enabling personalized learning.
This study introduces the LOKT (Large Language Model Option-weighted Knowledge Tracing) model to address the cold start problem.
arXiv Detail & Related papers (2024-10-14T16:25:48Z)
- How Do Large Language Models Understand Graph Patterns? A Benchmark for Graph Pattern Comprehension [53.6373473053431]
This work introduces a benchmark to assess large language models' capabilities in graph pattern tasks.
We have developed a benchmark that evaluates whether LLMs can understand graph patterns based on either terminological or topological descriptions.
Our benchmark encompasses both synthetic and real datasets, covering a total of 11 tasks and 7 models.
arXiv Detail & Related papers (2024-10-04T04:48:33Z)
- Empirical Insights on Fine-Tuning Large Language Models for Question-Answering [50.12622877002846]
Large language models (LLMs) encode extensive world knowledge through pre-training on massive datasets, which can be fine-tuned for the question-answering (QA) task.
We categorize supervised fine-tuning (SFT) data based on the extent of knowledge memorized by the pretrained LLMs.
Our experiments show that as few as 60 data points during the SFT stage can activate the knowledge encoded during pre-training, enabling LLMs to perform the QA task.
arXiv Detail & Related papers (2024-09-24T07:38:38Z)
- Retrieval-Augmented Generation Meets Data-Driven Tabula Rasa Approach for Temporal Knowledge Graph Forecasting [0.0]
sLA-tKGF is a small-scale language assistant for temporal Knowledge Graph (tKG) forecasting.
Our framework constructs knowledge-infused prompts with historical data from tKGs and web search results.
It reduces hallucination and mitigates distributional shift by capturing changing trends over time (see the sketch after this entry).
arXiv Detail & Related papers (2024-08-18T11:52:24Z)
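As a rough illustration of the knowledge-infused prompts described above, the sketch below combines tKG history with retrieved web snippets into a single prompt. The template layout and the function name `build_knowledge_infused_prompt` are assumptions for illustration, not the paper's actual format.

```python
def build_knowledge_infused_prompt(query, tkg_history, web_snippets):
    """Assemble a forecasting prompt from tKG facts and retrieved web text.
    The layout is illustrative; the paper's actual template may differ."""
    history_block = "\n".join(
        f"[{t}] {s} {r} {o}"
        for (s, r, o, t) in sorted(tkg_history, key=lambda x: x[3])
    )
    web_block = "\n".join(f"- {snippet}" for snippet in web_snippets)
    return (
        "Known temporal facts:\n" + history_block + "\n\n"
        "Relevant web context:\n" + web_block + "\n\n"
        f"Question: what is the most likely object for "
        f"({query[0]}, {query[1]}, ?, {query[2]})?"
    )

prompt = build_knowledge_infused_prompt(
    ("Germany", "negotiates_with", 2024),
    [("Germany", "negotiates_with", "France", 2022),
     ("Germany", "signs_treaty", "Italy", 2023)],
    ["News snippet about EU negotiations."],
)
print(prompt)
```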
- A Comprehensive Evaluation of Large Language Models on Temporal Event Forecasting [45.0261082985087]
We conduct a comprehensive evaluation of Large Language Models (LLMs) for temporal event forecasting.
We find that directly integrating raw texts into the input of LLMs does not enhance zero-shot extrapolation performance.
In contrast, incorporating raw texts in specific complex events and fine-tuning LLMs significantly improves performance.
arXiv Detail & Related papers (2024-07-16T11:58:54Z)
- Learning to Plan for Retrieval-Augmented Large Language Models from Knowledge Graphs [59.76268575344119]
We introduce a novel framework for enhancing the planning capabilities of large language models (LLMs) by using planning data derived from knowledge graphs (KGs).
LLMs fine-tuned with KG data have improved planning capabilities, better equipping them to handle complex QA tasks that involve retrieval.
arXiv Detail & Related papers (2024-06-20T13:07:38Z)
- Large Language Models-guided Dynamic Adaptation for Temporal Knowledge Graph Reasoning [87.10396098919013]
Large Language Models (LLMs) have demonstrated extensive knowledge and remarkable proficiency in temporal reasoning.
We propose a Large Language Models-guided Dynamic Adaptation (LLM-DA) method for reasoning on Temporal Knowledge Graphs.
LLM-DA harnesses the capabilities of LLMs to analyze historical data and extract temporal logical rules (see the sketch after this entry).
arXiv Detail & Related papers (2024-05-23T04:54:37Z)
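One plausible way to represent the temporal logical rules that LLM-DA extracts is sketched below: a rule states that if a body relation held at an earlier time, a head relation becomes likely later. The `TemporalRule` encoding and the scoring in `apply_rules` are illustrative assumptions, not the paper's rule language.

```python
from dataclasses import dataclass

@dataclass
class TemporalRule:
    body_relation: str   # relation that must appear earlier
    head_relation: str   # relation the rule predicts later
    confidence: float    # how often the pattern held in history

def apply_rules(rules, history, query_time):
    """Score candidate future facts implied by rules whose bodies fired
    before the query time; keep the best confidence per candidate."""
    scores = {}
    for s, r, o, t in history:
        if t >= query_time:
            continue
        for rule in rules:
            if rule.body_relation == r:
                key = (s, rule.head_relation, o)
                scores[key] = max(scores.get(key, 0.0), rule.confidence)
    return sorted(scores.items(), key=lambda kv: -kv[1])

rules = [TemporalRule("negotiates_with", "signs_agreement_with", 0.7)]
history = [("A", "negotiates_with", "B", 3)]
print(apply_rules(rules, history, query_time=5))
```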
- Selective Temporal Knowledge Graph Reasoning [70.11788354442218]
Temporal Knowledge Graph (TKG) reasoning aims to predict future facts based on given historical ones.
Existing TKG reasoning models are unable to abstain from predictions about which they are uncertain.
We propose an abstention mechanism for TKG reasoning, which helps existing models make selective, rather than indiscriminate, predictions (see the sketch after this entry).
arXiv Detail & Related papers (2024-04-02T06:56:21Z)
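A minimal sketch of a selective-prediction wrapper is shown below: it returns the top candidate only when its score clears a threshold and abstains otherwise. The fixed threshold is a simplifying assumption standing in for the paper's actual abstention mechanism.

```python
def predict_or_abstain(candidate_scores, threshold=0.6):
    """Return the top candidate only when the model is confident enough;
    otherwise abstain. The threshold rule is an illustrative stand-in for
    a learned abstention mechanism."""
    if not candidate_scores:
        return None
    best, score = max(candidate_scores.items(), key=lambda kv: kv[1])
    return best if score >= threshold else None  # None signals abstention

print(predict_or_abstain({"France": 0.72, "Italy": 0.18}))  # -> 'France'
print(predict_or_abstain({"France": 0.41, "Italy": 0.39}))  # -> None (abstain)
```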
- Chain of History: Learning and Forecasting with LLMs for Temporal Knowledge Graph Completion [24.545917737620197]
Temporal Knowledge Graph Completion (TKGC) is a complex task involving the prediction of missing event links at future timestamps.
This paper aims to provide a comprehensive perspective on harnessing the advantages of Large Language Models for reasoning in temporal knowledge graphs.
arXiv Detail & Related papers (2024-01-11T17:42:47Z)
- Temporal Knowledge Graph Forecasting Without Knowledge Using In-Context Learning [23.971206470486468]
We present a framework that converts relevant historical facts into prompts and generates ranked predictions using token probabilities.
Surprisingly, we observe that LLMs, out-of-the-box, perform on par with state-of-the-art TKG models.
We also discover that using numerical indices instead of entity/relation names does not significantly affect performance (a sketch of the prompting-and-ranking approach follows this entry).
arXiv Detail & Related papers (2023-05-17T23:50:28Z)
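A minimal sketch of this in-context learning setup, under stated assumptions: history facts are serialized into a prompt whose last line is left incomplete, and candidate objects are ranked by the log-probability an LLM assigns to each continuation. `logprob_fn` is a placeholder for whatever scoring API is available; `toy_logprob` exists only to make the example runnable.

```python
import math

def facts_to_prompt(history, query):
    """Serialize (s, r, o, t) facts, ending with an incomplete query line."""
    lines = [f"{t}: [{s}, {r}, {o}]" for (s, r, o, t) in history]
    s, r, t = query
    lines.append(f"{t}: [{s}, {r},")  # the model should continue with the object
    return "\n".join(lines)

def rank_candidates(prompt, candidates, logprob_fn):
    """Rank candidate objects by the log-probability the LLM assigns to each
    continuation; logprob_fn(prompt, text) is a placeholder for a scoring API."""
    scored = [(c, logprob_fn(prompt, " " + c + "]")) for c in candidates]
    return sorted(scored, key=lambda x: -x[1])

# Toy stand-in scorer: prefers candidates already seen in the prompt.
def toy_logprob(prompt, continuation):
    return 0.0 if continuation.strip(" ]") in prompt else math.log(0.1)

history = [("A", "meets", "B", 1), ("A", "meets", "C", 2)]
prompt = facts_to_prompt(history, ("A", "meets", 3))
print(rank_candidates(prompt, ["B", "D"], toy_logprob))
```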
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.