A Generative Adaptive Replay Continual Learning Model for Temporal Knowledge Graph Reasoning
- URL: http://arxiv.org/abs/2506.04083v1
- Date: Wed, 04 Jun 2025 15:44:50 GMT
- Title: A Generative Adaptive Replay Continual Learning Model for Temporal Knowledge Graph Reasoning
- Authors: Zhiyu Zhang, Wei Chen, Youfang Lin, Huaiyu Wan
- Abstract summary: We propose a Deep Generative Adaptive Replay (DGAR) method, which can generate and adaptively replay historical entity distribution representations. Experimental results demonstrate that DGAR significantly outperforms baselines in reasoning and mitigating forgetting.
- Score: 24.377657990045503
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent Continual Learning (CL)-based Temporal Knowledge Graph Reasoning (TKGR) methods focus on significantly reducing computational cost and mitigating catastrophic forgetting caused by fine-tuning models with new data. However, existing CL-based TKGR methods still face two key limitations: (1) They usually one-sidedly reorganize individual historical facts, while overlooking the historical context essential for accurately understanding the historical semantics of these facts; (2) They preserve historical knowledge by simply replaying historical facts, while ignoring the potential conflicts between historical and emerging facts. In this paper, we propose a Deep Generative Adaptive Replay (DGAR) method, which can generate and adaptively replay historical entity distribution representations from the whole historical context. To address the first challenge, historical context prompts as sampling units are built to preserve the whole historical context information. To overcome the second challenge, a pre-trained diffusion model is adopted to generate the historical distribution. During the generation process, the common features between the historical and current distributions are enhanced under the guidance of the TKGR model. In addition, a layer-by-layer adaptive replay mechanism is designed to effectively integrate historical and current distributions. Experimental results demonstrate that DGAR significantly outperforms baselines in reasoning and mitigating forgetting.
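The layer-by-layer adaptive replay described in the abstract can be pictured as blending, at each layer, the current representation with a generated historical one. The sketch below is a hypothetical reading with fixed per-layer mixing weights; in the paper the historical representations come from a pre-trained diffusion model guided by the TKGR model, and the integration is learned rather than fixed.

```python
import numpy as np

def adaptive_replay(current_layers, historical_layers, mix_weights):
    """Blend each layer's current representation with its generated
    historical counterpart using a per-layer mixing weight.

    Hypothetical sketch of layer-by-layer adaptive replay: the function
    names and the fixed weights are illustrative, not DGAR's exact
    mechanism.
    """
    return [
        w * hist + (1.0 - w) * cur
        for cur, hist, w in zip(current_layers, historical_layers, mix_weights)
    ]
```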
Related papers
- History-Guided Video Diffusion [61.03681839276652]
Video diffusion generates video conditioned on a variable number of context frames, collectively referred to as history. We find two key challenges to guiding with variable-length history: architectures that only support fixed-size conditioning, and the empirical observation that CFG-style history dropout performs poorly. We introduce History Guidance, a family of guidance methods uniquely enabled by DFoT.
arXiv Detail & Related papers (2025-02-10T18:44:25Z) - CSTA: Spatial-Temporal Causal Adaptive Learning for Exemplar-Free Video Class-Incremental Learning [62.69917996026769]
A class-incremental learning task requires learning and preserving both spatial appearance and temporal action involvement. We propose a framework that equips separate adapters to learn new class patterns, accommodating the incremental information requirements unique to each class. A causal compensation mechanism is proposed to reduce conflicts between different types of information during increment and memorization.
arXiv Detail & Related papers (2025-01-13T11:34:55Z) - Historically Relevant Event Structuring for Temporal Knowledge Graph Reasoning [4.705577684291238]
Temporal Knowledge Graph (TKG) reasoning focuses on predicting events through historical information within snapshots distributed on a timeline. We propose an innovative TKG reasoning approach towards Historically Relevant Events Structuring (HisRES).
arXiv Detail & Related papers (2024-05-17T08:33:43Z) - Local-Global History-aware Contrastive Learning for Temporal Knowledge Graph Reasoning [25.497749629866757]
We propose a novel Local-global history-aware Contrastive Learning model (LogCL) for temporal knowledge graphs.
For the first challenge, LogCL proposes an entity-aware attention mechanism applied to the local and global historical facts encoder.
For the latter issue, LogCL designs four historical query contrast patterns, effectively improving the robustness of the model.
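The historical query contrast patterns can be illustrated with a generic InfoNCE-style loss over one query. This is an illustrative stand-in only: LogCL's four patterns determine which encoded facts act as positives and negatives for a query; the loss shape below is the standard contrastive one, not the paper's exact formulation.

```python
import numpy as np

def contrastive_loss(query, positives, negatives, tau=0.1):
    """Generic InfoNCE-style contrastive loss for one query embedding.

    Illustrative sketch: pulls the query toward positive fact
    embeddings and pushes it away from negatives.
    """
    def sim(a, b):
        # cosine similarity with a small epsilon for numerical safety
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

    pos = np.exp(np.array([sim(query, p) for p in positives]) / tau)
    neg = np.exp(np.array([sim(query, n) for n in negatives]) / tau)
    return float(-np.log(pos.sum() / (pos.sum() + neg.sum())))
```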
arXiv Detail & Related papers (2023-12-04T03:27:01Z) - Exploring the Limits of Historical Information for Temporal Knowledge Graph Extrapolation [59.417443739208146]
We propose a new event forecasting model based on a novel training framework of historical contrastive learning.
CENET learns both the historical and non-historical dependency to distinguish the most potential entities.
We evaluate our proposed model on five benchmark graphs.
arXiv Detail & Related papers (2023-08-29T03:26:38Z) - Continual Face Forgery Detection via Historical Distribution Preserving [88.66313037412846]
We focus on a novel and challenging problem: Continual Face Forgery Detection (CFFD).
CFFD aims to efficiently learn from new forgery attacks without forgetting previous ones.
Our experiments on the benchmarks show that our method outperforms the state-of-the-art competitors.
arXiv Detail & Related papers (2023-08-11T16:37:31Z) - Temporal Knowledge Graph Reasoning with Historical Contrastive Learning [24.492458924487863]
We propose a new event forecasting model called Contrastive Event Network (CENET).
CENET learns both the historical and non-historical dependency to distinguish the most potential entities that can best match the given query.
During the inference process, CENET employs a mask-based strategy to generate the final results.
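The mask-based inference strategy can be sketched as restricting the candidate set before taking the top-scoring entity. This is a hypothetical reading of CENET's mechanism: the names and the hard `-inf` masking below are illustrative, as are the assumed inputs (a score vector, a boolean history mask, and a binary history-dependence decision).

```python
import numpy as np

def mask_based_prediction(entity_scores, seen_in_history, history_dependent):
    """Pick the top entity after masking out one candidate group.

    If the query is judged history-dependent, restrict candidates to
    entities seen in the query's history; otherwise restrict to unseen
    ones. Hypothetical sketch, not the paper's exact strategy.
    """
    keep = seen_in_history if history_dependent else ~seen_in_history
    masked = np.where(keep, entity_scores, -np.inf)
    return int(np.argmax(masked))
```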
arXiv Detail & Related papers (2022-11-20T08:32:59Z) - Large Language Models with Controllable Working Memory [64.71038763708161]
Large language models (LLMs) have led to a series of breakthroughs in natural language processing (NLP).
What further sets these models apart is the massive amounts of world knowledge they internalize during pretraining.
How the model's world knowledge interacts with the factual information presented in the context remains underexplored.
arXiv Detail & Related papers (2022-11-09T18:58:29Z) - Recent Developments Combining Ensemble Smoother and Deep Generative Networks for Facies History Matching [58.720142291102135]
This research project focuses on the use of autoencoders networks to construct a continuous parameterization for facies models.
We benchmark seven different formulations, including VAE, generative adversarial network (GAN), Wasserstein GAN, variational auto-encoding GAN, principal component analysis (PCA) with cycle GAN, PCA with transfer style network and VAE with style loss.
arXiv Detail & Related papers (2020-05-08T21:32:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.