Learning to Evolve: Bayesian-Guided Continual Knowledge Graph Embedding
- URL: http://arxiv.org/abs/2508.02426v1
- Date: Mon, 04 Aug 2025 13:46:33 GMT
- Title: Learning to Evolve: Bayesian-Guided Continual Knowledge Graph Embedding
- Authors: Linyu Li, Zhi Jin, Yuanpeng He, Dongming Jin, Yichi Zhang, Haoran Duan, Nyima Tash
- Abstract summary: A key challenge facing continual knowledge graph embedding (CKGE) is that the model is prone to "catastrophic forgetting". To alleviate this problem effectively, we propose a new CKGE model, BAKE.
- Score: 20.479556500981044
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Because knowledge graphs (KGs) continue to evolve in real-world scenarios, traditional KGE models are suitable only for static knowledge graphs, and continual knowledge graph embedding (CKGE) has therefore attracted the attention of researchers. A key challenge facing CKGE is that the model is prone to "catastrophic forgetting", resulting in the loss of previously learned knowledge. To alleviate this problem effectively, we propose a new CKGE model, BAKE. First, we note that the Bayesian posterior update principle provides a natural continual learning strategy that is insensitive to data order and can, in theory, resist the forgetting of previous knowledge as the data evolve. Unlike existing CKGE methods, BAKE regards each batch of new data as a Bayesian update of the model prior. Under this framework, as long as the posterior distribution of the model is maintained, the model can preserve the knowledge of early snapshots even after evolving through multiple time snapshots. Second, we propose a continual clustering method for CKGE, which further combats forgetting by constraining the evolution difference (or change amplitude) between new and old knowledge across snapshots. Extensive experiments on multiple datasets show that BAKE significantly outperforms existing baseline models.
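The sketch below is a rough, hypothetical illustration of the posterior-as-prior idea, in the style of a Laplace/variational continual-learning approximation: training on each new snapshot is regularized toward the embedding posterior learned on earlier snapshots. The class name, the TransE-style scorer, and the precision bookkeeping are assumptions for illustration, not the paper's implementation.

```python
import torch

class ContinualKGE(torch.nn.Module):
    def __init__(self, n_entities: int, n_relations: int, dim: int = 64):
        super().__init__()
        self.ent = torch.nn.Embedding(n_entities, dim)
        self.rel = torch.nn.Embedding(n_relations, dim)
        # Prior carried over from earlier snapshots: per-parameter mean and
        # precision of an approximate Gaussian posterior (assumed form).
        self.register_buffer("prior_mean", torch.zeros(n_entities, dim))
        self.register_buffer("prior_prec", torch.zeros(n_entities, dim))

    def score(self, h, r, t):
        # TransE-style score, used here only as a placeholder scorer.
        return -(self.ent(h) + self.rel(r) - self.ent(t)).norm(p=2, dim=-1)

    def loss(self, h, r, t, t_neg):
        # Margin ranking on the new snapshot's triples ...
        margin = torch.relu(1.0 + self.score(h, r, t_neg) - self.score(h, r, t))
        # ... plus a Bayesian regularizer: deviations from the old posterior
        # mean are penalized in proportion to the old posterior's precision.
        reg = (self.prior_prec * (self.ent.weight - self.prior_mean) ** 2).sum()
        return margin.mean() + 0.5 * reg

    def finish_snapshot(self, added_prec: float = 1.0):
        # The posterior after this snapshot becomes the prior for the next,
        # so knowledge from early snapshots keeps constraining later updates.
        self.prior_mean = self.ent.weight.detach().clone()
        self.prior_prec = self.prior_prec + added_prec
```

In a training loop one would fit each snapshot with `loss`, then call `finish_snapshot` so that the new posterior becomes the prior for the next snapshot; the continual clustering component described in the abstract would add a further constraint on how far old embeddings may drift.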
Related papers
- Knowledge Graph Enhanced Generative Multi-modal Models for Class-Incremental Learning [51.0864247376786]
We introduce a Knowledge Graph Enhanced Generative Multi-modal model (KG-GMM) that builds an evolving knowledge graph throughout the learning process. During testing, we propose a Knowledge Graph Augmented Inference method that locates specific categories by analyzing relationships within the generated text.
arXiv Detail & Related papers (2025-03-24T07:20:43Z)
- SoTCKGE: Continual Knowledge Graph Embedding Based on Spatial Offset Transformation [7.706481522285466]
Current Continual Knowledge Graph Embedding (CKGE) methods rely on translation-based embedding methods. We propose a novel CKGE framework grounded in Spatial Offset Transformation vectors. We introduce a hierarchical update strategy and a balanced embedding method to refine the parameter update process.
arXiv Detail & Related papers (2025-03-11T08:54:03Z)
- Towards Continual Knowledge Graph Embedding via Incremental Distillation [12.556752486002356]
Traditional knowledge graph embedding (KGE) methods typically require preserving the entire knowledge graph (KG) with significant training costs when new knowledge emerges.
This paper proposes a competitive method for CKGE based on incremental distillation (IncDE), which makes full use of the explicit graph structure in KGs.
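A minimal, hedged reading of the distillation idea (IncDE's actual hierarchical, structure-aware strategy is richer than this): while fitting the new snapshot's triples, entities that already existed are pulled toward a frozen copy of the previous model's embeddings.

```python
import torch

def distill_loss(new_ent: torch.nn.Embedding,
                 old_weights: torch.Tensor,
                 old_entity_ids: torch.Tensor,
                 weight: float = 1.0) -> torch.Tensor:
    # old_weights holds the frozen entity embeddings of the previous
    # snapshot; only pre-existing entities are constrained by the teacher.
    return weight * torch.nn.functional.mse_loss(
        new_ent(old_entity_ids), old_weights[old_entity_ids])
```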
arXiv Detail & Related papers (2024-05-07T16:16:00Z)
- History repeats Itself: A Baseline for Temporal Knowledge Graph Forecasting [10.396081172890025]
Temporal Knowledge Graph (TKG) Forecasting aims at predicting links in Knowledge Graphs for future timesteps based on a history of Knowledge Graphs.
We propose an intuitive baseline for TKG Forecasting based on predicting recurring facts.
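A toy version of such a baseline (the scoring rule here is an assumption, not the paper's exact scheme) ranks each candidate object by how recently the completed fact recurred in the history:

```python
from collections import defaultdict

def rank_by_recency(history, query, num_entities):
    """history: iterable of (s, r, o, t) facts; query: an (s, r) pair."""
    last_seen = defaultdict(lambda: -1)          # -1 means never observed
    for s, r, o, t in history:
        last_seen[(s, r, o)] = max(last_seen[(s, r, o)], t)
    s, r = query
    # Candidates whose fact recurred more recently are ranked higher.
    return sorted(range(num_entities), key=lambda o: -last_seen[(s, r, o)])
```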
arXiv Detail & Related papers (2024-04-25T16:39:32Z)
- Few-Shot Class-Incremental Learning with Prior Knowledge [94.95569068211195]
We propose Learning with Prior Knowledge (LwPK) to enhance the generalization ability of the pre-trained model.
Experimental results indicate that LwPK effectively enhances the model resilience against catastrophic forgetting.
arXiv Detail & Related papers (2024-02-02T08:05:35Z)
- Overcoming Generic Knowledge Loss with Selective Parameter Update [48.240683797965005]
We propose a novel approach to continuously update foundation models.
Instead of updating all parameters equally, we localize the updates to a sparse set of parameters relevant to the task being learned.
Our method improves accuracy on newly learned tasks by up to 7% while preserving pretraining knowledge, with a negligible 0.9% decrease in accuracy on a representative control set.
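A minimal sketch of such a localized update, assuming top-k gradient magnitude as the relevance criterion (the paper's actual selection measure may differ):

```python
import torch

def sparse_step(param: torch.nn.Parameter, lr: float = 1e-3, frac: float = 0.01):
    grad = param.grad
    if grad is None:
        return
    # Keep only the fraction of entries with the largest gradient magnitude.
    k = max(1, int(frac * grad.numel()))
    threshold = grad.abs().flatten().kthvalue(grad.numel() - k + 1).values
    mask = (grad.abs() >= threshold).to(grad.dtype)
    with torch.no_grad():
        param -= lr * grad * mask   # all other parameters stay untouched
```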
arXiv Detail & Related papers (2023-08-23T22:55:45Z)
- Adaptively Integrated Knowledge Distillation and Prediction Uncertainty for Continual Learning [71.43841235954453]
Current deep learning models often suffer from catastrophic forgetting of old knowledge when continually learning new knowledge.
Existing strategies to alleviate this issue often fix the trade-off between retaining old knowledge (stability) and learning new knowledge (plasticity).
arXiv Detail & Related papers (2023-01-18T05:36:06Z)
- Complex Evolutional Pattern Learning for Temporal Knowledge Graph Reasoning [60.94357727688448]
TKG reasoning aims to predict potential facts in the future given the historical KG sequences.
The evolutional patterns are complex in two aspects: length-diversity and time-variability.
We propose a new model, called Complex Evolutional Network (CEN), which uses a length-aware Convolutional Neural Network (CNN) to handle evolutional patterns of different lengths.
arXiv Detail & Related papers (2022-03-15T11:02:55Z)
- Continual Learning for Blind Image Quality Assessment [80.55119990128419]
Blind image quality assessment (BIQA) models fail to continually adapt to subpopulation shift.
Recent work suggests training BIQA methods on the combination of all available human-rated IQA datasets.
We formulate continual learning for BIQA, where a model learns continually from a stream of IQA datasets.
arXiv Detail & Related papers (2021-02-19T03:07:01Z)
- Tucker decomposition-based Temporal Knowledge Graph Completion [35.56360622521721]
We build a new tensor decomposition model for temporal knowledge graph completion, inspired by the Tucker decomposition of an order-4 tensor.
We demonstrate that the proposed model is fully expressive and report state-of-the-art results for several public benchmarks.
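The core operation behind an order-4 Tucker scorer is compact: the score of a quadruple is a full contraction of a learned core tensor with subject, relation, object, and timestamp embeddings. A minimal sketch with illustrative dimensions (this mirrors TuckER lifted to order 4, not necessarily the paper's exact model):

```python
import torch

d = 32                               # shared embedding dimension (illustrative)
core = torch.randn(d, d, d, d)       # learnable order-4 core tensor
subj, rel, obj, time = (torch.randn(d) for _ in range(4))

# Score of the quadruple (subj, rel, obj, time): contract the core tensor
# with all four embeddings to produce a single scalar.
score = torch.einsum("ijkl,i,j,k,l->", core, subj, rel, obj, time)
```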
arXiv Detail & Related papers (2020-11-16T07:05:52Z)
- Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift [81.74795324629712]
We study a simple method we call prediction-time batch normalization, which significantly improves model accuracy and calibration under covariate shift.
We show that prediction-time batch normalization provides complementary benefits to existing state-of-the-art approaches for improving robustness.
The method has mixed results when used alongside pre-training, and does not seem to perform as well under more natural types of dataset shift.
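In PyTorch terms, the technique can be approximated by normalizing each test batch with its own statistics instead of the frozen running averages. A minimal sketch (a careful implementation would flip only the BatchNorm layers, since `train()` also toggles dropout):

```python
import torch

model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.BatchNorm1d(8))
x_test = torch.randn(64, 8) + 3.0    # a covariate-shifted test batch

model.train()                        # BatchNorm now uses batch statistics
with torch.no_grad():                # ... while no parameters are updated
    preds = model(x_test)
```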
arXiv Detail & Related papers (2020-06-19T05:08:43Z)