Towards Continual Knowledge Graph Embedding via Incremental Distillation
- URL: http://arxiv.org/abs/2405.04453v1
- Date: Tue, 7 May 2024 16:16:00 GMT
- Title: Towards Continual Knowledge Graph Embedding via Incremental Distillation
- Authors: Jiajun Liu, Wenjun Ke, Peng Wang, Ziyu Shang, Jinhua Gao, Guozheng Li, Ke Ji, Yanhe Liu
- Abstract summary: Traditional knowledge graph embedding (KGE) methods typically require preserving the entire knowledge graph (KG) with significant training costs when new knowledge emerges.
This paper proposes a competitive method for CKGE based on incremental distillation (IncDE), which makes full use of the explicit graph structure in KGs.
- Score: 12.556752486002356
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional knowledge graph embedding (KGE) methods typically require preserving the entire knowledge graph (KG), at significant training cost, when new knowledge emerges. To address this issue, the continual knowledge graph embedding (CKGE) task has been proposed to train the KGE model by learning emerging knowledge efficiently while preserving old knowledge well. However, the explicit graph structure in KGs, which is critical for both goals, has been largely ignored by existing CKGE methods. On the one hand, existing methods usually learn new triples in a random order, destroying the inner structure of new KGs. On the other hand, old triples are preserved with equal priority, failing to alleviate catastrophic forgetting effectively. In this paper, we propose a competitive method for CKGE based on incremental distillation (IncDE), which makes full use of the explicit graph structure in KGs. First, to optimize the learning order, we introduce a hierarchical strategy that ranks new triples for layer-by-layer learning. By employing inter- and intra-hierarchical orders together, new triples are grouped into layers based on graph structure features. Second, to preserve old knowledge effectively, we devise a novel incremental distillation mechanism, which facilitates the seamless transfer of entity representations from the previous layer to the next one, promoting old knowledge preservation. Finally, we adopt a two-stage training paradigm to avoid over-corruption of old knowledge by under-trained new knowledge. Experimental results demonstrate the superiority of IncDE over state-of-the-art baselines. Notably, the incremental distillation mechanism contributes improvements of 0.2%-6.5% in the mean reciprocal rank (MRR) score.
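The abstract describes two mechanisms that lend themselves to a short sketch: layering new triples by graph structure, and distilling entity representations to protect old knowledge. The sketch below is a minimal PyTorch reading of those ideas; the function names, the BFS-based layering criterion, and the MSE form of the distillation loss are our assumptions, not the authors' exact design.

```python
# Minimal sketch of IncDE's two core ideas as described in the abstract:
# (1) rank new triples into layers using the graph structure (here: BFS
#     distance from entities the old model already knows), and
# (2) distill old entity representations while training on each layer.
# Names and criteria are ours, not the authors'; the real method also uses
# an intra-layer ordering and a two-stage training schedule.
from collections import deque

import torch
import torch.nn.functional as F


def layer_new_triples(new_triples, old_entities):
    """Group new (h, r, t) triples into layers by BFS distance from old entities."""
    dist = {e: 0 for e in old_entities}
    adj = {}
    for h, _, t in new_triples:
        adj.setdefault(h, set()).add(t)
        adj.setdefault(t, set()).add(h)
    queue = deque(old_entities)
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    layers = {}
    for h, r, t in new_triples:
        # A triple joins the layer of its endpoint closest to the old KG.
        d = min(dist.get(h, float("inf")), dist.get(t, float("inf")))
        layers.setdefault(d, []).append((h, r, t))
    return [layers[d] for d in sorted(layers)]


def distill_loss(new_emb, prev_emb, shared_ids):
    """Pull embeddings of already-seen entities toward their previous values."""
    ids = torch.tensor(shared_ids)
    return F.mse_loss(new_emb(ids), prev_emb(ids).detach())
```

In the actual method, distillation transfers representations between consecutive layers during layer-by-layer training rather than against a single frozen snapshot; the sketch only fixes the general shape of the two components.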
Related papers
- Subgraph-Aware Training of Language Models for Knowledge Graph Completion Using Structure-Aware Contrastive Learning [4.741342276627672]
Fine-tuning pre-trained language models (PLMs) has recently shown potential to improve knowledge graph completion (KGC).
We propose a Subgraph-Aware Training framework for KGC (SATKGC) with two ideas: (i) subgraph-aware mini-batching to encourage hard negative sampling and to mitigate an imbalance in the frequency of entity occurrences during training, and (ii) new contrastive learning to focus more on harder in-batch negative triples and harder positive triples in terms of the structural properties of the knowledge graph.
arXiv Detail & Related papers (2024-07-17T16:25:37Z)
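As a rough illustration of the SATKGC entry above, the sketch below up-weights harder in-batch negatives inside a standard InfoNCE-style contrastive loss. The weighting scheme, the temperature, and the function name are our assumptions; the paper's actual subgraph-structure-based formulation is more involved.

```python
# Hedged sketch: contrastive loss that focuses more on harder in-batch
# negatives. The hardness weighting (softmax over negative logits) is our
# illustration, not the SATKGC paper's exact formulation.
import torch
import torch.nn.functional as F


def weighted_infonce(query, pos, in_batch_negs, temp=0.05):
    """query, pos: [d]; in_batch_negs: [n, d], drawn from the same subgraph batch."""
    pos_logit = (query @ pos / temp).view(1)      # [1]
    neg_logits = in_batch_negs @ query / temp     # [n]
    # Up-weight harder (higher-similarity) negatives by shifting their logits.
    weights = torch.softmax(neg_logits.detach(), dim=0) * neg_logits.numel()
    logits = torch.cat([pos_logit, neg_logits + weights.log()]).view(1, -1)
    # The positive sits at index 0.
    return F.cross_entropy(logits, torch.zeros(1, dtype=torch.long))
```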
- Fast and Continual Knowledge Graph Embedding via Incremental LoRA [20.624310261539694]
Continual Knowledge Graph Embedding aims to efficiently learn new knowledge and simultaneously preserve old knowledge.
We propose a fast CKGE framework incorporating an incremental low-rank adapter mechanism to efficiently acquire new knowledge.
We conduct experiments on four public datasets and two new datasets with a larger initial scale.
arXiv Detail & Related papers (2024-07-08T08:07:13Z)
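For the incremental-LoRA entry above, a minimal sketch of the general idea: keep the previously learned embedding table frozen and let a low-rank correction absorb new knowledge. The class below is our illustration under that assumption; the paper's layer-wise adapter design and rank schedule may differ.

```python
# Hedged sketch of a low-rank adapter over a frozen embedding table,
# in the spirit of the incremental-LoRA CKGE entry above.
import torch
import torch.nn as nn


class LoRAEmbedding(nn.Module):
    def __init__(self, num_entities, dim, rank=8):
        super().__init__()
        self.base = nn.Embedding(num_entities, dim)
        self.base.weight.requires_grad_(False)    # old knowledge stays frozen
        self.lora_a = nn.Parameter(torch.zeros(num_entities, rank))
        self.lora_b = nn.Parameter(torch.randn(rank, dim) * 0.01)

    def forward(self, ids):
        # Only the low-rank delta is trained for each new KG snapshot.
        return self.base(ids) + self.lora_a[ids] @ self.lora_b
```

Only `lora_a` and `lora_b` receive gradients, so each snapshot trains O(n·r + r·d) parameters instead of the full n·d table, which is where the speedup comes from.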
- Preserving Node Distinctness in Graph Autoencoders via Similarity Distillation [9.395697548237333]
Graph autoencoders (GAEs) rely on distance-based criteria, such as mean-square-error (MSE), to reconstruct the input graph.
However, relying solely on a single reconstruction criterion may lead to a loss of distinctiveness in the reconstructed graph.
We have developed a simple yet effective strategy to preserve the necessary distinctness in the reconstructed graph.
arXiv Detail & Related papers (2024-06-25T12:54:35Z)
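For the similarity-distillation entry above, a minimal sketch of one plausible realization: alongside the usual MSE reconstruction, penalize changes in pairwise node similarity so reconstructed nodes keep their distinctness. The cosine kernel and the loss weight `lam` are our assumptions.

```python
# Hedged sketch: preserve node distinctness by matching pairwise similarity
# matrices before and after reconstruction, on top of plain MSE.
import torch
import torch.nn.functional as F


def similarity_distillation(z_input, z_recon):
    """z_input, z_recon: [n, d] node features before/after reconstruction."""
    s_in = F.normalize(z_input, dim=1) @ F.normalize(z_input, dim=1).T
    s_out = F.normalize(z_recon, dim=1) @ F.normalize(z_recon, dim=1).T
    return F.mse_loss(s_out, s_in.detach())


def gae_loss(x, x_hat, lam=0.5):
    # Reconstruction term plus the distinctness-preserving term.
    return F.mse_loss(x_hat, x) + lam * similarity_distillation(x, x_hat)
```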
- PUMA: Efficient Continual Graph Learning for Node Classification with Graph Condensation [49.00940417190911]
Existing graph representation learning models encounter a catastrophic forgetting problem when learning with newly incoming graphs.
In this paper, we propose a PsUdo-label guided Memory bAnk (PUMA) framework to enhance the efficiency and effectiveness of continual graph learning.
arXiv Detail & Related papers (2023-12-22T05:09:58Z)
- From Cluster Assumption to Graph Convolution: Graph-based Semi-Supervised Learning Revisited [51.24526202984846]
Graph-based semi-supervised learning (GSSL) has long been a hot research topic.
In recent years, graph convolutional networks (GCNs) have become the predominant technique owing to their promising performance.
arXiv Detail & Related papers (2023-09-24T10:10:21Z)
- Evolving Knowledge Mining for Class Incremental Segmentation [113.59611699693092]
Class Incremental Semantic Segmentation (CISS) has recently been a trend due to its great significance in real-world applications.
We propose a novel method, Evolving kNowleDge minING, which employs a frozen backbone.
We evaluate our method on two widely used benchmarks and consistently demonstrate new state-of-the-art performance.
arXiv Detail & Related papers (2023-06-03T07:03:15Z)
- Repurposing Knowledge Graph Embeddings for Triple Representation via Weak Supervision [77.34726150561087]
Current methods learn triple embeddings from scratch without utilizing entity and predicate embeddings from pre-trained models.
We develop a method for automatically sampling triples from a knowledge graph and estimating their pairwise similarities from pre-trained embedding models.
These pairwise similarity scores are then fed to a Siamese-like neural architecture to fine-tune triple representations.
arXiv Detail & Related papers (2022-08-22T14:07:08Z)
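A hedged sketch of the weak-supervision pipeline in the entry above: triple vectors are built from pre-trained entity and predicate embeddings, and a Siamese-style encoder is fine-tuned so that encoded pairs reproduce the estimated pairwise similarities. The MLP encoder and the cosine-similarity target are our assumptions.

```python
# Hedged sketch: fine-tune triple representations with pairwise similarity
# supervision from a pre-trained KGE model, Siamese style.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TripleEncoder(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, h, r, t):
        # One triple -> one vector, built from pre-trained h/r/t embeddings.
        return self.mlp(torch.cat([h, r, t], dim=-1))


def siamese_loss(encoder, triple_a, triple_b, target_sim):
    # Both branches share the same encoder weights (the Siamese part).
    za, zb = encoder(*triple_a), encoder(*triple_b)
    return F.mse_loss(F.cosine_similarity(za, zb, dim=-1), target_sim)
```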
- Causal Incremental Graph Convolution for Recommender System Retraining [89.25922726558875]
Real-world recommender systems need to be regularly retrained to keep up with new data.
In this work, we consider how to efficiently retrain graph convolution network (GCN) based recommender models.
arXiv Detail & Related papers (2021-08-16T04:20:09Z)
- RelWalk: A Latent Variable Model Approach to Knowledge Graph Embedding [50.010601631982425]
This paper extends the random walk model of word embeddings (Arora et al., 2016a) to Knowledge Graph Embeddings (KGEs).
We derive a scoring function that evaluates the strength of a relation R between two entities h (head) and t (tail).
We propose a learning objective motivated by the theoretical analysis to learn KGEs from a given knowledge graph.
arXiv Detail & Related papers (2021-01-25T13:31:29Z)
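The RelWalk entry above derives a theoretically motivated scoring function for the strength of a relation R between head h and tail t. The snippet below is only an illustration in that spirit, representing a relation by two matrices that transform the head and tail entity vectors; it is not the paper's derived function.

```python
# Hedged illustration of a relation-as-two-matrices scoring function;
# the RelWalk paper's actual derived form differs in its details.
import torch


def score(h, t, r1, r2):
    """h, t: [d] entity vectors; r1, r2: [d, d] relation-specific matrices."""
    # Higher when the transformed head and tail agree in direction.
    return torch.dot(r1 @ h, r2 @ t)
```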
- Class-incremental Learning with Rectified Feature-Graph Preservation [24.098892115785066]
A central theme of this paper is to learn new classes that arrive in sequential phases over time.
We propose a weighted-Euclidean regularization for old knowledge preservation.
We show how it can work with binary cross-entropy to increase class separation for effective learning of new classes.
arXiv Detail & Related papers (2020-12-15T07:26:04Z)
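For the final entry above, a minimal sketch of combining a weighted-Euclidean regularizer (old-knowledge preservation) with binary cross-entropy (new-class separation). The per-dimension weights and the balance term `lam` are our assumptions.

```python
# Hedged sketch: weighted-Euclidean preservation of old features plus BCE
# for learning new classes, following the entry's high-level recipe.
import torch
import torch.nn.functional as F


def preservation_loss(feat_new, feat_old, weights):
    """Weighted-Euclidean regularization toward the old model's features."""
    return (weights * (feat_new - feat_old.detach()) ** 2).sum(dim=1).mean()


def total_loss(logits, targets, feat_new, feat_old, weights, lam=1.0):
    # BCE separates new classes; the preservation term protects old knowledge.
    bce = F.binary_cross_entropy_with_logits(logits, targets)
    return bce + lam * preservation_loss(feat_new, feat_old, weights)
```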
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.