Multi-level Shared Knowledge Guided Learning for Knowledge Graph Completion
- URL: http://arxiv.org/abs/2405.06696v1
- Date: Wed, 8 May 2024 03:27:46 GMT
- Title: Multi-level Shared Knowledge Guided Learning for Knowledge Graph Completion
- Authors: Yongxue Shan, Jie Zhou, Jie Peng, Xin Zhou, Jiaqian Yin, Xiaodong Wang
- Abstract summary: We introduce a multi-level Shared Knowledge Guided learning method (SKG-KGC) that operates at both the dataset and task levels.
On the dataset level, SKG-KGC broadens the original dataset by identifying shared features within entity sets via text summarization.
On the task level, for the three typical KGC subtasks - head entity prediction, relation prediction, and tail entity prediction - we present an innovative multi-task learning architecture with dynamically adjusted loss weights.
- Score: 26.40236457109129
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the task of Knowledge Graph Completion (KGC), the existing datasets and their inherent subtasks carry a wealth of shared knowledge that can be utilized to enhance the representation of knowledge triplets and overall performance. However, no current studies specifically address the shared knowledge within KGC. To bridge this gap, we introduce a multi-level Shared Knowledge Guided learning method (SKG-KGC) that operates at both the dataset and task levels. On the dataset level, SKG-KGC broadens the original dataset by identifying shared features within entity sets via text summarization. On the task level, for the three typical KGC subtasks - head entity prediction, relation prediction, and tail entity prediction - we present an innovative multi-task learning architecture with dynamically adjusted loss weights. This approach allows the model to focus on more challenging and underperforming tasks, effectively mitigating the imbalance of knowledge sharing among subtasks. Experimental results demonstrate that SKG-KGC outperforms existing text-based methods significantly on three well-known datasets, with the most notable improvement on WN18RR.
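As a rough illustration of the task-level component, the sketch below shows one generic way to realize dynamically adjusted loss weights for the three subtasks: a softmax over the detached per-task losses upweights whichever subtask is currently underperforming. This is a minimal sketch of the general technique, not the exact weighting scheme used by SKG-KGC.
```python
import torch

def weighted_multitask_loss(loss_head, loss_rel, loss_tail, temperature=1.0):
    """Combine head/relation/tail prediction losses with dynamic weights.

    Generic illustration only: the weights are a softmax over the detached
    per-task losses, so the subtask with the largest current loss receives
    the largest weight. SKG-KGC's actual scheme may differ.
    """
    losses = torch.stack([loss_head, loss_rel, loss_tail])
    weights = torch.softmax(losses.detach() / temperature, dim=0)  # no gradient through the weights
    return (weights * losses).sum()

# Dummy per-batch subtask losses for head, relation, and tail prediction.
loss = weighted_multitask_loss(
    torch.tensor(1.2, requires_grad=True),
    torch.tensor(0.4, requires_grad=True),
    torch.tensor(0.9, requires_grad=True),
)
loss.backward()
```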
Related papers
- Combining Supervised Learning and Reinforcement Learning for Multi-Label Classification Tasks with Partial Labels [27.53399899573121]
We propose an RL-based framework combining the exploration ability of reinforcement learning and the exploitation ability of supervised learning.
Experimental results across various tasks, including document-level relation extraction, demonstrate the generalization and effectiveness of our framework.
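A heavily hedged sketch of how such a combination could be wired up: a supervised BCE term on the annotated labels (exploitation) plus a REINFORCE-style term on the unannotated ones (exploration). The `reward_fn` is a hypothetical placeholder; the paper's actual framework is not reproduced here.
```python
import torch
import torch.nn.functional as F

def partial_label_loss(logits, labels, observed_mask, reward_fn, rl_weight=0.1):
    """Supervised loss on observed labels + RL-style loss on unobserved ones.

    logits, labels, observed_mask: (batch, num_labels) tensors; the mask is 1
    where a label is annotated. reward_fn is an assumed, task-specific callable
    that scores sampled label assignments and returns a (batch,) reward.
    """
    probs = torch.sigmoid(logits)

    # Exploitation: binary cross-entropy restricted to annotated labels.
    bce = F.binary_cross_entropy_with_logits(logits, labels.float(), reduction="none")
    supervised = (bce * observed_mask).sum() / observed_mask.sum().clamp(min=1)

    # Exploration: sample label assignments and apply REINFORCE on the
    # unannotated positions, weighted by the external reward.
    actions = torch.bernoulli(probs).detach()
    log_prob = actions * torch.log(probs + 1e-8) + (1 - actions) * torch.log(1 - probs + 1e-8)
    unobserved = 1.0 - observed_mask
    rl = -(reward_fn(actions).unsqueeze(1) * log_prob * unobserved).sum() / unobserved.sum().clamp(min=1)

    return supervised + rl_weight * rl

# Tiny usage example with a dummy reward (number of predicted positives).
logits = torch.randn(2, 3, requires_grad=True)
labels = torch.tensor([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
mask = torch.tensor([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])
partial_label_loss(logits, labels, mask, reward_fn=lambda a: a.sum(dim=1)).backward()
```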
arXiv Detail & Related papers (2024-06-24T03:36:19Z)
- Contextualization Distillation from Large Language Model for Knowledge Graph Completion [51.126166442122546]
We introduce the Contextualization Distillation strategy, a plug-in-and-play approach compatible with both discriminative and generative KGC frameworks.
Our method begins by instructing large language models to transform compact, structural triplets into context-rich segments.
Comprehensive evaluations across diverse datasets and KGC techniques highlight the efficacy and adaptability of our approach.
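A minimal sketch of the triplet-to-context step, assuming a generic instruction template rather than the paper's exact prompt; the LLM call itself is left abstract.
```python
def triplet_to_prompt(head, relation, tail):
    """Build an instruction asking an LLM to expand a compact KG triplet
    into a context-rich passage. Illustrative wording only; the prompt
    used in the paper may differ."""
    return (
        f"Given the knowledge graph triplet ({head}, {relation}, {tail}), "
        "write a short descriptive paragraph that explains this fact with "
        "relevant background context."
    )

prompt = triplet_to_prompt("Marie Curie", "award_received", "Nobel Prize in Physics")
# The LLM's response (generation call omitted here) can then be attached to
# the triplet's textual description before it is fed to a KGC model.
```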
arXiv Detail & Related papers (2024-01-28T08:56:49Z)
- A Survey on Temporal Knowledge Graph Completion: Taxonomy, Progress, and Prospects [73.44022660932087]
Temporal characteristics are prominently evident in a substantial volume of knowledge.
The continuous emergence of new knowledge, the weakness of algorithms for extracting structured information from unstructured data, and the lack of information in source datasets are cited as reasons why knowledge graphs remain incomplete.
The task of Temporal Knowledge Graph Completion (TKGC) has attracted increasing attention, aiming to predict missing items based on the available information.
arXiv Detail & Related papers (2023-08-04T16:49:54Z)
- Recognizing Unseen Objects via Multimodal Intensive Knowledge Graph Propagation [68.13453771001522]
We propose a multimodal intensive ZSL framework that matches regions of images with corresponding semantic embeddings.
We conduct extensive experiments and evaluate our model on large-scale real-world data.
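As a simplified illustration of region-to-semantics matching (the framework's knowledge-graph propagation and learned projections are omitted), regions can be scored against class semantic embeddings with cosine similarity:
```python
import torch
import torch.nn.functional as F

def match_regions_to_classes(region_feats, class_embeds):
    """Cosine similarity between image-region features and class semantic
    embeddings, reduced to one score per class by taking the best region.

    region_feats: (num_regions, dim), assumed already projected into the
    semantic space; class_embeds: (num_classes, dim)."""
    sims = F.cosine_similarity(
        region_feats.unsqueeze(1), class_embeds.unsqueeze(0), dim=-1
    )  # (num_regions, num_classes)
    return sims.max(dim=0).values  # (num_classes,)

scores = match_regions_to_classes(torch.randn(5, 300), torch.randn(10, 300))
```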
arXiv Detail & Related papers (2023-06-14T13:07:48Z)
- Hierarchical Relational Learning for Few-Shot Knowledge Graph Completion [25.905974480733562]
We propose a hierarchical relational learning method (HiRe) for few-shot KG completion.
By jointly capturing three levels of relational information, HiRe can effectively learn and refine the meta representation of few-shot relations.
Experiments on two benchmark datasets validate the superiority of HiRe against other state-of-the-art methods.
arXiv Detail & Related papers (2022-09-02T17:57:03Z)
- KGNN: Distributed Framework for Graph Neural Knowledge Representation [38.080926752998586]
We develop a novel framework called KGNN to take full advantage of knowledge data for representation learning in a distributed learning system.
KGNN is equipped with a GNN-based encoder and a knowledge-aware decoder, which jointly explore high-order structural and attribute information.
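A single-machine sketch of the encoder/decoder idea, assuming a mean-pooling neighbor aggregator and a TransE-style decoder; KGNN's distributed design and exact architecture are not reproduced here.
```python
import torch
import torch.nn as nn

class TinyKGEncoderDecoder(nn.Module):
    """GNN-style encoder over entity neighborhoods plus a translation-based
    decoder for scoring triplets (illustrative stand-in, not KGNN itself)."""

    def __init__(self, num_entities, num_relations, dim=64):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)
        self.rel = nn.Embedding(num_relations, dim)
        self.agg = nn.Linear(2 * dim, dim)

    def encode(self, entity_ids, neighbor_ids):
        # Combine an entity's own embedding with mean-pooled neighbor embeddings.
        self_emb = self.ent(entity_ids)                 # (batch, dim)
        neigh_emb = self.ent(neighbor_ids).mean(dim=1)  # (batch, dim)
        return torch.relu(self.agg(torch.cat([self_emb, neigh_emb], dim=-1)))

    def score(self, head_emb, relation_ids, tail_emb):
        # TransE-style decoder: smaller ||h + r - t|| means a more plausible triplet.
        return -(head_emb + self.rel(relation_ids) - tail_emb).norm(p=2, dim=-1)
```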
arXiv Detail & Related papers (2022-05-17T12:32:02Z)
- Knowledge-Aware Meta-learning for Low-Resource Text Classification [87.89624590579903]
This paper studies a low-resource text classification problem and bridges the gap between meta-training and meta-testing tasks.
We propose KGML, which introduces an additional representation for each sentence learned from the extracted sentence-specific knowledge graph.
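A minimal sketch of the general idea, assuming a fixed mean-pool-and-concatenate fusion; KGML's actual fusion of sentence and graph information is learned and more involved.
```python
import torch

def augment_sentence_representation(sentence_emb, kg_entity_embs):
    """Append a representation of the sentence-specific knowledge graph
    (here simply mean-pooled entity embeddings) to the sentence embedding.

    sentence_emb: (dim_s,); kg_entity_embs: (num_entities, dim_k)."""
    return torch.cat([sentence_emb, kg_entity_embs.mean(dim=0)], dim=-1)

augmented = augment_sentence_representation(torch.randn(768), torch.randn(4, 100))
```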
arXiv Detail & Related papers (2021-09-10T07:20:43Z)
- ENT-DESC: Entity Description Generation by Exploring Knowledge Graph [53.03778194567752]
In practice, the input knowledge can be more than enough, since the output description may cover only the most significant knowledge.
We introduce a large-scale and challenging dataset to facilitate the study of such a practical scenario in KG-to-text.
We propose a multi-graph structure that is able to represent the original graph information more comprehensively.
arXiv Detail & Related papers (2020-04-30T14:16:19Z)
- KACC: A Multi-task Benchmark for Knowledge Abstraction, Concretization and Completion [99.47414073164656]
A comprehensive knowledge graph (KG) contains an instance-level entity graph and an ontology-level concept graph.
The two-view KG provides a testbed for models to "simulate" humans' abilities in knowledge abstraction, concretization, and completion.
We propose a unified KG benchmark by improving existing benchmarks in terms of dataset scale, task coverage, and difficulty.
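To make the two-view structure concrete, a minimal container might look like the sketch below; the field names are illustrative and do not reflect KACC's actual data format.
```python
from dataclasses import dataclass, field

@dataclass
class TwoViewKG:
    """Instance-level entity graph, ontology-level concept graph, and the
    cross-view links connecting them (illustrative structure only)."""
    entity_triples: list = field(default_factory=list)   # (head_entity, relation, tail_entity)
    concept_triples: list = field(default_factory=list)  # (sub_concept, relation, super_concept)
    instance_of: list = field(default_factory=list)      # (entity, concept) cross-view links

kg = TwoViewKG()
kg.entity_triples.append(("Paris", "capital_of", "France"))
kg.concept_triples.append(("city", "subclass_of", "settlement"))
kg.instance_of.append(("Paris", "city"))
```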
arXiv Detail & Related papers (2020-04-28T16:21:57Z)