Towards Robust Knowledge Graph Embedding via Multi-task Reinforcement
Learning
- URL: http://arxiv.org/abs/2111.06103v1
- Date: Thu, 11 Nov 2021 08:51:37 GMT
- Title: Towards Robust Knowledge Graph Embedding via Multi-task Reinforcement
Learning
- Authors: Zhao Zhang, Fuzhen Zhuang, Hengshu Zhu, Chao Li, Hui Xiong, Qing He
and Yongjun Xu
- Abstract summary: Most existing knowledge graph embedding methods assume that all the triple facts in KGs are correct.
This assumption leads to low-quality and unreliable representations of KGs.
We propose a general multi-task reinforcement learning framework that greatly alleviates the noisy data problem.
- Score: 44.38215560989223
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge graphs (KGs) play a pivotal role in AI-related
applications. Despite their large sizes, existing KGs are far from complete
and comprehensive. To continuously enrich KGs, automatic knowledge
construction and update mechanisms are usually employed, which inevitably
introduce plenty of noise. However, most existing knowledge graph embedding
(KGE) methods assume that all the triple facts in KGs are correct, and
project both entities and relations into a low-dimensional space without
considering noise and knowledge conflicts, which leads to low-quality and
unreliable representations of KGs. To this end, we propose a general
multi-task reinforcement learning framework that can greatly alleviate the
noisy data problem. In our framework, reinforcement learning is exploited to
choose high-quality knowledge triples while filtering out the noisy ones.
Moreover, to take full advantage of the correlations among semantically
similar relations, the triple selection processes of similar relations are
trained collectively via multi-task learning. Finally, we extend the popular
KGE models TransE, DistMult, ConvE and RotatE with the proposed framework.
Experimental validation shows that our approach enhances existing KGE models
and yields more robust representations of KGs in noisy scenarios.
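As a rough illustration of the framework's core idea, here is a minimal sketch (not the authors' released implementation) that pairs standard KGE score functions with a REINFORCE-style Bernoulli selection policy: high-scoring triples are kept and likely-noisy ones are filtered before embedding training. The dimensionality, policy parameterization, and reward shaping are assumptions chosen for brevity; in the multi-task setting described above, semantically similar relations would share or jointly train such a policy.

```python
# Minimal sketch (not the authors' code): standard KGE score functions
# plus a REINFORCE-style Bernoulli policy that keeps high-scoring triples
# and filters likely-noisy ones before embedding training.
import numpy as np

rng = np.random.default_rng(0)
DIM = 50  # embedding dimensionality (an assumption for this sketch)


def transe_score(h, r, t):
    """TransE: a plausible triple satisfies h + r ~ t (higher is better)."""
    return -np.linalg.norm(h + r - t)


def distmult_score(h, r, t):
    """DistMult: trilinear product <h, r, t>."""
    return float(np.sum(h * r * t))


def rotate_score(h, r, t):
    """RotatE: the relation rotates the head in complex space. Here h and t
    are 2*DIM real vectors read as DIM complex numbers; r holds DIM phases."""
    hc, tc = h[:DIM] + 1j * h[DIM:], t[:DIM] + 1j * t[DIM:]
    return -np.linalg.norm(hc * np.exp(1j * r) - tc)


def select_triples(scores, theta, temperature=1.0):
    """Bernoulli selection policy: keep each triple with probability
    sigmoid((score - theta) / temperature), so low-scoring (likely noisy)
    triples are filtered out with high probability."""
    p_keep = 1.0 / (1.0 + np.exp(-(scores - theta) / temperature))
    keep = rng.random(len(scores)) < p_keep
    return keep, p_keep


def reinforce_update(theta, keep, p_keep, reward, lr=0.01, temperature=1.0):
    """One REINFORCE step on the policy parameter theta. The reward would
    reflect how well the KGE model trained on the kept triples performs;
    d log pi / d theta = (p_keep - keep) / temperature."""
    grad = np.sum((p_keep - keep.astype(float)) / temperature)
    return theta + lr * reward * grad


# Toy usage with TransE scoring; the second triple plays the role of noise.
ent = {n: rng.normal(scale=0.1, size=DIM) for n in ("berlin", "paris", "germany")}
rel = {"capital_of": rng.normal(scale=0.1, size=DIM)}
triples = [("berlin", "capital_of", "germany"),
           ("paris", "capital_of", "germany")]
scores = np.array([transe_score(ent[h], rel[r], ent[t]) for h, r, t in triples])
keep, p_keep = select_triples(scores, theta=float(scores.mean()))
```

The `reinforce_update` step shows how the policy would be nudged by a reward reflecting the KGE model's performance on the retained triples.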
Related papers
- Contextualization Distillation from Large Language Model for Knowledge Graph Completion [51.126166442122546]
We introduce the Contextualization Distillation strategy, a plug-and-play approach compatible with both discriminative and generative KGC frameworks.
Our method begins by instructing large language models to transform compact, structural triplets into context-rich segments.
Comprehensive evaluations across diverse datasets and KGC techniques highlight the efficacy and adaptability of our approach.
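As a rough illustration of that first step, the sketch below verbalizes a structural triplet into an instruction for an LLM; the template wording and function name are hypothetical, not the paper's actual prompt.

```python
def contextualization_prompt(head: str, relation: str, tail: str) -> str:
    """Turn a compact triplet into an LLM instruction asking for a
    context-rich passage (hypothetical template, for illustration only)."""
    return (
        f"Given the knowledge graph triplet ({head}, {relation}, {tail}), "
        "write a short, factual paragraph that describes this relationship "
        "and adds relevant context."
    )


print(contextualization_prompt("Marie Curie", "award_received",
                               "Nobel Prize in Physics"))
```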
arXiv Detail & Related papers (2024-01-28T08:56:49Z)
- Improving the Robustness of Knowledge-Grounded Dialogue via Contrastive Learning [71.8876256714229]
We propose an entity-based contrastive learning framework for improving the robustness of knowledge-grounded dialogue systems.
Our method achieves new state-of-the-art performance in terms of automatic evaluation scores.
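The summary does not spell out the contrastive objective; a generic InfoNCE-style loss over entity representations, one standard way such a framework could be instantiated, looks like this:

```python
import numpy as np


def info_nce_loss(anchor, positive, negatives, tau=0.1):
    """Generic InfoNCE contrastive loss: pull the anchor representation
    toward its positive and away from the negatives (cosine similarity,
    temperature tau). A standard formulation, not the paper's exact loss."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    sims = np.array([cos(anchor, positive)] +
                    [cos(anchor, n) for n in negatives]) / tau
    sims -= sims.max()  # numerical stability
    return -np.log(np.exp(sims[0]) / np.exp(sims).sum())


rng = np.random.default_rng(1)
a, p = rng.normal(size=16), rng.normal(size=16)
negs = [rng.normal(size=16) for _ in range(4)]
print(info_nce_loss(a, p, negs))
```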
arXiv Detail & Related papers (2024-01-09T05:16:52Z)
- Collective Knowledge Graph Completion with Mutual Knowledge Distillation [11.922522192224145]
We study the problem of multi-KG completion, where we focus on maximizing the collective knowledge from different KGs.
We propose a novel method called CKGC-CKD that uses relation-aware graph convolutional network encoder models on both individual KGs and a large fused KG.
Experimental results on multilingual datasets have shown that our method outperforms all state-of-the-art models in the KGC task.
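Mutual knowledge distillation between the individual-KG models and the fused-KG model is commonly formulated as a symmetric KL term between their predictive distributions; the sketch below is that generic formulation, not necessarily CKGC-CKD's exact objective.

```python
import numpy as np


def mutual_distillation_loss(p_a, p_b, eps=1e-12):
    """Symmetric KL divergence between two models' predictive distributions
    over candidate entities, so each model also learns from the other."""
    a = np.clip(p_a, eps, None); a /= a.sum()
    b = np.clip(p_b, eps, None); b /= b.sum()
    return 0.5 * (np.sum(a * np.log(a / b)) + np.sum(b * np.log(b / a)))


print(mutual_distillation_loss(np.array([0.7, 0.2, 0.1]),
                               np.array([0.5, 0.3, 0.2])))
```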
arXiv Detail & Related papers (2023-05-25T09:49:40Z)
- Hierarchical Relational Learning for Few-Shot Knowledge Graph Completion [25.905974480733562]
We propose a hierarchical relational learning method (HiRe) for few-shot KG completion.
By jointly capturing three levels of relational information, HiRe can effectively learn and refine the meta representation of few-shot relations.
Experiments on two benchmark datasets validate the superiority of HiRe against other state-of-the-art methods.
arXiv Detail & Related papers (2022-09-02T17:57:03Z)
- BertNet: Harvesting Knowledge Graphs with Arbitrary Relations from Pretrained Language Models [65.51390418485207]
We propose a new approach for harvesting massive KGs of arbitrary relations from pretrained LMs.
With minimal input of a relation definition, the approach efficiently searches the vast entity-pair space to extract diverse, accurate knowledge.
We deploy the approach to harvest KGs of over 400 new relations from different LMs.
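A toy sketch of such a harvesting loop follows; the `score_fn` stub stands in for a real pretrained-LM plausibility scorer, and the brute-force enumeration ignores the paper's efficient search strategy.

```python
from itertools import product


def harvest_relation(template, entities, score_fn, top_k=3):
    """Enumerate candidate (head, tail) pairs, score each verbalized
    statement, and keep the top_k highest-scoring pairs."""
    pairs = [(h, t) for h, t in product(entities, repeat=2) if h != t]
    scored = [(score_fn(template.format(h=h, t=t)), h, t) for h, t in pairs]
    return sorted(scored, reverse=True)[:top_k]


# Toy scorer standing in for a pretrained LM's plausibility estimate.
dummy_score = lambda text: -len(text)
print(harvest_relation("{h} is the capital of {t}.",
                       ["Paris", "France", "Tokyo", "Japan"], dummy_score))
```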
arXiv Detail & Related papers (2022-06-28T19:46:29Z)
- Knowledge Graph Contrastive Learning for Recommendation [32.918864602360884]
We design a general Knowledge Graph Contrastive Learning framework to alleviate the information noise for knowledge graph-enhanced recommender systems.
Specifically, we propose a knowledge graph augmentation schema to suppress KG noise in information aggregation.
We exploit additional supervision signals from the KG augmentation process to guide a cross-view contrastive learning paradigm.
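One common way to realize a KG augmentation schema of this kind is stochastic edge dropout, which yields two views of the graph for cross-view contrastive learning; this is a generic sketch, not necessarily the paper's exact schema.

```python
import numpy as np

rng = np.random.default_rng(42)


def edge_dropout_view(triples, drop_rate=0.1):
    """Create one stochastic view of the KG by randomly dropping edges;
    two such views serve as positives for cross-view contrastive learning."""
    mask = rng.random(len(triples)) >= drop_rate
    return [t for t, m in zip(triples, mask) if m]


kg = [("u1", "likes", "i1"), ("i1", "brand", "b1"), ("i1", "category", "c1")]
view_a, view_b = edge_dropout_view(kg), edge_dropout_view(kg)
```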
arXiv Detail & Related papers (2022-05-02T15:24:53Z)
- Link-Intensive Alignment for Incomplete Knowledge Graphs [28.213397255810936]
In this work, we address the problem of aligning incomplete KGs with representation learning.
Our framework exploits two feature channels: transitivity-based and proximity-based.
The two feature channels are jointly learned to exchange important features between the input KGs.
Also, we develop a missing links detector that discovers and recovers the missing links during the training process.
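In its simplest form, such a detector scores every unseen triple and recovers those above a confidence threshold; the naive sketch below illustrates the idea (the paper's detector is learned jointly during training, and `score_fn` is a stand-in).

```python
from itertools import product


def recover_missing_links(entities, relations, known, score_fn, threshold=0.5):
    """Naive missing-link recovery: add any unseen triple whose plausibility
    score exceeds the threshold back into the KG."""
    recovered = []
    for h, r, t in product(entities, relations, entities):
        if h != t and (h, r, t) not in known and score_fn(h, r, t) > threshold:
            recovered.append((h, r, t))
    return recovered


ents, rels = ["a", "b", "c"], ["linked_to"]
known = {("a", "linked_to", "b")}
toy_score = lambda h, r, t: 0.9 if (h, t) == ("b", "c") else 0.1
print(recover_missing_links(ents, rels, known, toy_score))
```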
arXiv Detail & Related papers (2021-12-17T00:41:28Z)
- EngineKGI: Closed-Loop Knowledge Graph Inference [37.15381932994768]
EngineKGI is a novel closed-loop KG inference framework.
It combines KGE and rule learning to complement each other in a closed-loop pattern.
Our model outperforms other baselines on link prediction tasks.
arXiv Detail & Related papers (2021-12-02T08:02:59Z)
- DisenKGAT: Knowledge Graph Embedding with Disentangled Graph Attention Network [48.38954651216983]
We propose a novel Disentangled Knowledge Graph Attention Network (DisenKGAT) for knowledge graphs.
DisenKGAT uses both micro-disentanglement and macro-disentanglement to learn the disentangled representations behind knowledge graphs.
Our approach exhibits strong robustness and the flexibility to adapt to various score functions.
arXiv Detail & Related papers (2021-08-22T04:10:35Z)
- Learning Intents behind Interactions with Knowledge Graph for Recommendation [93.08709357435991]
Knowledge graph (KG) plays an increasingly important role in recommender systems.
Existing GNN-based models fail to identify user-item relations at the fine-grained level of intents.
We propose a new model, Knowledge Graph-based Intent Network (KGIN).
arXiv Detail & Related papers (2021-02-14T03:21:36Z)