Joint Pre-training and Local Re-training: Transferable Representation
Learning on Multi-source Knowledge Graphs
- URL: http://arxiv.org/abs/2306.02679v1
- Date: Mon, 5 Jun 2023 08:11:59 GMT
- Title: Joint Pre-training and Local Re-training: Transferable Representation
Learning on Multi-source Knowledge Graphs
- Authors: Zequn Sun and Jiacheng Huang and Jinghao Lin and Xiaozhou Xu and Qijin
Chen and Wei Hu
- Abstract summary: We pre-train a large teacher KG embedding model over linked multi-source KGs and distill knowledge to train a student model for a task-specific KG.
We conduct extensive experiments to demonstrate the effectiveness and efficiency of our framework.
- Score: 17.78174810566248
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we present the "joint pre-training and local re-training"
framework for learning and applying multi-source knowledge graph (KG)
embeddings. We are motivated by the fact that different KGs contain
complementary information to improve KG embeddings and downstream tasks. We
pre-train a large teacher KG embedding model over linked multi-source KGs and
distill knowledge to train a student model for a task-specific KG. To enable
knowledge transfer across different KGs, we use entity alignment to build a
linked subgraph for connecting the pre-trained KGs and the target KG. The
linked subgraph is re-trained for three-level knowledge distillation from the
teacher to the student, i.e., feature knowledge distillation, network knowledge
distillation, and prediction knowledge distillation, to generate more
expressive embeddings. The teacher model can be reused for different target KGs
and tasks without having to train from scratch. We conduct extensive
experiments to demonstrate the effectiveness and efficiency of our framework.
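The abstract names three distillation levels: feature, network, and prediction. A minimal PyTorch sketch of how such a combined objective could look is given below; the tensor layout, the student-to-teacher projection, the temperature, and the loss weights are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def three_level_distillation(teacher_feat, student_feat,
                             teacher_hidden, student_hidden,
                             teacher_logits, student_logits,
                             proj, temperature=2.0,
                             w_feat=1.0, w_net=1.0, w_pred=1.0):
    """Hypothetical combination of the three distillation levels.

    teacher_* tensors come from the frozen pre-trained teacher (assumed
    detached), student_* from the task-specific student; `proj` is an
    assumed learned linear layer mapping student features into the
    teacher's embedding space, since their dimensions may differ.
    """
    # Feature knowledge distillation: match entity embeddings.
    loss_feat = F.mse_loss(proj(student_feat), teacher_feat)

    # Network knowledge distillation: match intermediate layer outputs
    # (assumed here to have the same shape).
    loss_net = F.mse_loss(student_hidden, teacher_hidden)

    # Prediction knowledge distillation: match softened score
    # distributions over candidate entities (standard KD loss).
    t = temperature
    loss_pred = F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)

    return w_feat * loss_feat + w_net * loss_net + w_pred * loss_pred
```

In practice a distillation term like this would be added to the student's own task loss on the target KG.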
Related papers
- Towards a Knowledge Graph for Teaching Knowledge Graphs [2.59358872905719]
This poster paper describes an ongoing research project to create a use-case-driven Knowledge Graph resource tailored to the needs of education in Knowledge Graphs (KGs).
We gather resources related to KG courses from lectures offered by the Semantic Web community, with the help of the COST Action Distributed Knowledge Graphs and the interest group on KGs at The Alan Turing Institute.
Our goal is to create a resource-focused KG with multiple interconnected semantic layers that interlink topics, courses, and materials with each lecturer.
arXiv Detail & Related papers (2024-11-02T16:39:45Z)
- Decoding on Graphs: Faithful and Sound Reasoning on Knowledge Graphs through Generation of Well-Formed Chains [66.55612528039894]
Knowledge Graphs (KGs) can serve as reliable knowledge sources for question answering (QA).
We present DoG (Decoding on Graphs), a novel framework that facilitates a deep synergy between LLMs and KGs.
Experiments across various KGQA tasks with different background KGs demonstrate that DoG achieves superior and robust performance.
arXiv Detail & Related papers (2024-10-24T04:01:40Z)
- On the Sweet Spot of Contrastive Views for Knowledge-enhanced Recommendation [49.18304766331156]
We propose a new contrastive learning framework for KG-enhanced recommendation.
We construct two separate contrastive views for the KG and the interaction graph (IG), and maximize their mutual information.
Extensive experimental results on three real-world datasets demonstrate the effectiveness and efficiency of our method.
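Mutual-information maximization between two views is commonly realized with an InfoNCE bound; the sketch below shows one such formulation, assuming the rows of the two view matrices are aligned positives. This is the generic pattern, not necessarily the paper's exact estimator.

```python
import torch
import torch.nn.functional as F

def infonce(z_kg, z_ig, temperature=0.2):
    """InfoNCE lower bound on mutual information between paired
    KG-view and interaction-graph-view embeddings.

    z_kg, z_ig: (batch, dim) representations of the same items under
    the two views; row i of each tensor forms a positive pair.
    """
    z_kg = F.normalize(z_kg, dim=-1)
    z_ig = F.normalize(z_ig, dim=-1)
    logits = z_kg @ z_ig.t() / temperature                 # pairwise similarities
    labels = torch.arange(z_kg.size(0), device=z_kg.device)  # positives on the diagonal
    # Symmetric cross-entropy: each view predicts its counterpart.
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.t(), labels)) / 2
```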
arXiv Detail & Related papers (2023-09-23T14:05:55Z)
- Collective Knowledge Graph Completion with Mutual Knowledge Distillation [11.922522192224145]
We study the problem of multi-KG completion, where we focus on maximizing the collective knowledge from different KGs.
We propose a novel method called CKGC-CKD that uses relation-aware graph convolutional network encoder models on both individual KGs and a large fused KG.
Experimental results on multilingual datasets have shown that our method outperforms all state-of-the-art models in the KGC task.
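A minimal sketch of mutual (bidirectional) distillation between an individual-KG model and the fused-KG model, assuming both score the same aligned completion queries; this illustrates the generic mutual-KD pattern rather than CKGC-CKD's exact objective.

```python
import torch.nn.functional as F

def mutual_distillation_loss(scores_individual, scores_fused, temperature=1.0):
    """Bidirectional distillation between the completion scores of an
    individual-KG model and the fused-KG model. Each model treats the
    other's softened predictions as a detached target, so knowledge
    flows in both directions.
    """
    t = temperature
    log_p_ind = F.log_softmax(scores_individual / t, dim=-1)
    log_p_fus = F.log_softmax(scores_fused / t, dim=-1)
    # Fused model teaches the individual model, and vice versa.
    to_ind = F.kl_div(log_p_ind, log_p_fus.exp().detach(), reduction="batchmean")
    to_fus = F.kl_div(log_p_fus, log_p_ind.exp().detach(), reduction="batchmean")
    return (to_ind + to_fus) * t * t
```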
arXiv Detail & Related papers (2023-05-25T09:49:40Z)
- Lifelong Embedding Learning and Transfer for Growing Knowledge Graphs [22.88552158340435]
Existing knowledge graph embedding models primarily focus on static KGs.
New facts and previously unseen entities and relations continually emerge, necessitating an embedding model that can quickly learn and transfer new knowledge through growth.
We consider knowledge transfer and retention when learning on growing snapshots of a KG, without having to learn embeddings from scratch.
The proposed model includes a masked KG autoencoder for embedding learning and update, with an embedding transfer strategy to inject the learned knowledge into the new entity and relation embeddings, and an embedding regularization method to avoid catastrophic forgetting.
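The forgetting-avoidance idea can be illustrated with a simple anchor regularizer that penalizes drift in the embeddings of previously seen entities; the L2 form and weight below are assumptions, not the paper's exact regularization method.

```python
import torch

def retention_penalty(current_emb, previous_emb, old_ids, weight=0.1):
    """Regularize embeddings of entities seen in earlier snapshots
    toward their previously learned values, so that fitting the new
    snapshot does not catastrophically overwrite old knowledge.
    (A simple L2 anchor; the paper's regularizer may differ.)

    current_emb: (num_entities_now, dim) trainable embedding table
    previous_emb: (num_entities_old, dim) frozen snapshot embeddings
    old_ids: indices of previously seen entities in the current table
    """
    drift = current_emb[old_ids] - previous_emb.detach()
    return weight * drift.pow(2).sum(dim=-1).mean()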
arXiv Detail & Related papers (2022-11-29T00:43:44Z)
- BertNet: Harvesting Knowledge Graphs with Arbitrary Relations from Pretrained Language Models [65.51390418485207]
We propose a new approach of harvesting massive KGs of arbitrary relations from pretrained LMs.
With minimal input of a relation definition, the approach efficiently searches the vast entity-pair space to extract diverse and accurate knowledge.
We deploy the approach to harvest KGs of over 400 new relations from different LMs.
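A hypothetical sketch of the core scoring step: rank candidate entity pairs for a relation expressed as a prompt template, using a masked language model's pseudo-log-likelihood. The template, model choice, and scoring rule are assumptions; BertNet's actual consistency-based search over the pair space is considerably more elaborate.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

@torch.no_grad()
def pair_score(head: str, tail: str, template: str) -> float:
    """Score how strongly the LM supports (head, relation, tail) via the
    pseudo-log-likelihood of the filled-in prompt. `template` encodes
    the relation, e.g. "{h} is the capital of {t}."
    """
    text = template.format(h=head, t=tail)
    enc = tokenizer(text, return_tensors="pt")
    logits = model(**enc).logits.log_softmax(dim=-1)
    ids = enc["input_ids"][0]
    # Sum each token's log-probability under the model (a rough
    # pseudo-likelihood, special tokens included for simplicity).
    return logits[0, torch.arange(len(ids)), ids].sum().item()

# Rank candidate pairs for a relation defined by a prompt template.
pairs = [("Paris", "France"), ("Paris", "Italy")]
ranked = sorted(pairs,
                key=lambda p: pair_score(*p, "{h} is the capital of {t}."),
                reverse=True)
```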
arXiv Detail & Related papers (2022-06-28T19:46:29Z)
- Collaborative Knowledge Graph Fusion by Exploiting the Open Corpus [59.20235923987045]
It is challenging to enrich a Knowledge Graph with newly harvested triples while maintaining the quality of the knowledge representation.
This paper proposes a system to refine a KG using information harvested from an additional corpus.
arXiv Detail & Related papers (2022-06-15T12:16:10Z)
- Meta-Learning Based Knowledge Extrapolation for Knowledge Graphs in the Federated Setting [43.85991094675398]
We study the knowledge extrapolation problem to embed new components (i.e., entities and relations) that come with emerging knowledge graphs (KGs) in the federated setting.
In this problem, a model trained on an existing KG needs to embed an emerging KG with unseen entities and relations.
We introduce the meta-learning setting, where a set of tasks are sampled on the existing KG to mimic the link prediction task on the emerging KG.
Based on sampled tasks, we meta-train a graph neural network framework that can construct features for unseen components based on structural information and output embeddings for them.
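A simplified stand-in for the task sampler: each episode draws a support subgraph, whose entities are treated as if unseen, and query triples over those entities for the link-prediction objective. The sampling sizes and data layout are assumptions for illustration.

```python
import random

def sample_episode(triples, num_support=512, num_query=64):
    """Sample one meta-task from the existing KG's (head, rel, tail)
    triples: a support subgraph, from which a GNN would construct
    structural features for its entities, and query triples that mimic
    link prediction on an emerging KG.
    """
    triples = list(triples)
    random.shuffle(triples)
    support = triples[:num_support]
    entities = {h for h, _, t in support} | {t for h, _, t in support}
    # Queries only involve entities covered by the support graph, so the
    # model must predict links using structure-derived features alone.
    query = [(h, r, t) for h, r, t in triples[num_support:]
             if h in entities and t in entities][:num_query]
    return support, query
```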
arXiv Detail & Related papers (2022-05-10T06:27:32Z)
- Standing on the Shoulders of Predecessors: Meta-Knowledge Transfer for Knowledge Graphs [8.815143812846392]
We call such knowledge meta-knowledge, and refer to the problem of transferring meta-knowledge from constructed (source) KGs to new (target) KGs as meta-knowledge transfer for knowledge graphs.
We propose MorsE, which represents the meta-knowledge via Knowledge Graph Embedding and learns it by Meta-Learning.
MorsE is able to learn and transfer meta-knowledge between KGs effectively, and outperforms existing state-of-the-art models.
arXiv Detail & Related papers (2021-10-27T04:57:16Z)
- Language Models are Open Knowledge Graphs [75.48081086368606]
Recent deep language models automatically acquire knowledge from large-scale corpora via pre-training.
In this paper, we propose an unsupervised method to cast the knowledge contained within language models into KGs.
We show that KGs are constructed with a single forward pass of the pre-trained language models (without fine-tuning) over the corpora.
arXiv Detail & Related papers (2020-10-22T18:01:56Z)
- Generative Adversarial Zero-Shot Relational Learning for Knowledge Graphs [96.73259297063619]
We consider a novel zero-shot learning formulation that frees KG construction from the cumbersome manual curation of new relations.
For newly-added relations, we attempt to learn their semantic features from their text descriptions.
We leverage Generative Adversarial Networks (GANs) to establish the connection between the text and knowledge-graph domains.
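A hedged sketch of the adversarial pairing this setup suggests: a generator maps text-description embeddings to KG-space relation embeddings, while a discriminator separates them from relation embeddings learned on observed triples. The architecture and all dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RelationGenerator(nn.Module):
    """Map a relation's text-description embedding (plus noise) to a
    relation embedding in KG space, so newly added relations can be
    embedded zero-shot from text alone. Sizes are illustrative.
    """
    def __init__(self, text_dim=768, noise_dim=64, kg_dim=200):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(text_dim + noise_dim, 512),
            nn.LeakyReLU(0.2),
            nn.Linear(512, kg_dim),
        )

    def forward(self, text_emb):
        noise = torch.randn(text_emb.size(0), self.noise_dim,
                            device=text_emb.device)
        return self.net(torch.cat([text_emb, noise], dim=-1))


class RelationDiscriminator(nn.Module):
    """Score whether a relation embedding is real (learned from observed
    KG triples) or generated from text; trained adversarially against
    the generator.
    """
    def __init__(self, kg_dim=200):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(kg_dim, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )

    def forward(self, rel_emb):
        return self.net(rel_emb)
```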
arXiv Detail & Related papers (2020-01-08T01:19:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.