I Know What You Do Not Know: Knowledge Graph Embedding via
Co-distillation Learning
- URL: http://arxiv.org/abs/2208.09828v1
- Date: Sun, 21 Aug 2022 07:34:37 GMT
- Title: I Know What You Do Not Know: Knowledge Graph Embedding via
Co-distillation Learning
- Authors: Yang Liu, Zequn Sun, Guangyao Li and Wei Hu
- Abstract summary: Knowledge graph embedding seeks to learn vector representations for entities and relations.
Recent studies have used pre-trained language models to learn embeddings based on the textual information of entities and relations.
We propose CoLE, a Co-distillation Learning method for KG Embedding that exploits the complementarity of graph structures and text information.
- Score: 16.723470319188102
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowledge graph (KG) embedding seeks to learn vector representations for
entities and relations. Conventional models reason over graph structures, but
they suffer from the issues of graph incompleteness and long-tail entities.
Recent studies have used pre-trained language models to learn embeddings based
on the textual information of entities and relations, but they cannot take
advantage of graph structures. In this paper, we show empirically that these two
kinds of features are complementary for KG embedding. To this end, we propose
CoLE, a Co-distillation Learning method for KG Embedding that exploits the
complementarity of graph structures and text information. Its graph embedding
model employs a Transformer to reconstruct the representation of an entity from
its neighborhood subgraph. Its text embedding model uses a pre-trained language
model to generate entity representations from the soft prompts of their names,
descriptions, and relational neighbors. To let the two models promote each
other, we propose co-distillation learning that allows them to distill
selective knowledge from each other's prediction logits. In our co-distillation
learning, each model serves as both a teacher and a student. Experiments on
benchmark datasets demonstrate that the two models outperform their related
baselines, and the ensemble method CoLE with co-distillation learning advances
the state-of-the-art of KG embedding.
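To make the co-distillation objective concrete, the following sketch shows two link-prediction models exchanging soft targets through a temperature-scaled KL term on each other's logits, so each model acts as both teacher and student. It is a minimal illustration, not the authors' implementation: the confidence-based selection mask, the temperature, and the loss weighting are assumptions standing in for CoLE's selective knowledge transfer.

```python
import torch
import torch.nn.functional as F

def codistill_loss(logits_a, logits_b, labels, temperature=2.0):
    """Mutual distillation between two KG-embedding models (illustrative sketch).

    Each model is both teacher and student: A learns from B's soft predictions
    and vice versa. The correctness mask below is a stand-in for CoLE's
    'selective' knowledge transfer, not the paper's exact criterion.
    """
    ce_a = F.cross_entropy(logits_a, labels)   # hard-label loss for model A
    ce_b = F.cross_entropy(logits_b, labels)   # hard-label loss for model B

    # Soft targets from each model's logits, detached so the teacher side
    # is frozen within the distillation term.
    p_a = F.softmax(logits_a.detach() / temperature, dim=-1)
    p_b = F.softmax(logits_b.detach() / temperature, dim=-1)

    # Only distill on triples the teaching model currently ranks correctly
    # (assumed selection rule, for illustration only).
    teach_a_ok = (logits_a.argmax(-1) == labels).float()
    teach_b_ok = (logits_b.argmax(-1) == labels).float()

    kl_a_from_b = F.kl_div(F.log_softmax(logits_a / temperature, dim=-1),
                           p_b, reduction="none").sum(-1)
    kl_b_from_a = F.kl_div(F.log_softmax(logits_b / temperature, dim=-1),
                           p_a, reduction="none").sum(-1)

    distill = (teach_b_ok * kl_a_from_b).mean() + (teach_a_ok * kl_b_from_a).mean()
    return ce_a + ce_b + (temperature ** 2) * distill

# Toy usage: a graph-based and a text-based model scoring 100 candidate entities.
logits_graph = torch.randn(8, 100)
logits_text = torch.randn(8, 100)
labels = torch.randint(0, 100, (8,))
loss = codistill_loss(logits_graph, logits_text, labels)
```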
Related papers
- Graph Relation Distillation for Efficient Biomedical Instance
Segmentation [80.51124447333493]
We propose a graph relation distillation approach for efficient biomedical instance segmentation.
We introduce two graph distillation schemes deployed at both the intra-image level and the inter-image level.
Experimental results on a number of biomedical datasets validate the effectiveness of our approach.
arXiv Detail & Related papers (2024-01-12T04:41:23Z) - Connector 0.5: A unified framework for graph representation learning [5.398580049917152]
We introduce a novel graph representation framework covering various graph embedding models, ranging from shallow to state-of-the-art models.
We plan to build an efficient open-source framework that can provide deep graph embedding models to represent structural relations in graphs.
arXiv Detail & Related papers (2023-04-25T23:28:38Z) - Distilling Knowledge from Self-Supervised Teacher by Embedding Graph
Alignment [52.704331909850026]
We formulate a new knowledge distillation framework to transfer the knowledge from self-supervised pre-trained models to any other student network.
Inspired by the spirit of instance discrimination in self-supervised learning, we model the instance-instance relations by a graph formulation in the feature embedding space.
Our distillation scheme can be flexibly applied to transfer the self-supervised knowledge to enhance representation learning on various student networks.
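A quick way to picture the instance-relation idea: build a pairwise similarity graph over the batch in both the teacher's and the student's embedding spaces, then penalize the mismatch between the two graphs. The sketch below is one reading of the abstract; the cosine-similarity graph and mean-squared-error alignment are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def relation_graph(embeddings):
    """Cosine-similarity graph over a batch of instance embeddings."""
    z = F.normalize(embeddings, dim=-1)
    return z @ z.t()                      # (batch, batch) instance-instance relations

def graph_alignment_loss(student_emb, teacher_emb):
    """Align the student's instance graph with the frozen teacher's graph
    (illustrative sketch; the paper may use a different distance)."""
    g_student = relation_graph(student_emb)
    g_teacher = relation_graph(teacher_emb.detach())
    return F.mse_loss(g_student, g_teacher)
```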
arXiv Detail & Related papers (2022-11-23T19:27:48Z) - DGEKT: A Dual Graph Ensemble Learning Method for Knowledge Tracing [20.71423236895509]
We present a novel Dual Graph Ensemble learning method for Knowledge Tracing (DGEKT).
DGEKT establishes a dual graph structure of students' learning interactions to capture the heterogeneous exercise-concept associations.
Online knowledge distillation provides its predictions on all exercises as extra supervision for better modeling ability.
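One way to read the online-distillation step is that the ensemble of the two branches' predictions serves as a soft teacher for each branch. The sketch below encodes that reading for binary correctness prediction; the simple averaging and equal loss weights are assumptions.

```python
import torch
import torch.nn.functional as F

def online_distillation_loss(pred_a, pred_b, answers):
    """Two knowledge-tracing branches supervised by the observed answers plus
    the ensemble of their own predictions (illustrative sketch).

    pred_a, pred_b: per-exercise correctness probabilities from the two branches.
    answers: observed 0/1 responses as floats.
    """
    bce_a = F.binary_cross_entropy(pred_a, answers)
    bce_b = F.binary_cross_entropy(pred_b, answers)

    # Ensemble prediction used as an extra, soft supervision signal.
    ensemble = 0.5 * (pred_a + pred_b)
    distill_a = F.binary_cross_entropy(pred_a, ensemble.detach())
    distill_b = F.binary_cross_entropy(pred_b, ensemble.detach())

    return bce_a + bce_b + distill_a + distill_b
```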
arXiv Detail & Related papers (2022-11-23T11:37:35Z) - KGLM: Integrating Knowledge Graph Structure in Language Models for Link
Prediction [0.0]
We introduce a new entity/relation embedding layer that learns to differentiate distinctive entity and relation types.
We show that further pre-training the language models with this additional embedding layer, using the triples extracted from the knowledge graph, followed by the standard fine-tuning phase, sets new state-of-the-art performance for the link prediction task on the benchmark datasets.
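A minimal way to picture such an embedding layer is to add a learned entity/relation type embedding to the ordinary token embeddings before they enter the pretrained language model. The class below is an assumed illustration; the type scheme and additive combination are not taken from the paper.

```python
import torch
import torch.nn as nn

class TypeAwareInputLayer(nn.Module):
    """Token embeddings plus an entity/relation type embedding
    (illustrative sketch of a KGLM-style input layer)."""

    def __init__(self, vocab_size, hidden_size, num_types=3):
        super().__init__()
        self.token_embed = nn.Embedding(vocab_size, hidden_size)
        # Assumed scheme: type 0 = ordinary text, 1 = entity span, 2 = relation span.
        self.type_embed = nn.Embedding(num_types, hidden_size)

    def forward(self, token_ids, type_ids):
        return self.token_embed(token_ids) + self.type_embed(type_ids)
```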
arXiv Detail & Related papers (2022-11-04T20:38:12Z) - Graph Self-supervised Learning with Accurate Discrepancy Learning [64.69095775258164]
We propose a framework that aims to learn the exact discrepancy between the original and the perturbed graphs, coined Discrepancy-based Self-supervised LeArning (D-SLA).
We validate our method on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction tasks, on which our model largely outperforms relevant baselines.
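The discrepancy idea can be sketched as keeping the original graph's score above its perturbed copies by a margin that grows with how strongly each copy was perturbed. The margin rule and scorer below are assumptions used only to illustrate the objective.

```python
import torch
import torch.nn.functional as F

def discrepancy_loss(score_original, scores_perturbed, edit_distances, alpha=1.0):
    """Keep the original graph's score above each perturbed graph's score by a
    margin proportional to how strongly that copy was perturbed
    (illustrative sketch of discrepancy-based self-supervision).

    score_original: scalar score of the original graph embedding.
    scores_perturbed: (k,) scores of k perturbed versions.
    edit_distances: (k,) number of edits applied to obtain each perturbation.
    """
    margins = alpha * edit_distances.float()
    return F.relu(margins - (score_original - scores_perturbed)).mean()
```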
arXiv Detail & Related papers (2022-02-07T08:04:59Z) - Learning Representations of Entities and Relations [0.0]
This thesis focuses on improving knowledge graph representation with the aim of tackling the link prediction task.
The first contribution is HypER, a convolutional model which simplifies and improves upon the link prediction performance of earlier convolutional models.
The second contribution is TuckER, a relatively straightforward linear model, which, at the time of its introduction, obtained state-of-the-art link prediction performance.
The third contribution is MuRP, the first multi-relational graph representation model embedded in hyperbolic space.
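TuckER's published scoring function is compact enough to show directly: the score of a triple is the Tucker product of a shared core tensor with the subject, relation, and object embeddings. The sketch below follows that formulation; the dimensions are placeholders.

```python
import torch

def tucker_score(core, e_s, w_r, e_o):
    """TuckER link-prediction score: W x_1 e_s x_2 w_r x_3 e_o.

    core: (d_e, d_r, d_e) core tensor W
    e_s, e_o: (d_e,) subject/object entity embeddings
    w_r: (d_r,) relation embedding
    """
    return torch.einsum("ijk,i,j,k->", core, e_s, w_r, e_o)

# Toy usage with placeholder dimensions.
d_e, d_r = 4, 3
score = tucker_score(torch.randn(d_e, d_r, d_e),
                     torch.randn(d_e), torch.randn(d_r), torch.randn(d_e))
```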
arXiv Detail & Related papers (2022-01-31T09:24:43Z) - Structure-Augmented Text Representation Learning for Efficient Knowledge
Graph Completion [53.31911669146451]
Human-curated knowledge graphs provide critical supportive information to various natural language processing tasks.
These graphs are usually incomplete, which motivates their automatic completion.
Graph embedding approaches, e.g., TransE, learn structured knowledge by representing graph elements as dense embeddings.
Textual encoding approaches, e.g., KG-BERT, resort to graph triples' text and triple-level contextualized representations.
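Since TransE is the canonical structure-based baseline here, its scoring rule is worth spelling out: a relation acts as a translation in embedding space, so plausible triples satisfy h + r ≈ t. A minimal sketch:

```python
import torch

def transe_score(h, r, t, p=1):
    """TransE plausibility score: negative L_p distance between h + r and t.
    Higher scores mean the triple (h, r, t) is more plausible."""
    return -torch.norm(h + r - t, p=p, dim=-1)

# Toy usage with random 50-dimensional embeddings.
h, r, t = torch.randn(3, 50)
print(transe_score(h, r, t))
```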
arXiv Detail & Related papers (2020-04-30T13:50:34Z) - Exploiting Structured Knowledge in Text via Graph-Guided Representation
Learning [73.0598186896953]
We present two self-supervised tasks that learn over raw text with guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
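The entity masking scheme can be pictured as masking every subword of a linked entity mention together, rather than sampling tokens independently. The snippet below is an assumed illustration of that idea, not the paper's code; the mask id and span format are placeholders.

```python
import random

MASK_ID = 103  # [MASK] token id in BERT-style vocabularies (assumed here)

def mask_entity_spans(token_ids, entity_spans, mask_prob=0.15):
    """Mask whole entity mentions (given as (start, end) index spans) with
    probability mask_prob, returning corrupted ids and MLM labels."""
    ids = list(token_ids)
    labels = [-100] * len(ids)           # -100 = position ignored by the MLM loss
    for start, end in entity_spans:
        if random.random() < mask_prob:
            for i in range(start, end):
                labels[i] = ids[i]
                ids[i] = MASK_ID
    return ids, labels
```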
arXiv Detail & Related papers (2020-04-29T14:22:42Z) - Generative Adversarial Zero-Shot Relational Learning for Knowledge
Graphs [96.73259297063619]
We consider a novel formulation, zero-shot learning, to sidestep this cumbersome curation.
For newly-added relations, we attempt to learn their semantic features from their text descriptions.
We leverage Generative Adversarial Networks (GANs) to establish the connection between the text and knowledge graph domains.
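A rough picture of the adversarial setup: a generator maps a relation's text-description embedding (plus noise) to a synthetic relation embedding, while a discriminator tries to tell generated embeddings from those of seen relations. The module names and layer sizes below are assumptions for illustration.

```python
import torch
import torch.nn as nn

class RelationGenerator(nn.Module):
    """Generate a relation embedding from a text-description embedding plus noise
    (illustrative sketch of the zero-shot generator)."""
    def __init__(self, text_dim, noise_dim, rel_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(text_dim + noise_dim, rel_dim), nn.ReLU(),
            nn.Linear(rel_dim, rel_dim))

    def forward(self, text_emb, noise):
        return self.net(torch.cat([text_emb, noise], dim=-1))

class RelationDiscriminator(nn.Module):
    """Score whether a relation embedding comes from seen relations or the generator."""
    def __init__(self, rel_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(rel_dim, rel_dim), nn.ReLU(),
            nn.Linear(rel_dim, 1))

    def forward(self, rel_emb):
        return self.net(rel_emb)
```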
arXiv Detail & Related papers (2020-01-08T01:19:08Z)