VEM$^2$L: A Plug-and-play Framework for Fusing Text and Structure
Knowledge on Sparse Knowledge Graph Completion
- URL: http://arxiv.org/abs/2207.01528v1
- Date: Mon, 4 Jul 2022 15:50:21 GMT
- Title: VEM$^2$L: A Plug-and-play Framework for Fusing Text and Structure
Knowledge on Sparse Knowledge Graph Completion
- Authors: Tao He, Tianwen Jiang, Zihao Zheng, Haichao Zhu, Jingrun Zhang, Ming
Liu, Sendong Zhao and Bin Qin
- Abstract summary: We propose VEM2L, a plug-and-play framework over sparse Knowledge Graphs that fuses knowledge extracted from textual and structural information into a unified model.
Specifically, we partition the knowledge acquired by models into two non-overlapping parts.
We also propose a new fusion strategy, justified by the Variational EM algorithm, to fuse the generalization ability of the models.
- Score: 14.537509860565706
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge Graph Completion has been widely studied to complete missing
elements within triples, mainly by modeling graph structural features, but its
performance is sensitive to the sparsity of the graph structure. Relevant texts
such as entity names and descriptions, acting as another expression form of
Knowledge Graphs (KGs), are expected to address this challenge. Several methods
have been proposed to utilize both structural and textual information with two
encoders, but they achieve only limited improvements because they fail to
balance the weights between the two. Moreover, keeping both the structural and
the textual encoder during inference incurs a heavy parameter overhead.
Motivated by Knowledge Distillation, we view knowledge as a mapping from inputs
to output probabilities and propose a plug-and-play framework, VEM2L, over
sparse KGs to fuse knowledge extracted from textual and structural information
into a unified model. Specifically, we partition the knowledge acquired by
models into two non-overlapping parts: one part concerns the fitting capacity
on training triples, which can be fused by encouraging the two encoders to
learn from each other on the training set; the other reflects the
generalization ability on unobserved queries. Correspondingly, we propose a new
fusion strategy, justified by the Variational EM algorithm, to fuse the
generalization ability of the models, during which we also apply graph
densification operations to further alleviate the sparse-graph problem.
Combining these two fusion methods yields the final VEM2L framework. Detailed
theoretical analysis, together with quantitative and qualitative experiments,
demonstrates the effectiveness and efficiency of our proposed framework.
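As a rough illustration of the fitting-capacity fusion described above (this is a hedged sketch, not the authors' released code), the mutual-learning step can be viewed as each encoder minimizing its usual cross-entropy on training triples plus a KL term pulling its predicted entity distribution toward the other encoder's. The function names and the use of NumPy below are assumptions for illustration only:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q, eps=1e-12):
    """KL(p || q), averaged over the batch."""
    return float(np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)))

def mutual_learning_loss(logits_a, logits_b, labels):
    """Loss for encoder A: cross-entropy on the gold entity plus a KL term
    that pulls A's predicted distribution toward encoder B's (B is treated
    as a fixed teacher signal here; in practice both roles alternate)."""
    p_a = softmax(logits_a)
    p_b = softmax(logits_b)
    ce = -float(np.mean(np.log(p_a[np.arange(len(labels)), labels] + 1e-12)))
    return ce + kl_div(p_b, p_a)
```

When the two encoders agree exactly, the KL term vanishes and the loss reduces to plain cross-entropy; the gap between the two distributions is what drives the knowledge exchange.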
Related papers
- iText2KG: Incremental Knowledge Graphs Construction Using Large Language Models [0.7165255458140439]
iText2KG is a method for incremental, topic-independent Knowledge Graph construction without post-processing.
Our method demonstrates superior performance compared to baseline methods across three scenarios.
arXiv Detail & Related papers (2024-09-05T06:49:14Z)
- Contextualization Distillation from Large Language Model for Knowledge Graph Completion [51.126166442122546]
We introduce the Contextualization Distillation strategy, a plug-in-and-play approach compatible with both discriminative and generative KGC frameworks.
Our method begins by instructing large language models to transform compact, structural triplets into context-rich segments.
Comprehensive evaluations across diverse datasets and KGC techniques highlight the efficacy and adaptability of our approach.
arXiv Detail & Related papers (2024-01-28T08:56:49Z)
- Investigating Graph Structure Information for Entity Alignment with Dangling Cases [31.779386064600956]
Entity alignment aims to discover the equivalent entities in different knowledge graphs (KGs)
We propose a novel entity alignment framework called Weakly-optimal Graph Contrastive Learning (WOGCL)
We show that WOGCL outperforms the current state-of-the-art methods with pure structural information in both traditional (relaxed) and dangling settings.
arXiv Detail & Related papers (2023-04-10T17:24:43Z)
- Knowledge Graph Augmented Network Towards Multiview Representation Learning for Aspect-based Sentiment Analysis [96.53859361560505]
We propose a knowledge graph augmented network (KGAN) to incorporate external knowledge with explicitly syntactic and contextual information.
KGAN captures the sentiment feature representations from multiple perspectives, i.e., context-, syntax- and knowledge-based.
Experiments on three popular ABSA benchmarks demonstrate the effectiveness and robustness of our KGAN.
arXiv Detail & Related papers (2022-01-13T08:25:53Z)
- SEEK: Segmented Embedding of Knowledge Graphs [77.5307592941209]
We propose a lightweight modeling framework that can achieve highly competitive relational expressiveness without increasing the model complexity.
Our framework focuses on the design of scoring functions and highlights two critical characteristics: 1) facilitating sufficient feature interactions; 2) preserving both symmetry and antisymmetry properties of relations.
arXiv Detail & Related papers (2020-05-02T15:15:50Z)
- ENT-DESC: Entity Description Generation by Exploring Knowledge Graph [53.03778194567752]
In practice, the input knowledge can be more than is needed, since the output description may cover only the most significant facts.
We introduce a large-scale and challenging dataset to facilitate the study of such a practical scenario in KG-to-text.
We propose a multi-graph structure that is able to represent the original graph information more comprehensively.
arXiv Detail & Related papers (2020-04-30T14:16:19Z)
- Structure-Augmented Text Representation Learning for Efficient Knowledge Graph Completion [53.31911669146451]
Human-curated knowledge graphs provide critical supportive information to various natural language processing tasks.
These graphs are usually incomplete, motivating their automatic completion.
Graph embedding approaches, e.g., TransE, learn structured knowledge by representing graph elements as dense embeddings.
Textual encoding approaches, e.g., KG-BERT, resort to graph triples' text and triple-level contextualized representations.
arXiv Detail & Related papers (2020-04-30T13:50:34Z)
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks learning over raw text with the guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.