Unifying Structure and Language Semantic for Efficient Contrastive
Knowledge Graph Completion with Structured Entity Anchors
- URL: http://arxiv.org/abs/2311.04250v1
- Date: Tue, 7 Nov 2023 11:17:55 GMT
- Title: Unifying Structure and Language Semantic for Efficient Contrastive
Knowledge Graph Completion with Structured Entity Anchors
- Authors: Sang-Hyun Je, Wontae Choi, Kwangjin Oh
- Abstract summary: The goal of knowledge graph completion (KGC) is to predict missing links in a KG using facts that are already known.
We propose a novel method to effectively unify structure information and language semantics without losing the power of inductive reasoning.
- Score: 0.3913403111891026
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of knowledge graph completion (KGC) is to predict missing links in a
KG using facts that are already known. Recently, pre-trained language model (PLM) based
methods that utilize both textual and structural information have emerged, but their
performance lags behind state-of-the-art (SOTA) structure-based methods, or they lose
their inductive inference capability in the process of fusing structure embeddings into
the text encoder. In this paper, we propose a novel method to effectively unify structure
information and language semantics without losing the power of inductive reasoning. We
adopt entity anchors, and these anchors and the textual descriptions of KG elements are
fed together into the PLM-based encoder to learn unified representations. In addition,
the proposed method utilizes additional random negative samples that can be reused within
each mini-batch during contrastive learning to learn generalized entity representations.
We verify the effectiveness of our proposed method through various experiments and
analyses. Experimental results on standard benchmarks widely used in the link prediction
task show that the proposed model outperforms existing SOTA KGC models. In particular,
our method shows the largest performance improvement on FB15K-237, where it is
competitive with the SOTA structure-based KGC methods.
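The two mechanisms described in the abstract, (i) feeding structured entity anchors together with textual descriptions into a PLM-based encoder and (ii) reusing a shared pool of random negatives across each mini-batch during contrastive learning, can be illustrated with a minimal sketch. Everything below is an assumption made for illustration only: the tiny Transformer stands in for the PLM, and the module names, number of anchors per entity, and loss layout are not taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AnchorTextEncoder(nn.Module):
    # Prepends learned "entity anchor" embeddings to the token embeddings of a
    # textual description, so structure-side and text-side signals share one encoder.
    def __init__(self, vocab_size=30522, num_anchors=1000,
                 dim=256, num_layers=2, num_heads=4):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, dim)      # stand-in for the PLM's embeddings
        self.anchor_emb = nn.Embedding(num_anchors, dim)  # structure-derived anchor vectors
        layer = nn.TransformerEncoderLayer(dim, num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, token_ids, anchor_ids):
        # token_ids:  (B, L) ids of the entity/triple description
        # anchor_ids: (B, K) ids of the entity's structured anchors
        x = torch.cat([self.anchor_emb(anchor_ids), self.tok_emb(token_ids)], dim=1)
        h = self.encoder(x)                                # (B, K + L, dim)
        return F.normalize(h[:, 0], dim=-1)                # first position as the unified vector

def contrastive_loss(query, pos_tail, shared_neg, tau=0.05):
    # query:      (B, D) encodings of (head, relation) queries
    # pos_tail:   (B, D) encodings of the gold tail entities
    # shared_neg: (N, D) random negative entities encoded once and reused by every
    #             query in the mini-batch (the "reusable random negatives" above)
    logits = torch.cat([query @ pos_tail.t(), query @ shared_neg.t()], dim=1) / tau
    labels = torch.arange(query.size(0))                   # diagonal of the first block = positive
    return F.cross_entropy(logits, labels)

# Toy usage with random ids; in practice the encoder would be a pre-trained LM.
enc = AnchorTextEncoder()
q = enc(torch.randint(0, 30522, (8, 32)), torch.randint(0, 1000, (8, 4)))
t = enc(torch.randint(0, 30522, (8, 32)), torch.randint(0, 1000, (8, 4)))
negs = enc(torch.randint(0, 30522, (16, 32)), torch.randint(0, 1000, (16, 4)))
loss = contrastive_loss(q, t, negs)

Because the shared random negatives are encoded once per step and compared against every query in the batch, the effective number of negatives grows without re-encoding them per example, which is the efficiency argument implied by the abstract.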
Related papers
- Pointer-Guided Pre-Training: Infusing Large Language Models with Paragraph-Level Contextual Awareness [3.2925222641796554]
"pointer-guided segment ordering" (SO) is a novel pre-training technique aimed at enhancing the contextual understanding of paragraph-level text representations.
Our experiments show that pointer-guided pre-training significantly enhances the model's ability to understand complex document structures.
arXiv Detail & Related papers (2024-06-06T15:17:51Z)
- Contextualization Distillation from Large Language Model for Knowledge Graph Completion [51.126166442122546]
We introduce the Contextualization Distillation strategy, a plug-in-and-play approach compatible with both discriminative and generative KGC frameworks.
Our method begins by instructing large language models to transform compact, structural triplets into context-rich segments.
Comprehensive evaluations across diverse datasets and KGC techniques highlight the efficacy and adaptability of our approach.
arXiv Detail & Related papers (2024-01-28T08:56:49Z)
- Bidirectional Trained Tree-Structured Decoder for Handwritten Mathematical Expression Recognition [51.66383337087724]
The Handwritten Mathematical Expression Recognition (HMER) task is a critical branch in the field of OCR.
Recent studies have demonstrated that incorporating bidirectional context information significantly improves the performance of HMER models.
We propose the Mirror-Flipped Symbol Layout Tree (MF-SLT) and Bidirectional Asynchronous Training (BAT) structure.
arXiv Detail & Related papers (2023-12-31T09:24:21Z)
- MoCoSA: Momentum Contrast for Knowledge Graph Completion with Structure-Augmented Pre-trained Language Models [11.57782182864771]
We propose Momentum Contrast for knowledge graph completion with Structure-Augmented pre-trained language models (MoCoSA)
Our approach achieves state-of-the-art performance in terms of mean reciprocal rank (MRR), with improvements of 2.5% on WN18RR and 21% on OpenBG500.
arXiv Detail & Related papers (2023-08-16T08:09:10Z)
- Scalable Learning of Latent Language Structure With Logical Offline Cycle Consistency [71.42261918225773]
Conceptually, LOCCO can be viewed as a form of self-learning where the semantic parser being trained is used to generate annotations for unlabeled text.
As an added bonus, the annotations produced by LOCCO can be trivially repurposed to train a neural text generation model.
arXiv Detail & Related papers (2023-05-31T16:47:20Z)
- Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-modal Structured Representations [70.41385310930846]
We present an end-to-end framework Structure-CLIP to enhance multi-modal structured representations.
We use scene graphs to guide the construction of semantic negative examples, which results in an increased emphasis on learning structured representations.
A Knowledge-Enhanced Encoder (KEE) is proposed to leverage SGK as input to further enhance structured representations.
arXiv Detail & Related papers (2023-05-06T03:57:05Z)
- Schema-aware Reference as Prompt Improves Data-Efficient Knowledge Graph Construction [57.854498238624366]
We propose a retrieval-augmented approach, which retrieves schema-aware Reference As Prompt (RAP) for data-efficient knowledge graph construction.
RAP can dynamically leverage schema and knowledge inherited from human-annotated and weak-supervised data as a prompt for each sample.
arXiv Detail & Related papers (2022-10-19T16:40:28Z)
- Language Models as Knowledge Embeddings [26.384327693266837]
We propose LMKE, which adopts Language Models to derive Knowledge Embeddings.
We formulate description-based KE learning with a contrastive learning framework to improve efficiency in training and evaluation.
arXiv Detail & Related papers (2022-06-25T10:39:12Z)
- Dependency Induction Through the Lens of Visual Perception [81.91502968815746]
We propose an unsupervised grammar induction model that leverages word concreteness and a structural vision-based heuristic to jointly learn constituency-structure and dependency-structure grammars.
Our experiments show that the proposed extension outperforms the current state-of-the-art visually grounded models in constituency parsing even with a smaller grammar size.
arXiv Detail & Related papers (2021-09-20T18:40:37Z)
- KELM: Knowledge Enhanced Pre-Trained Language Representations with Message Passing on Hierarchical Relational Graphs [26.557447199727758]
We propose a novel knowledge-aware language model framework based on fine-tuning process.
Our model can efficiently incorporate world knowledge from KGs into existing language models such as BERT.
arXiv Detail & Related papers (2021-09-09T12:39:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.