Language Models as Knowledge Embeddings
- URL: http://arxiv.org/abs/2206.12617v3
- Date: Thu, 29 Jun 2023 07:11:19 GMT
- Title: Language Models as Knowledge Embeddings
- Authors: Xintao Wang, Qianyu He, Jiaqing Liang and Yanghua Xiao
- Abstract summary: We propose LMKE, which adopts Language Models to derive Knowledge Embeddings.
We formulate description-based KE learning with a contrastive learning framework to improve efficiency in training and evaluation.
- Score: 26.384327693266837
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge embeddings (KE) represent a knowledge graph (KG) by embedding
entities and relations into continuous vector spaces. Existing methods are
mainly structure-based or description-based. Structure-based methods learn
representations that preserve the inherent structure of KGs, but they cannot
adequately represent the abundant long-tail entities of real-world KGs, which
have limited structural information. Description-based methods leverage textual
information and language models. Prior approaches in this direction barely
outperform structure-based ones, and suffer from problems such as expensive
negative sampling and restrictive requirements on entity descriptions. In this
paper, we propose LMKE, which
adopts Language Models to derive Knowledge Embeddings, aiming at both enriching
representations of long-tail entities and solving problems of prior
description-based methods. We formulate description-based KE learning with a
contrastive learning framework to improve efficiency in training and
evaluation. Experimental results show that LMKE achieves state-of-the-art
performance on KE benchmarks of link prediction and triple classification,
especially for long-tail entities.
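To make the contrastive formulation concrete, below is a minimal sketch of description-based KE learning with in-batch contrastive negatives (InfoNCE-style). It is an illustration, not the authors' released implementation: the encoder choice, the [CLS] pooling, the temperature, and the toy descriptions are all assumptions.

```python
# Minimal sketch: score (head, relation) queries against candidate tail
# entities encoded from their textual descriptions, using the other
# entities in the batch as negatives. This avoids sampling and encoding
# extra negatives, which is what makes training and evaluation cheaper.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed encoder
encoder = AutoModel.from_pretrained("bert-base-uncased")

def encode(texts):
    """Encode textual descriptions into unit vectors ([CLS] pooling)."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state[:, 0]  # [CLS] token
    return F.normalize(hidden, dim=-1)

def in_batch_contrastive_loss(queries, keys, temperature=0.05):
    """InfoNCE with in-batch negatives: the i-th key is the positive for
    the i-th query; every other key in the batch serves as a negative."""
    logits = queries @ keys.t() / temperature   # (B, B) similarity matrix
    labels = torch.arange(queries.size(0))      # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

# Toy batch: (head, relation) descriptions vs. tail-entity descriptions.
queries = encode(["Marie Curie [SEP] occupation", "Paris [SEP] capital of"])
keys = encode(["physicist: a scientist who studies matter and energy",
               "France: a country in western Europe"])
loss = in_batch_contrastive_loss(queries, keys)
loss.backward()
```

Because every entity in a mini-batch doubles as a negative for the others, a batch of size B yields B^2 training signals from only 2B encoder passes, which is one way a contrastive framing can reduce the cost of negative sampling.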
Related papers
- SINKT: A Structure-Aware Inductive Knowledge Tracing Model with Large Language Model [64.92472567841105]
Knowledge Tracing (KT) aims to determine whether students will respond correctly to the next question.
We propose a Structure-aware Inductive Knowledge Tracing model with a large language model (dubbed SINKT).
SINKT predicts the student's response to the target question by modeling the interaction between the student's knowledge state and the question representation.
arXiv Detail & Related papers (2024-07-01T12:44:52Z)
- Multi-perspective Improvement of Knowledge Graph Completion with Large Language Models [95.31941227776711]
We propose MPIKGC to compensate for the deficiency of contextualized knowledge and to improve KGC by querying large language models (LLMs).
We conduct an extensive evaluation of our framework on four description-based KGC models and four datasets, for both link prediction and triplet classification.
arXiv Detail & Related papers (2024-03-04T12:16:15Z)
- KICGPT: Large Language Model with Knowledge in Context for Knowledge Graph Completion [27.405080941584533]
We propose KICGPT, a framework that integrates a large language model and a triple-based KGC retriever.
It alleviates the long-tail problem without incurring additional training overhead.
Empirical results on benchmark datasets demonstrate the effectiveness of KICGPT with smaller training overhead and no finetuning.
arXiv Detail & Related papers (2024-02-04T08:01:07Z)
- Contextualization Distillation from Large Language Model for Knowledge Graph Completion [51.126166442122546]
We introduce the Contextualization Distillation strategy, a plug-and-play approach compatible with both discriminative and generative KGC frameworks.
Our method begins by instructing large language models to transform compact, structural triplets into context-rich segments (see the illustrative sketch after this list).
Comprehensive evaluations across diverse datasets and KGC techniques highlight the efficacy and adaptability of our approach.
arXiv Detail & Related papers (2024-01-28T08:56:49Z)
- Knowledge Graphs and Pre-trained Language Models enhanced Representation Learning for Conversational Recommender Systems [58.561904356651276]
We introduce the Knowledge-Enhanced Entity Representation Learning (KERL) framework, which uses a knowledge graph and a pre-trained language model to improve the semantic understanding of entities in conversational recommender systems.
KERL achieves state-of-the-art results in both recommendation and response generation tasks.
arXiv Detail & Related papers (2023-12-18T06:41:23Z)
- Unifying Structure and Language Semantic for Efficient Contrastive Knowledge Graph Completion with Structured Entity Anchors [0.3913403111891026]
The goal of knowledge graph completion (KGC) is to predict missing links in a KG using the facts that are already known.
We propose a novel method to effectively unify structure information and language semantics without losing the power of inductive reasoning.
arXiv Detail & Related papers (2023-11-07T11:17:55Z)
- Making Large Language Models Perform Better in Knowledge Graph Completion [42.175953129260236]
Large language model (LLM) based knowledge graph completion (KGC) aims to predict missing triples in KGs with LLMs.
In this paper, we explore methods to incorporate structural information into the LLMs, with the overarching goal of facilitating structure-aware reasoning.
arXiv Detail & Related papers (2023-10-10T14:47:09Z)
- MoCoSA: Momentum Contrast for Knowledge Graph Completion with Structure-Augmented Pre-trained Language Models [11.57782182864771]
We propose Momentum Contrast for knowledge graph completion with Structure-Augmented pre-trained language models (MoCoSA).
Our approach achieves state-of-the-art performance in terms of mean reciprocal rank (MRR), with improvements of 2.5% on WN18RR and 21% on OpenBG500.
arXiv Detail & Related papers (2023-08-16T08:09:10Z)
- Schema-aware Reference as Prompt Improves Data-Efficient Knowledge Graph Construction [57.854498238624366]
We propose a retrieval-augmented approach, which retrieves schema-aware Reference As Prompt (RAP) for data-efficient knowledge graph construction.
RAP can dynamically leverage schema and knowledge inherited from human-annotated and weakly supervised data as a prompt for each sample.
arXiv Detail & Related papers (2022-10-19T16:40:28Z)
- Joint Language Semantic and Structure Embedding for Knowledge Graph Completion [66.15933600765835]
We propose to jointly embed the semantics in the natural language description of the knowledge triplets with their structure information.
Our method embeds knowledge graphs for the completion task via fine-tuning pre-trained language models.
Our experiments on a variety of knowledge graph benchmarks have demonstrated the state-of-the-art performance of our method.
arXiv Detail & Related papers (2022-09-19T02:41:02Z)
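As flagged in the Contextualization Distillation entry above, the step of turning compact triplets into context-rich text is easy to picture with a small sketch. Everything here is illustrative: the prompt wording and the `generate` callable are hypothetical stand-ins, not taken from that paper's code.

```python
# Illustrative sketch: build an LLM instruction that converts a compact
# KG triplet into a context-rich descriptive segment. The resulting text
# can then serve as auxiliary training material for a downstream
# (discriminative or generative) KGC model.

def contextualize(head: str, relation: str, tail: str) -> str:
    """Turn a structural triplet into an instruction for an LLM."""
    return (
        f"Given the knowledge graph triplet ({head}, {relation}, {tail}), "
        "write a short paragraph that states this fact in natural language "
        "and adds relevant background context."
    )

prompt = contextualize("Marie Curie", "award_received", "Nobel Prize in Physics")
# segment = generate(prompt)  # `generate` = any LLM text-generation call (hypothetical)
```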
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.