Exploring Large Language Models for Knowledge Graph Completion
- URL: http://arxiv.org/abs/2308.13916v4
- Date: Sun, 18 Feb 2024 07:35:34 GMT
- Title: Exploring Large Language Models for Knowledge Graph Completion
- Authors: Liang Yao, Jiazhen Peng, Chengsheng Mao, Yuan Luo
- Abstract summary: We consider triples in knowledge graphs as text sequences and introduce an innovative framework called Knowledge Graph LLM.
Our technique employs entity and relation descriptions of a triple as prompts and utilizes the response for predictions.
Experiments on various benchmark knowledge graphs demonstrate that our method attains state-of-the-art performance in tasks such as triple classification and relation prediction.
- Score: 17.139056629060626
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge graphs play a vital role in numerous artificial intelligence tasks,
yet they frequently face the issue of incompleteness. In this study, we explore
utilizing Large Language Models (LLMs) for knowledge graph completion. We
consider triples in knowledge graphs as text sequences and introduce an
innovative framework called Knowledge Graph LLM (KG-LLM) to model these
triples. Our technique employs entity and relation descriptions of a triple as
prompts and utilizes the response for predictions. Experiments on various
benchmark knowledge graphs demonstrate that our method attains state-of-the-art
performance in tasks such as triple classification and relation prediction. We
also find that fine-tuning relatively smaller models (e.g., LLaMA-7B,
ChatGLM-6B) outperforms recent larger models such as ChatGPT and GPT-4.
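To make the prompting scheme concrete, here is a minimal sketch of how a triple could be serialized into a classification prompt; the template wording and helper names are illustrative assumptions, not the paper's exact format:
```python
# Sketch of the KG-LLM prompting idea described above: a triple's entity and
# relation descriptions are serialized into a text sequence, and the LLM's
# yes/no response becomes the triple-classification prediction.
# The template and helper names below are illustrative assumptions.

def build_triple_prompt(head: str, relation: str, tail: str,
                        head_desc: str, tail_desc: str) -> str:
    """Serialize one knowledge-graph triple as a classification prompt."""
    return (
        f"Head entity: {head} ({head_desc}). "
        f"Tail entity: {tail} ({tail_desc}). "
        f"Is the relation '{relation}' true between them? Answer yes or no."
    )

def parse_prediction(response: str) -> bool:
    """Map the model's free-text answer to a binary triple label."""
    return response.strip().lower().startswith("yes")

prompt = build_triple_prompt(
    head="Barack Obama", relation="place of birth", tail="Honolulu",
    head_desc="44th President of the United States",
    tail_desc="capital of the U.S. state of Hawaii",
)
print(prompt)
print(parse_prediction("Yes, this triple is plausible."))  # -> True
```
In the paper's setting the response would come from a fine-tuned model such as LLaMA-7B or ChatGLM-6B rather than the hard-coded string above.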
Related papers
- Assessing LLMs Suitability for Knowledge Graph Completion [0.0]
Large Language Models (LLMs) can be used to solve tasks related to Knowledge Graphs.
However, LLMs are known to hallucinate answers and to output results in a non-deterministic manner.
arXiv Detail & Related papers (2024-05-27T15:04:50Z)
- Relations Prediction for Knowledge Graph Completion using Large Language Models [0.0]
We make use of the knowledge graph node names to fine-tune a large language model for the relation prediction task.
Our experiments show that we achieve new scores on a widely used knowledge graph benchmark.
arXiv Detail & Related papers (2024-05-04T19:04:51Z)
- Parameter-Efficient Tuning Large Language Models for Graph Representation Learning [62.26278815157628]
We introduce Graph-aware Parameter-Efficient Fine-Tuning (GPEFT), a novel approach for efficient graph representation learning.
We use a graph neural network (GNN) to encode structural information from neighboring nodes into a graph prompt.
We validate our approach through comprehensive experiments conducted on 8 different text-rich graphs, observing an average improvement of 2% in hit@1 and Mean Reciprocal Rank (MRR) in link prediction evaluations.
arXiv Detail & Related papers (2024-04-28T18:36:59Z)
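As a rough illustration of the graph-prompt idea, the sketch below uses a trivial one-layer mean aggregation in place of a full GNN to compress neighbor features into a few soft-prompt vectors prepended to the LLM's input embeddings; all dimensions and names are assumptions, not GPEFT's actual architecture:
```python
# Sketch of a graph prompt: a degenerate one-layer "GNN" (mean aggregation
# plus a linear projection) maps neighbor features to soft-prompt vectors
# that are prepended to the LLM's token embeddings. All sizes are assumed.
import torch
import torch.nn as nn

class GraphPromptEncoder(nn.Module):
    def __init__(self, node_dim: int, llm_dim: int, num_prompts: int = 4):
        super().__init__()
        self.proj = nn.Linear(node_dim, llm_dim * num_prompts)
        self.num_prompts, self.llm_dim = num_prompts, llm_dim

    def forward(self, center: torch.Tensor, neighbors: torch.Tensor) -> torch.Tensor:
        # Mean-aggregate the center node's features with its neighbors'.
        agg = torch.cat([center.unsqueeze(0), neighbors], dim=0).mean(dim=0)
        return self.proj(agg).view(self.num_prompts, self.llm_dim)

node_dim, llm_dim = 128, 768
enc = GraphPromptEncoder(node_dim, llm_dim)
center = torch.randn(node_dim)           # target node features
neighbors = torch.randn(5, node_dim)     # 5 sampled neighbor nodes
graph_prompt = enc(center, neighbors)    # (4, 768) soft-prompt vectors

token_embeds = torch.randn(32, llm_dim)  # LLM input token embeddings
llm_input = torch.cat([graph_prompt, token_embeds], dim=0)  # prompt-prefixed
print(llm_input.shape)  # torch.Size([36, 768])
```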
- Narrating Causal Graphs with Large Language Models [1.437446768735628]
This work explores the capability of large pretrained language models to generate text from causal graphs.
The causal reasoning encoded in these graphs can support applications as diverse as healthcare or marketing.
Results suggest that users of generative AI can deploy future applications faster, since similar performance is obtained when training a model with only a few examples.
arXiv Detail & Related papers (2024-03-11T19:19:59Z)
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) on a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of the fine-tuned LM.
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
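A minimal sketch of SimTeG's second step as summarized above: node embeddings are taken from the language model's last hidden states, mean-pooled over non-padding tokens. The off-the-shelf model below is a stand-in for the PEFT-fine-tuned LM, and the pooling choice is an assumption:
```python
# Sketch: extract node embeddings from an LM's last hidden states.
# The model name is a stand-in; in SimTeG the LM would first be
# parameter-efficiently fine-tuned on the downstream task.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "sentence-transformers/all-MiniLM-L6-v2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

node_texts = ["Paper about graph neural networks.",
              "Paper about language model fine-tuning."]
batch = tokenizer(node_texts, padding=True, truncation=True,
                  return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state            # (batch, seq, dim)

# Mean-pool over non-padding tokens to get one embedding per node text.
mask = batch["attention_mask"].unsqueeze(-1)             # (batch, seq, 1)
node_emb = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # (batch, dim)
print(node_emb.shape)  # these embeddings then feed a downstream GNN
```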
- A Survey of Knowledge Graph Reasoning on Graph Types: Static, Dynamic, and Multimodal [57.8455911689554]
Knowledge graph reasoning (KGR) aims to deduce new facts from existing facts based on mined logic rules underlying knowledge graphs (KGs).
It has been proven to significantly benefit the usage of KGs in many AI applications, such as question answering and recommendation systems.
arXiv Detail & Related papers (2022-12-12T08:40:04Z)
- Tucker decomposition-based Temporal Knowledge Graph Completion [35.56360622521721]
We build a new tensor decomposition model for temporal knowledge graph completion, inspired by the Tucker decomposition of an order-4 tensor.
We demonstrate that the proposed model is fully expressive and report state-of-the-art results for several public benchmarks.
arXiv Detail & Related papers (2020-11-16T07:05:52Z)
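For context, a generic order-4 Tucker scoring function for a temporal fact (s, r, o, τ) has the following form; the notation is a standard illustration, not necessarily the paper's exact parameterization:
```latex
% Generic order-4 Tucker scoring of a temporal fact (s, r, o, \tau);
% \mathcal{W} is a learned core tensor and \times_n the mode-n product.
\phi(s, r, o, \tau)
  = \mathcal{W} \times_1 \mathbf{e}_s \times_2 \mathbf{w}_r
                \times_3 \mathbf{e}_o \times_4 \mathbf{t}_\tau
  = \sum_{i,j,k,l} \mathcal{W}_{ijkl}\,
      (\mathbf{e}_s)_i\,(\mathbf{w}_r)_j\,(\mathbf{e}_o)_k\,(\mathbf{t}_\tau)_l
```
Here e_s and e_o are entity embeddings, w_r a relation embedding, and t_τ a timestamp embedding; a model is called fully expressive when some assignment of these parameters can reproduce any given set of true and false temporal facts.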
- Investigating Pretrained Language Models for Graph-to-Text Generation [55.55151069694146]
Graph-to-text generation aims to generate fluent texts from graph-based data.
We present a study across three graph domains: meaning representations, Wikipedia knowledge graphs (KGs) and scientific KGs.
We show that the PLMs BART and T5 achieve new state-of-the-art results and that task-adaptive pretraining strategies improve their performance even further.
arXiv Detail & Related papers (2020-07-16T16:05:34Z)
- ENT-DESC: Entity Description Generation by Exploring Knowledge Graph [53.03778194567752]
In practice, the input knowledge can be more than is needed, since the output description may cover only the most significant facts.
We introduce a large-scale and challenging dataset to facilitate the study of such a practical scenario in KG-to-text.
We propose a multi-graph structure that is able to represent the original graph information more comprehensively.
arXiv Detail & Related papers (2020-04-30T14:16:19Z)
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks that learn over raw text with guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
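As a toy illustration of an entity masking scheme like the one summarized above, the sketch below masks whole KG-linked entity mentions rather than random subword tokens, so the model must recover entities from context; the entity list and masking rate are assumptions:
```python
# Toy entity masking: replace a random subset of known KG-linked entity
# mentions with the mask token, as a pre-training signal. The entity list
# and masking rate are illustrative assumptions, not the paper's setup.
import random

def mask_entities(text: str, entities: list[str],
                  mask_token: str = "[MASK]", rate: float = 0.5) -> str:
    """Replace a random subset of entity mentions with the mask token."""
    masked = text
    for ent in entities:
        if ent in masked and random.random() < rate:
            masked = masked.replace(ent, mask_token)
    return masked

text = "Barack Obama was born in Honolulu, the capital of Hawaii."
print(mask_entities(text, ["Barack Obama", "Honolulu", "Hawaii"]))
# e.g. "[MASK] was born in Honolulu, the capital of [MASK]."
```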