Unlock the Power of Frozen LLMs in Knowledge Graph Completion
- URL: http://arxiv.org/abs/2408.06787v2
- Date: Wed, 18 Sep 2024 07:12:28 GMT
- Title: Unlock the Power of Frozen LLMs in Knowledge Graph Completion
- Authors: Bo Xue, Yi Xu, Yunchong Song, Yiming Pang, Yuyang Ren, Jiaxin Ding, Luoyi Fu, Xinbing Wang
- Abstract summary: Large Language Models (LLMs) learn extensive knowledge from large corpora with powerful context modeling.
We capture the context-aware hidden states of knowledge triples by employing prompts to stimulate the intermediate layers of LLMs.
We then train a data-efficient classifier on these hidden states to harness the inherent capabilities of frozen LLMs in KGC.
- Score: 45.80451763142032
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Traditional knowledge graph completion (KGC) methods rely solely on structural information, struggling with the inherent sparsity of knowledge graphs (KGs). Large Language Models (LLMs) learn extensive knowledge from large corpora with powerful context modeling, making them promising for mitigating the limitations of previous methods. Directly fine-tuning LLMs offers great capability but comes at the cost of huge time and memory consumption, while utilizing frozen LLMs yields suboptimal results. In this work, we aim to leverage LLMs for KGC effectively and efficiently. We capture the context-aware hidden states of knowledge triples by employing prompts to stimulate the intermediate layers of LLMs. We then train a data-efficient classifier on these hidden states to harness the inherent capabilities of frozen LLMs in KGC. Additionally, to reduce ambiguity and enrich knowledge representation, we generate detailed entity descriptions through subgraph sampling on KGs. Extensive experiments on standard benchmarks demonstrate the efficiency and effectiveness of our approach. We outperform traditional KGC methods across most datasets and, notably, achieve classification performance comparable to fine-tuned LLMs while enhancing GPU memory efficiency by $188\times$ and accelerating training and inference by $13.48\times$.
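The approach described in the abstract lends itself to a short sketch. What follows is a minimal, hypothetical illustration, not the authors' implementation: a frozen HuggingFace causal LM is prompted with a verbalized triple, one hidden state is read from an assumed intermediate layer, and a lightweight scikit-learn classifier is fit on those states. The model name, layer index, prompt template, and last-token pooling are all assumptions.

```python
# Sketch of the frozen-LLM KGC pipeline from the abstract (not the authors' code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

MODEL = "gpt2"  # stand-in model; the paper does not pin this choice here
LAYER = 6       # hypothetical intermediate layer to probe

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, output_hidden_states=True)
model.eval()  # frozen: no fine-tuning, no gradients through the LLM

@torch.no_grad()
def triple_embedding(head: str, relation: str, tail: str) -> torch.Tensor:
    """Prompt the frozen LM with a verbalized triple; return one hidden state."""
    prompt = f"Is this fact true? {head} {relation} {tail}."  # assumed template
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model(**inputs)
    # hidden_states[LAYER] has shape (batch, seq_len, dim); pool the last token.
    return outputs.hidden_states[LAYER][0, -1]

# Toy labeled triples (1 = valid, 0 = corrupted); real data would come from a KG
# benchmark with negative sampling.
triples = [
    ("Paris", "capital_of", "France", 1),
    ("Paris", "capital_of", "Japan", 0),
    ("Tokyo", "capital_of", "Japan", 1),
    ("Tokyo", "capital_of", "France", 0),
]
X = torch.stack([triple_embedding(h, r, t) for h, r, t, _ in triples]).numpy()
y = [label for *_, label in triples]

clf = LogisticRegression(max_iter=1000).fit(X, y)  # the data-efficient classifier
print(clf.predict(X))
```

Because no gradients ever flow through the LLM, only the small classifier is trained, which is a plausible source of the memory and speed gains the abstract reports.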
Related papers
- Traditional Methods Outperform Generative LLMs at Forecasting Credit Ratings [17.109522466982476]
Large Language Models (LLMs) have been shown to perform well for many downstream tasks.
This paper investigates how well LLMs perform in the task of forecasting corporate credit ratings.
arXiv Detail & Related papers (2024-07-24T20:30:55Z) - All Against Some: Efficient Integration of Large Language Models for Message Passing in Graph Neural Networks [51.19110891434727]
Large Language Models (LLMs) with pretrained knowledge and powerful semantic comprehension abilities have recently shown a remarkable ability to benefit applications using vision and text data.
E-LLaGNN is a framework with an on-demand LLM service that enriches the message-passing procedure of graph learning by enhancing a limited fraction of nodes from the graph.
arXiv Detail & Related papers (2024-07-20T22:09:42Z) - Large Language Models as Reliable Knowledge Bases? [60.25969380388974]
Large Language Models (LLMs) can be viewed as potential knowledge bases (KBs).
This study defines criteria that a reliable LLM-as-KB should meet, focusing on factuality and consistency.
Strategies like in-context learning (ICL) and fine-tuning are unsuccessful at making LLMs better KBs.
arXiv Detail & Related papers (2024-07-18T15:20:18Z) - On the Role of Long-tail Knowledge in Retrieval Augmented Large Language Models [33.08049246893537]
Retrieval-augmented generation (RAG) exhibits outstanding performance in enhancing the knowledge capabilities of large language models (LLMs).
We propose a simple but effective long-tail knowledge detection method for LLMs.
Our method achieves over 4x speedup in average inference time and consistent performance improvement in downstream tasks.
arXiv Detail & Related papers (2024-06-24T07:17:59Z) - Knowledge Graph Tuning: Real-time Large Language Model Personalization based on Human Feedback [5.778012023739487]
We propose Knowledge Graph Tuning (KGT) to personalize large language models (LLMs).
KGT extracts personalized factual knowledge triples from users' queries and feedback and optimizes KGs without modifying the LLM parameters.
Experiments with state-of-the-art LLMs, including GPT-2, Llama2, and Llama3, show that KGT significantly improves personalization performance while reducing latency and GPU memory costs.
arXiv Detail & Related papers (2024-05-30T04:57:03Z) - Prompting Large Language Models with Knowledge Graphs for Question Answering Involving Long-tail Facts [50.06633829833144]
Large Language Models (LLMs) are effective in performing various NLP tasks, but struggle to handle tasks that require extensive, real-world knowledge.
We propose a benchmark that requires knowledge of long-tail facts for answering the involved questions.
Our experiments show that LLMs alone struggle with answering these questions, especially when the long-tail level is high or rich knowledge is required.
arXiv Detail & Related papers (2024-05-10T15:10:20Z) - LLM Inference Unveiled: Survey and Roofline Model Insights [62.92811060490876]
Large Language Model (LLM) inference is rapidly evolving, presenting a unique blend of opportunities and challenges.
Our survey stands out from traditional literature reviews by not only summarizing the current state of research but also by introducing a framework based on the roofline model.
This framework identifies the bottlenecks when deploying LLMs on hardware devices and provides a clear understanding of practical problems; a minimal roofline calculation is sketched after this list.
arXiv Detail & Related papers (2024-02-26T07:33:05Z) - Large Language Models Can Better Understand Knowledge Graphs Than We Thought [13.336418752729987]
Embedding knowledge graphs (KGs) into model parameters becomes increasingly costly.
Current prompting methods often rely on a trial-and-error approach.
We show that unordered linearized triples are more effective for LLMs' understanding of KGs than fluent natural-language text (see the linearization sketch after this list).
arXiv Detail & Related papers (2024-02-18T10:44:03Z) - Chain of History: Learning and Forecasting with LLMs for Temporal Knowledge Graph Completion [24.545917737620197]
Temporal Knowledge Graph Completion (TKGC) is a complex task involving the prediction of missing event links at future timestamps.
This paper aims to provide a comprehensive perspective on harnessing the advantages of Large Language Models for reasoning in temporal knowledge graphs.
arXiv Detail & Related papers (2024-01-11T17:42:47Z) - Supervised Knowledge Makes Large Language Models Better In-context Learners [94.89301696512776]
Large Language Models (LLMs) exhibit emergent in-context learning abilities through prompt engineering.
The challenge of improving the generalizability and factuality of LLMs in natural language understanding and question answering remains under-explored.
We propose a framework that enhances the reliability of LLMs as it: 1) generalizes to out-of-distribution data, 2) elucidates how LLMs benefit from discriminative models, and 3) minimizes hallucinations in generative tasks.
arXiv Detail & Related papers (2023-12-26T07:24:46Z)
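Two of the entries above describe techniques concrete enough to sketch. For "LLM Inference Unveiled: Survey and Roofline Model Insights", the roofline bound referenced there takes one line to compute: attainable throughput is min(peak compute, arithmetic intensity × memory bandwidth). The hardware numbers below are illustrative assumptions, not figures from the survey.

```python
# Worked roofline example (illustrative numbers, not from the survey).
PEAK_FLOPS = 312e12  # assumed accelerator peak, FLOP/s (A100-class BF16)
PEAK_BW = 2.0e12     # assumed memory bandwidth, bytes/s

def attainable_flops(flops: float, bytes_moved: float) -> float:
    """Roofline bound for a kernel doing `flops` work over `bytes_moved` traffic."""
    intensity = flops / bytes_moved  # FLOP per byte
    return min(PEAK_FLOPS, intensity * PEAK_BW)

# Single-token decode in a 7B, 16-bit model: every weight is read once per
# token, with roughly one multiply-add (2 FLOPs) per parameter.
weights_bytes = 7e9 * 2
flops_per_token = 2 * 7e9
bound = attainable_flops(flops_per_token, weights_bytes)
print(f"intensity = {flops_per_token / weights_bytes:.2f} FLOP/B")
print(f"bound = {bound / 1e12:.1f} TFLOP/s (vs {PEAK_FLOPS / 1e12:.0f} peak)")
```

The bound lands at 2 TFLOP/s against a 312 TFLOP/s peak, which is the roofline way of saying that decode is memory-bound.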
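For "Large Language Models Can Better Understand Knowledge Graphs Than We Thought", here is a hypothetical sketch of the unordered linearized-triples prompting style; the delimiters, template, and shuffling are assumptions, not the paper's exact format.

```python
# Linearize KG triples as flat, order-free lines instead of fluent sentences.
import random

def linearize(triples: list[tuple[str, str, str]], shuffle: bool = True) -> str:
    """Render each (head, relation, tail) triple on its own line."""
    items = list(triples)
    if shuffle:
        random.shuffle(items)  # "unordered": the LLM need not see a fixed order
    return "\n".join(f"({h}, {r}, {t})" for h, r, t in items)

triples = [
    ("Marie Curie", "field", "physics"),
    ("Marie Curie", "award", "Nobel Prize in Physics"),
    ("Nobel Prize in Physics", "first_awarded", "1901"),
]
prompt = f"Knowledge:\n{linearize(triples)}\n\nQuestion: What prize did Marie Curie win?"
print(prompt)
```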