Combining Knowledge Graphs and Large Language Models
- URL: http://arxiv.org/abs/2407.06564v1
- Date: Tue, 9 Jul 2024 05:42:53 GMT
- Title: Combining Knowledge Graphs and Large Language Models
- Authors: Amanda Kau, Xuzeng He, Aishwarya Nambissan, Aland Astudillo, Hui Yin, Amir Aryani
- Abstract summary: Large language models (LLMs) show astonishing results in language understanding and generation.
They still show some disadvantages, such as hallucinations and lack of domain-specific knowledge.
These issues can be effectively mitigated by incorporating knowledge graphs (KGs).
This work collected 28 papers outlining methods for KG-powered LLMs, LLM-based KGs, and LLM-KG hybrid approaches.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, Natural Language Processing (NLP) has played a significant role in various Artificial Intelligence (AI) applications such as chatbots, text generation, and language translation. The emergence of large language models (LLMs) has greatly improved the performance of these applications, showing astonishing results in language understanding and generation. However, they still show some disadvantages, such as hallucinations and lack of domain-specific knowledge, that affect their performance in real-world tasks. These issues can be effectively mitigated by incorporating knowledge graphs (KGs), which organise information in structured formats that capture relationships between entities in a versatile and interpretable fashion. Likewise, the construction and validation of KGs present challenges that LLMs can help resolve. The complementary relationship between LLMs and KGs has led to a trend that combines these technologies to achieve trustworthy results. This work collected 28 papers outlining methods for KG-powered LLMs, LLM-based KGs, and LLM-KG hybrid approaches. We systematically analysed and compared these approaches to provide a comprehensive overview highlighting key trends, innovative techniques, and common challenges. This synthesis will benefit researchers new to the field and those seeking to deepen their understanding of how KGs and LLMs can be effectively combined to enhance the capabilities of AI applications.
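As an illustrative sketch (not taken from the survey), one common "KG-powered LLM" pattern the abstract alludes to is retrieval-based grounding: facts relevant to a query are pulled from a knowledge graph stored as (subject, predicate, object) triples and prepended to the prompt, so the model's answer is anchored in structured, interpretable knowledge. The toy graph, entity names, and prompt layout below are assumptions chosen for demonstration only.

```python
# A toy knowledge graph as a list of (subject, predicate, object) triples.
KG = [
    ("Marie Curie", "born_in", "Warsaw"),
    ("Marie Curie", "field", "physics"),
    ("Warsaw", "capital_of", "Poland"),
]

def retrieve_facts(kg, entity):
    """Return all triples that mention the given entity as subject or object."""
    return [t for t in kg if entity in (t[0], t[2])]

def build_prompt(kg, entity, question):
    """Serialise the retrieved triples into a context block for an LLM prompt."""
    facts = "\n".join(f"({s}, {p}, {o})" for s, p, o in retrieve_facts(kg, entity))
    return f"Facts:\n{facts}\n\nQuestion: {question}\nAnswer:"

prompt = build_prompt(KG, "Marie Curie", "Where was Marie Curie born?")
print(prompt)
```

Because the facts are explicit triples rather than opaque model weights, the same structure also makes the answer traceable: one can point at exactly which triple supported it.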
Related papers
- RA-BLIP: Multimodal Adaptive Retrieval-Augmented Bootstrapping Language-Image Pre-training [55.54020926284334]
Multimodal Large Language Models (MLLMs) have recently received substantial interest, showing emerging potential as general-purpose models for various vision-language tasks.
Retrieval augmentation techniques have proven to be effective plugins for both LLMs and MLLMs.
In this study, we propose multimodal adaptive Retrieval-Augmented Bootstrapping Language-Image Pre-training (RA-BLIP), a novel retrieval-augmented framework for various MLLMs.
arXiv Detail & Related papers (2024-10-18T03:45:19Z)
- All Against Some: Efficient Integration of Large Language Models for Message Passing in Graph Neural Networks [51.19110891434727]
Large Language Models (LLMs) with pretrained knowledge and powerful semantic comprehension abilities have recently shown a remarkable ability to benefit applications using vision and text data.
E-LLaGNN is a framework with an on-demand LLM service that enriches the message-passing procedure of graph learning by enhancing a limited fraction of nodes in the graph.
arXiv Detail & Related papers (2024-07-20T22:09:42Z)
- Rethinking Visual Prompting for Multimodal Large Language Models with External Knowledge [76.45868419402265]
Multimodal large language models (MLLMs) have made significant strides by training on vast, high-quality image-text datasets.
However, the inherent difficulty in explicitly conveying fine-grained or spatially dense information in text, such as masks, poses a challenge for MLLMs.
This paper proposes a new visual prompt approach to integrate fine-grained external knowledge, gleaned from specialized vision models, into MLLMs.
arXiv Detail & Related papers (2024-07-05T17:43:30Z)
- Knowledge Graph-Enhanced Large Language Models via Path Selection [58.228392005755026]
Large Language Models (LLMs) have shown unprecedented performance in various real-world applications.
LLMs are known to generate factually inaccurate outputs, a.k.a. the hallucination problem.
We propose a principled three-stage framework, KELP, to handle the above problems.
arXiv Detail & Related papers (2024-06-19T21:45:20Z)
- Research Trends for the Interplay between Large Language Models and Knowledge Graphs [5.364370360239422]
This survey investigates the synergistic relationship between Large Language Models (LLMs) and Knowledge Graphs (KGs).
It aims to address gaps in current research by exploring areas such as KG Question Answering, ontology generation, KG validation, and the enhancement of KG accuracy and consistency through LLMs.
arXiv Detail & Related papers (2024-06-12T13:52:38Z)
- Integrating Large Language Models with Graphical Session-Based Recommendation [8.086277931395212]
We introduce a framework that integrates large language models with graphical session-based recommendation (SBR), named LLMGR.
This framework bridges the gap by harmoniously integrating LLMs with Graph Neural Networks (GNNs) for SBR tasks.
This integration seeks to leverage the complementary strengths of LLMs in natural language understanding and GNNs in relational data processing.
arXiv Detail & Related papers (2024-02-26T12:55:51Z)
- Large Language Models Can Better Understand Knowledge Graphs Than We Thought [13.336418752729987]
Encoding knowledge graph (KG) embeddings into model parameters becomes increasingly costly.
Current prompting methods often rely on a trial-and-error approach.
We show that unordered linearized triples are more effective for LLMs' understanding of KGs compared to fluent NL text.
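The finding above concerns how KG facts are serialised for a prompt. As a minimal sketch (the triples, separators, and verbalization below are assumptions, not the cited paper's actual data or code), linearization simply renders each triple on its own line, in no particular order, instead of weaving the facts into fluent natural-language text:

```python
import random

# The same three facts, expressed as (subject, predicate, object) triples.
triples = [
    ("Paris", "capital_of", "France"),
    ("France", "currency", "Euro"),
    ("Paris", "population", "2.1 million"),
]

def linearize(triples, shuffle=True):
    """Serialise each triple on its own line, optionally in unordered form."""
    items = list(triples)
    if shuffle:
        random.shuffle(items)
    return "\n".join(f"({s} | {p} | {o})" for s, p, o in items)

def verbalize(triples):
    """Hand-written fluent rendering of the same facts, for comparison."""
    return ("Paris is the capital of France, whose currency is the Euro; "
            "Paris has a population of 2.1 million.")

print(linearize(triples))
```

The paper's result suggests that the terse, unordered triple format on the left-hand path can serve an LLM better than the fluent paragraph, despite being less natural to a human reader.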
arXiv Detail & Related papers (2024-02-18T10:44:03Z)
- An Enhanced Prompt-Based LLM Reasoning Scheme via Knowledge Graph-Integrated Collaboration [7.3636034708923255]
This study proposes a collaborative, training-free reasoning scheme involving tight cooperation between Knowledge Graphs (KGs) and Large Language Models (LLMs).
Through such a cooperative approach, our scheme achieves more reliable knowledge-based reasoning and facilitates the tracing of the reasoning results.
arXiv Detail & Related papers (2024-02-07T15:56:17Z)
- Supervised Knowledge Makes Large Language Models Better In-context Learners [94.89301696512776]
Large Language Models (LLMs) exhibit emerging in-context learning abilities through prompt engineering.
The challenge of improving the generalizability and factuality of LLMs in natural language understanding and question answering remains under-explored.
We propose a framework that enhances the reliability of LLMs as it: 1) generalizes out-of-distribution data, 2) elucidates how LLMs benefit from discriminative models, and 3) minimizes hallucinations in generative tasks.
arXiv Detail & Related papers (2023-12-26T07:24:46Z)
- Give Us the Facts: Enhancing Large Language Models with Knowledge Graphs for Fact-aware Language Modeling [34.59678835272862]
ChatGPT, a representative large language model (LLM), has gained considerable attention due to its powerful emergent abilities.
This paper proposes knowledge graph-enhanced large language models (KGLLMs).
KGLLM provides a solution to enhance LLMs' factual reasoning ability, opening up new avenues for LLM research.
arXiv Detail & Related papers (2023-06-20T12:21:06Z)
- Unifying Large Language Models and Knowledge Graphs: A Roadmap [61.824618473293725]
Large language models (LLMs) are making new waves in the field of natural language processing and artificial intelligence.
Knowledge Graphs (KGs), such as Wikipedia and Huapu, are structured knowledge models that explicitly store rich factual knowledge.
arXiv Detail & Related papers (2023-06-14T07:15:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.