MindMap: Knowledge Graph Prompting Sparks Graph of Thoughts in Large
Language Models
- URL: http://arxiv.org/abs/2308.09729v5
- Date: Sat, 2 Mar 2024 15:02:33 GMT
- Title: MindMap: Knowledge Graph Prompting Sparks Graph of Thoughts in Large
Language Models
- Authors: Yilin Wen, Zifeng Wang, Jimeng Sun
- Abstract summary: Large language models (LLMs) have achieved remarkable performance in natural language understanding and generation tasks.
However, they often suffer from limitations such as difficulty in incorporating new knowledge, generating hallucinations, and explaining their reasoning process.
We propose a novel prompting pipeline, named method, that leverages knowledge graphs (KGs) to enhance LLMs' inference and transparency.
- Score: 34.43660759521586
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) have achieved remarkable performance in natural
language understanding and generation tasks. However, they often suffer from
limitations such as difficulty in incorporating new knowledge, generating
hallucinations, and explaining their reasoning process. To address these
challenges, we propose a novel prompting pipeline, named MindMap, that
leverages knowledge graphs (KGs) to enhance LLMs' inference and transparency.
Our method enables LLMs to comprehend KG inputs and infer with a combination of
implicit and external knowledge. Moreover, our method elicits the mind map of
LLMs, which reveals their reasoning pathways based on the ontology of
knowledge. We evaluate our method on diverse question & answering tasks,
especially in medical domains, and show significant improvements over
baselines. We also introduce a new hallucination evaluation benchmark and
analyze the effects of different components of our method. Our results
demonstrate the effectiveness and robustness of our method in merging knowledge
from LLMs and KGs for combined inference. To reproduce our results and extend
the framework further, we make our codebase available at
https://github.com/wyl-willing/MindMap.
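The abstract describes the pipeline only at a high level (comprehend KG inputs, combine implicit and external knowledge, surface a reasoning pathway). The Python sketch below is an illustrative mock-up of that kind of KG-augmented prompting, not the authors' implementation (see the linked repository for that): a toy triple list, string-matching retrieval with one-hop expansion, linearized evidence placed in the prompt, and a request for the answer plus its reasoning pathway. The names TOY_KG, retrieve_evidence, build_prompt, and the call_llm callable are hypothetical placeholders.

```python
# Minimal sketch of KG-augmented prompting in the spirit of MindMap.
# Not the authors' code; every name here is an illustrative placeholder.
from typing import Callable, List, Tuple

Triple = Tuple[str, str, str]  # (head entity, relation, tail entity)

TOY_KG: List[Triple] = [
    ("fatigue", "is_symptom_of", "anemia"),
    ("pale skin", "is_symptom_of", "anemia"),
    ("anemia", "is_treated_by", "iron supplements"),
    ("anemia", "is_diagnosed_by", "complete blood count"),
]


def retrieve_evidence(question: str, kg: List[Triple]) -> List[Triple]:
    """Seed on entities mentioned in the question, then expand one hop."""
    q = question.lower()
    seeds = [t for t in kg if t[0] in q or t[2] in q]
    linked = {entity for h, _, t in seeds for entity in (h, t)}
    return [t for t in kg if t in seeds or t[0] in linked or t[2] in linked]


def build_prompt(question: str, evidence: List[Triple]) -> str:
    """Linearize the evidence sub-graph and ask for an answer plus a reasoning pathway."""
    lines = [f"({h}) -[{r}]-> ({t})" for h, r, t in evidence]
    return (
        "Knowledge-graph evidence:\n"
        + "\n".join(lines)
        + f"\n\nQuestion: {question}\n"
        "Answer using both the evidence above and your own knowledge, then list the "
        "reasoning pathway (the chain of entities and relations you relied on)."
    )


def answer_with_kg(question: str, kg: List[Triple],
                   call_llm: Callable[[str], str]) -> str:
    """Retrieve evidence, build the prompt, and delegate to the supplied LLM client."""
    return call_llm(build_prompt(question, retrieve_evidence(question, kg)))


if __name__ == "__main__":
    # Echo the assembled prompt instead of calling a real model, so the sketch runs as-is.
    question = ("A patient reports fatigue and pale skin; what condition might this "
                "suggest and how is it confirmed?")
    print(answer_with_kg(question, TOY_KG, call_llm=lambda prompt: prompt))
```

Running it as-is only echoes the assembled prompt; swapping the call_llm placeholder for a real model client would turn the sketch into a working question-answering loop.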
Related papers
- Knowledge Graphs, Large Language Models, and Hallucinations: An NLP Perspective [5.769786334333616]
Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP) based applications including automated text generation, question answering, and others.
They face a significant challenge: hallucinations, where models produce plausible-sounding but factually incorrect responses.
This paper discusses these open challenges covering state-of-the-art datasets and benchmarks as well as methods for knowledge integration and evaluating hallucinations.
arXiv Detail & Related papers (2024-11-21T16:09:05Z)
- Rethinking Visual Prompting for Multimodal Large Language Models with External Knowledge [76.45868419402265]
Multimodal large language models (MLLMs) have made significant strides by training on vast, high-quality image-text datasets.
However, the inherent difficulty in explicitly conveying fine-grained or spatially dense information in text, such as masks, poses a challenge for MLLMs.
This paper proposes a new visual prompt approach to integrate fine-grained external knowledge, gleaned from specialized vision models, into MLLMs.
arXiv Detail & Related papers (2024-07-05T17:43:30Z)
- Knowledge Graph-Enhanced Large Language Models via Path Selection [58.228392005755026]
Large Language Models (LLMs) have shown unprecedented performance in various real-world applications.
LLMs are known to generate factually inaccurate outputs, a.k.a. the hallucination problem.
We propose KELP, a principled framework with three stages, to address this problem.
arXiv Detail & Related papers (2024-06-19T21:45:20Z)
- FAC$^2$E: Better Understanding Large Language Model Capabilities by Dissociating Language and Cognition [56.76951887823882]
Large language models (LLMs) are primarily evaluated by overall performance on various text understanding and generation tasks.
We present FAC$^2$E, a framework for Fine-grAined and Cognition-grounded LLMs' Capability Evaluation.
arXiv Detail & Related papers (2024-02-29T21:05:37Z)
- Large Language Models Can Better Understand Knowledge Graphs Than We Thought [13.336418752729987]
Jointly training knowledge graph (KG) embeddings with model parameters becomes increasingly costly.
Current prompting methods often rely on a trial-and-error approach.
We show that unordered linearized triples are more effective for LLMs' understanding of KGs compared to fluent NL text.
arXiv Detail & Related papers (2024-02-18T10:44:03Z)
- A Comprehensive Study of Knowledge Editing for Large Language Models [82.65729336401027]
Large Language Models (LLMs) have shown extraordinary capabilities in understanding and generating text that closely mirrors human communication.
This paper defines the knowledge editing problem and provides a comprehensive review of cutting-edge approaches.
We introduce a new benchmark, KnowEdit, for a comprehensive empirical evaluation of representative knowledge editing approaches.
arXiv Detail & Related papers (2024-01-02T16:54:58Z)
- KnowledgeNavigator: Leveraging Large Language Models for Enhanced Reasoning over Knowledge Graph [11.808990571175269]
Large language models (LLMs) have achieved outstanding performance on various downstream tasks thanks to their powerful natural language understanding and zero-shot capability, but they still suffer from knowledge limitations.
We propose KnowledgeNavigator, a novel framework that addresses these challenges by efficiently and accurately retrieving external knowledge from knowledge graphs.
We evaluate KnowledgeNavigator on multiple public KGQA benchmarks; the experiments show that the framework is effective and generalizes well.
arXiv Detail & Related papers (2023-12-26T04:22:56Z)
- On Exploring the Reasoning Capability of Large Language Models with Knowledge Graphs [11.878708460150726]
Two research questions are formulated to investigate the accuracy of LLMs in recalling information from pre-training knowledge graphs.
To address these questions, we employ LLMs to perform four distinct knowledge graph reasoning tasks.
Our experimental results demonstrate that LLMs can successfully tackle both simple and complex knowledge graph reasoning tasks from their own memory.
arXiv Detail & Related papers (2023-12-01T05:08:47Z)
- Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation [109.8527403904657]
We show that large language models (LLMs) possess unwavering confidence in their knowledge and cannot handle the conflict between internal and external knowledge well.
Retrieval augmentation proves to be an effective approach in enhancing LLMs' awareness of knowledge boundaries.
We propose a simple method to dynamically utilize supporting documents with our judgement strategy.
arXiv Detail & Related papers (2023-07-20T16:46:10Z)
- Unifying Large Language Models and Knowledge Graphs: A Roadmap [61.824618473293725]
Large language models (LLMs) are making new waves in the field of natural language processing and artificial intelligence.
Knowledge Graphs (KGs), such as Wikipedia and Huapu, are structured knowledge models that explicitly store rich factual knowledge.
arXiv Detail & Related papers (2023-06-14T07:15:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.