MindMap: Knowledge Graph Prompting Sparks Graph of Thoughts in Large
Language Models
- URL: http://arxiv.org/abs/2308.09729v5
- Date: Sat, 2 Mar 2024 15:02:33 GMT
- Title: MindMap: Knowledge Graph Prompting Sparks Graph of Thoughts in Large
Language Models
- Authors: Yilin Wen, Zifeng Wang, Jimeng Sun
- Abstract summary: Large language models (LLMs) have achieved remarkable performance in natural language understanding and generation tasks.
However, they often suffer from limitations such as difficulty in incorporating new knowledge, generating hallucinations, and explaining their reasoning process.
We propose a novel prompting pipeline, named MindMap, that leverages knowledge graphs (KGs) to enhance LLMs' inference and transparency.
- Score: 34.43660759521586
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) have achieved remarkable performance in natural
language understanding and generation tasks. However, they often suffer from
limitations such as difficulty in incorporating new knowledge, generating
hallucinations, and explaining their reasoning process. To address these
challenges, we propose a novel prompting pipeline, named MindMap, that
leverages knowledge graphs (KGs) to enhance LLMs' inference and transparency.
Our method enables LLMs to comprehend KG inputs and infer with a combination of
implicit and external knowledge. Moreover, our method elicits the mind map of
LLMs, which reveals their reasoning pathways based on the ontology of
knowledge. We evaluate our method on diverse question-answering tasks,
especially in medical domains, and show significant improvements over
baselines. We also introduce a new hallucination evaluation benchmark and
analyze the effects of different components of our method. Our results
demonstrate the effectiveness and robustness of our method in merging knowledge
from LLMs and KGs for combined inference. To reproduce our results and extend
the framework further, we make our codebase available at
https://github.com/wyl-willing/MindMap.
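To make the idea concrete, below is a minimal Python sketch of KG-augmented prompting in the spirit of the pipeline described above: gather evidence paths from a knowledge graph, place them in the prompt, and ask the LLM to answer while exposing its reasoning pathway as a mind map. The helper names (retrieve_evidence_paths, build_mindmap_prompt) and the toy medical KG are illustrative assumptions for this sketch, not the authors' implementation; the actual pipeline lives in the linked repository.
```python
from typing import Dict, List, Tuple


def retrieve_evidence_paths(question_entities: List[str],
                            kg: Dict[str, List[Tuple[str, str]]]) -> List[str]:
    """Collect one-hop evidence paths from the KG for each question entity (hypothetical helper)."""
    paths = []
    for ent in question_entities:
        for rel, obj in kg.get(ent, []):
            paths.append(f"{ent} -[{rel}]-> {obj}")
    return paths


def build_mindmap_prompt(question: str, evidence_paths: List[str]) -> str:
    """Assemble a prompt asking the LLM to merge KG evidence with its own
    implicit knowledge and to expose its reasoning pathway as a mind map."""
    evidence_block = "\n".join(f"- {p}" for p in evidence_paths)
    return (
        "You are given a question and evidence paths from a knowledge graph.\n"
        f"Question: {question}\n"
        f"Evidence paths:\n{evidence_block}\n\n"
        "Combine the evidence with your own knowledge to answer the question, "
        "then output a mind map of your reasoning: the entities and relations "
        "you used, organized as a tree from the question to the answer."
    )


if __name__ == "__main__":
    # Toy medical KG used only for illustration.
    toy_kg = {
        "Fever": [("symptom_of", "Influenza")],
        "Influenza": [("treated_with", "Oseltamivir")],
    }
    paths = retrieve_evidence_paths(["Fever", "Influenza"], toy_kg)
    print(build_mindmap_prompt(
        "A patient presents with fever diagnosed as influenza. "
        "What treatment is indicated?", paths))
```
In a full pipeline, the prompt would be sent to an LLM and the returned mind map parsed so that each answer can be traced back to the KG facts and the implicit model knowledge that produced it.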
Related papers
- Refine Knowledge of Large Language Models via Adaptive Contrastive Learning [54.61213933999464]
A mainstream category of methods is to reduce hallucinations by optimizing the knowledge representation of Large Language Models.
We believe that the process of models refining knowledge can greatly benefit from the way humans learn.
In our work, by imitating the human learning process, we design an Adaptive Contrastive Learning strategy.
arXiv Detail & Related papers (2025-02-11T02:19:13Z)
- Aggregate and conquer: detecting and steering LLM concepts by combining nonlinear predictors over multiple layers [16.303681959333883]
We give a general method for detecting semantic concepts in the internal activations of Large Language Models.
We show that our methodology can be easily adapted to steer LLMs toward desirable outputs.
We highlight the generality of our approach by steering LLMs towards new concepts that, to the best of our knowledge, have not been previously considered.
arXiv Detail & Related papers (2025-02-06T01:41:48Z)
- Harnessing Large Language Models for Knowledge Graph Question Answering via Adaptive Multi-Aspect Retrieval-Augmentation [81.18701211912779]
We introduce an Adaptive Multi-Aspect Retrieval-augmentation over KGs (Amar) framework.
This method retrieves knowledge including entities, relations, and subgraphs, and converts each piece of retrieved text into prompt embeddings.
Our method has achieved state-of-the-art performance on two common datasets.
arXiv Detail & Related papers (2024-12-24T16:38:04Z)
- Thinking with Knowledge Graphs: Enhancing LLM Reasoning Through Structured Data [0.9284740716447338]
Large Language Models (LLMs) have demonstrated remarkable capabilities in natural language understanding and generation.
Recent research has shown promising results in leveraging knowledge graphs (KGs) to enhance LLM performance.
We have developed different techniques that tightly integrate KG structures and semantics into LLM representations.
arXiv Detail & Related papers (2024-12-14T02:51:47Z)
- KaLM: Knowledge-aligned Autoregressive Language Modeling via Dual-view Knowledge Graph Contrastive Learning [74.21524111840652]
This paper proposes KaLM, a Knowledge-aligned Language Modeling approach.
It fine-tunes autoregressive large language models to align with KG knowledge via the joint objective of explicit knowledge alignment and implicit knowledge alignment.
Notably, our method achieves a significant performance boost in evaluations of knowledge-driven tasks.
arXiv Detail & Related papers (2024-12-06T11:08:24Z)
- Knowledge Graphs, Large Language Models, and Hallucinations: An NLP Perspective [5.769786334333616]
Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP) based applications including automated text generation, question answering, and others.
They face a significant challenge: hallucinations, where models produce plausible-sounding but factually incorrect responses.
This paper discusses these open challenges covering state-of-the-art datasets and benchmarks as well as methods for knowledge integration and evaluating hallucinations.
arXiv Detail & Related papers (2024-11-21T16:09:05Z)
- Rethinking Visual Prompting for Multimodal Large Language Models with External Knowledge [76.45868419402265]
Multimodal large language models (MLLMs) have made significant strides by training on vast high-quality image-text datasets.
However, the inherent difficulty in explicitly conveying fine-grained or spatially dense information in text, such as masks, poses a challenge for MLLMs.
This paper proposes a new visual prompt approach to integrate fine-grained external knowledge, gleaned from specialized vision models, into MLLMs.
arXiv Detail & Related papers (2024-07-05T17:43:30Z)
- Knowledge Graph-Enhanced Large Language Models via Path Selection [58.228392005755026]
Large Language Models (LLMs) have shown unprecedented performance in various real-world applications.
LLMs are known to generate factually inaccurate outputs, a.k.a. the hallucination problem.
We propose KELP, a principled three-stage framework, to handle the above problems.
arXiv Detail & Related papers (2024-06-19T21:45:20Z)
- FAC$^2$E: Better Understanding Large Language Model Capabilities by Dissociating Language and Cognition [56.76951887823882]
Large language models (LLMs) are primarily evaluated by overall performance on various text understanding and generation tasks.
We present FAC$^2$E, a framework for Fine-grAined and Cognition-grounded LLMs' Capability Evaluation.
arXiv Detail & Related papers (2024-02-29T21:05:37Z)
- KnowledgeNavigator: Leveraging Large Language Models for Enhanced Reasoning over Knowledge Graph [11.808990571175269]
Large language models (LLMs) have achieved outstanding performance on various downstream tasks thanks to their powerful natural language understanding and zero-shot capabilities, but they still suffer from knowledge limitations.
We propose a novel framework, KnowledgeNavigator, to address these challenges by efficiently and accurately retrieving external knowledge from knowledge graphs.
We evaluate KnowledgeNavigator on multiple public KGQA benchmarks; the experiments show that the framework is effective and generalizes well.
arXiv Detail & Related papers (2023-12-26T04:22:56Z)
- On Exploring the Reasoning Capability of Large Language Models with Knowledge Graphs [11.878708460150726]
Two research questions are formulated to investigate the accuracy of LLMs in recalling information from pre-training knowledge graphs.
To address these questions, we employ LLMs to perform four distinct knowledge graph reasoning tasks.
Our experimental results demonstrate that LLMs can successfully tackle both simple and complex knowledge graph reasoning tasks from their own memory.
arXiv Detail & Related papers (2023-12-01T05:08:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.