Clue-Guided Path Exploration: Optimizing Knowledge Graph Retrieval with Large Language Models to Address the Information Black Box Challenge
- URL: http://arxiv.org/abs/2401.13444v2
- Date: Mon, 19 Aug 2024 03:25:52 GMT
- Title: Clue-Guided Path Exploration: Optimizing Knowledge Graph Retrieval with Large Language Models to Address the Information Black Box Challenge
- Authors: Dehao Tao, Feng Huang, Congqi Wang, Yongfeng Huang, Minghu Jiang
- Abstract summary: We propose a Clue-Guided Path Exploration (CGPE) framework to optimize knowledge retrieval based on large language models.
Experiments on open-source datasets reveal that CGPE outperforms previous methods and is highly applicable to LLMs with fewer parameters.
- Score: 19.40489486138002
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent times, large language models (LLMs) have showcased remarkable capabilities. However, updating their knowledge poses challenges, potentially leading to inaccuracies when confronted with unfamiliar queries. To address this issue, integrating external knowledge bases such as knowledge graphs with large language models is a viable approach. The key challenge lies in extracting the required knowledge from knowledge graphs based on natural language, demanding high semantic understanding. Therefore, researchers are considering leveraging large language models directly for knowledge retrieval from these graphs. Current efforts typically rely on the comprehensive problem-solving capabilities of large language models. We argue that a problem we term the 'information black box' can significantly impact the practical effectiveness of such methods. Moreover, such methods are less effective for scenarios where the questions are unfamiliar to the large language models. In this paper, we propose a Clue-Guided Path Exploration (CGPE) framework to optimize knowledge retrieval based on large language models. By addressing the 'information black box' issue and employing single-task approaches instead of complex tasks, we have enhanced the accuracy and efficiency of using large language models for retrieving knowledge from knowledge graphs. Experiments on open-source datasets reveal that CGPE outperforms previous methods and is highly applicable to LLMs with fewer parameters. In some instances, even ChatGLM3, with its 6 billion parameters, can rival the performance of GPT-4. Furthermore, the results indicate a minimal invocation frequency of CGPE on LLMs, suggesting reduced computational overhead. For organizations and individuals facing constraints in computational resources, our research offers significant practical value.
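The abstract describes CGPE only at a high level, so the following is a minimal illustrative sketch (not the authors' code) of what clue-guided, single-task exploration over a knowledge graph could look like. The `llm` helper, the toy `KG` adjacency list, and all prompts are assumptions.

```python
# Illustrative clue-guided path exploration over a toy knowledge graph.
# Each LLM call is a single narrow task, and every intermediate path is
# kept explicit, which is the point of avoiding the "information black box".

def llm(prompt: str) -> str:
    """Placeholder: wire this to any chat/completion model."""
    raise NotImplementedError

# Toy KG: subject -> list of (relation, object) edges.
KG = {
    "Paris": [("capital_of", "France"), ("located_on", "Seine")],
    "France": [("currency", "Euro"), ("continent", "Europe")],
}

def extract_clues(question: str) -> list[str]:
    # Single task 1: list the clues needed to answer, nothing more.
    reply = llm(f"List the entities/relations needed to answer, one per line:\n{question}")
    return [ln.strip() for ln in reply.splitlines() if ln.strip()]

def answer(question: str, start_entity: str, max_hops: int = 4) -> str:
    clues, entity, path = extract_clues(question), start_entity, []
    for _ in range(max_hops):
        edges = KG.get(entity, [])
        if not clues or not edges:
            break
        # Single task 2: pick the one outgoing edge matching the current clue.
        menu = "\n".join(f"{i}: {r} -> {o}" for i, (r, o) in enumerate(edges))
        choice = llm(f"Clue: {clues[0]}\nPick the matching edge number, or 'none':\n{menu}").strip()
        if not choice.isdigit() or int(choice) >= len(edges):
            break
        rel, obj = edges[int(choice)]
        path.append((entity, rel, obj))
        entity, clues = obj, clues[1:]  # clue satisfied, move on
    # Single task 3: compose the answer from the explicit retrieved path.
    return llm(f"Question: {question}\nRetrieved path: {path}\nAnswer briefly:")
```

Because each call is one small decision over explicit candidates, the sketch also suggests why the approach can suit smaller models such as ChatGLM3-6B.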
Related papers
- KBAlign: Efficient Self Adaptation on Specific Knowledge Bases [75.78948575957081]
Large language models (LLMs) usually rely on retrieval-augmented generation to exploit knowledge materials on the fly.
We propose KBAlign, an approach designed for efficient adaptation to downstream tasks involving knowledge bases.
Our method utilizes iterative training with self-annotated data such as Q&A pairs and revision suggestions, enabling the model to grasp the knowledge content efficiently.
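As a rough sketch of the self-annotation loop described here (the real KBAlign training setup will differ), assuming hypothetical `llm` and `finetune` hooks:

```python
# Sketch: iterative adaptation on self-annotated Q&A pairs plus revisions.

def llm(prompt: str) -> str: ...        # placeholder: any generation model
def finetune(model, examples): ...      # placeholder: any tuning toolkit

def kb_adapt(kb_chunks, rounds=2):
    model, data = None, []
    for _ in range(rounds):
        for chunk in kb_chunks:
            q = llm(f"Write one question answerable from this passage:\n{chunk}")
            a = llm(f"Passage:\n{chunk}\nQuestion: {q}\nAnswer:")
            a = llm(f"Revise this answer for faithfulness to the passage:\n{a}")
            data.append({"question": q, "answer": a, "context": chunk})
        model = finetune(model, data)   # each round trains on the grown set
    return model
```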
arXiv Detail & Related papers (2024-11-22T08:21:03Z)
- Exploring Knowledge Boundaries in Large Language Models for Retrieval Judgment [56.87031484108484]
Large Language Models (LLMs) are increasingly recognized for their practical applications, yet their internal knowledge has boundaries.
Retrieval-Augmented Generation (RAG) tackles this challenge and has shown a significant impact on LLMs.
By minimizing retrieval requests that yield neutral or harmful results, we can effectively reduce both time and computational costs.
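A minimal sketch of such a retrieval gate (the prompt and YES/NO protocol are assumptions, not the paper's method):

```python
# Sketch: issue a retrieval request only when the model judges the question
# to be outside its own knowledge boundary.

def llm(prompt: str) -> str: ...        # placeholder completion function
def retrieve(query: str) -> str: ...    # placeholder retriever

def answer_with_gate(question: str) -> str:
    verdict = llm(
        "Can you answer the following question reliably from your own "
        f"knowledge? Reply YES or NO only.\n{question}"
    ).strip().upper()
    if verdict.startswith("YES"):
        return llm(question)         # skip retrieval: saves time and compute
    context = retrieve(question)     # retrieve only where it likely helps
    return llm(f"Context:\n{context}\n\nQuestion: {question}")
```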
arXiv Detail & Related papers (2024-11-09T15:12:28Z)
- Combining LLMs and Knowledge Graphs to Reduce Hallucinations in Question Answering [0.0]
Large Language Models (LLMs) and Knowledge Graphs (KGs) are combined to improve the accuracy and reliability of question-answering systems.
Our method incorporates a query checker that ensures the syntactical and semantic validity of LLM-generated queries.
To make this approach accessible, a user-friendly web-based interface has been developed.
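For illustration, a query checker along these lines might look as follows, assuming the LLM emits SPARQL (the paper's actual query language and checks may differ); rdflib is used for the syntax check:

```python
import re
from rdflib import Graph
from rdflib.plugins.sparql import prepareQuery

def check_query(query: str, kg: Graph) -> tuple[bool, str]:
    try:
        prepareQuery(query)                      # raises on syntax errors
    except Exception as exc:
        return False, f"syntax error: {exc}"
    known = {str(term) for triple in kg for term in triple}
    for iri in re.findall(r"<([^>]+)>", query):  # naive IRI extraction
        if iri not in known:
            return False, f"unknown IRI: {iri}"  # crude semantic check
    return True, "ok"
```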
arXiv Detail & Related papers (2024-09-06T10:49:46Z)
- Can Graph Learning Improve Planning in LLM-based Agents? [61.47027387839096]
Task planning in language agents is emerging as an important research topic alongside the development of large language models (LLMs).
In this paper, we explore graph learning-based methods for task planning, a direction orthogonal to the prevalent focus on prompt design.
Our interest in graph learning stems from a theoretical discovery: the biases of attention and auto-regressive loss impede LLMs' ability to effectively navigate decision-making on graphs.
arXiv Detail & Related papers (2024-05-29T14:26:24Z)
- Infusing Knowledge into Large Language Models with Contextual Prompts [5.865016596356753]
We propose a simple yet generalisable approach for knowledge infusion by generating prompts from the context in the input text.
Our experiments show the effectiveness of our approach, which we evaluate by probing the fine-tuned LLMs.
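Read literally, the approach could be sketched like this, with `extract_entities` and `lookup_facts` as hypothetical helpers standing in for whatever the paper uses to derive context:

```python
# Sketch: build a knowledge-infusing prompt from facts about entities
# mentioned in the input text.

def extract_entities(text: str) -> list[str]: ...  # placeholder NER
def lookup_facts(entity: str) -> list[str]: ...    # placeholder KB lookup
def llm(prompt: str) -> str: ...                   # placeholder model

def infused_answer(text: str) -> str:
    facts = [f for e in extract_entities(text) for f in lookup_facts(e)]
    return llm("Known facts:\n" + "\n".join(facts) + f"\n\nInput: {text}")
```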
arXiv Detail & Related papers (2024-03-03T11:19:26Z)
- Query of CC: Unearthing Large Scale Domain-Specific Knowledge from Public Corpora [104.16648246740543]
We propose an efficient data collection method based on large language models.
The method bootstraps seed information through a large language model and retrieves related data from public corpora.
It not only collects knowledge-related data for specific domains but also unearths data with potential reasoning procedures.
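A toy version of that bootstrap-and-retrieve loop (the word-overlap scorer stands in for a real index such as BM25 over a public corpus; `llm` is hypothetical):

```python
def llm(prompt: str) -> str: ...   # placeholder query generator

def collect(seed: str, corpus: list[str], k: int = 5) -> list[str]:
    # Bootstrap: expand the seed into search queries via the LLM.
    queries = [q for q in llm(f"Write 5 search queries about: {seed}").splitlines() if q]
    terms = {w.lower() for q in queries for w in q.split()}
    # Retrieve: rank corpus documents by overlap with the expanded queries.
    ranked = sorted(corpus, key=lambda d: -len(terms & set(d.lower().split())))
    return ranked[:k]
```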
arXiv Detail & Related papers (2024-01-26T03:38:23Z)
- Self-Knowledge Guided Retrieval Augmentation for Large Language Models [59.771098292611846]
Large language models (LLMs) have shown superior performance without task-specific fine-tuning.
Retrieval-based methods can offer non-parametric world knowledge and improve the performance on tasks such as question answering.
Self-Knowledge guided Retrieval augmentation (SKR) is a simple yet effective method that lets LLMs refer to the questions they have previously encountered.
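One way to picture SKR's decision rule, using token-overlap similarity as an assumed stand-in for the paper's actual similarity measure:

```python
# Sketch: retrieve only when the most similar previously seen question was
# one the model failed to answer from its own knowledge.

def similarity(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(1, len(wa | wb))

def should_retrieve(question: str, history: list[tuple[str, bool]]) -> bool:
    # history entries: (past question, answered correctly without retrieval)
    if not history:
        return True
    _, was_correct = max(history, key=lambda h: similarity(question, h[0]))
    return not was_correct
```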
arXiv Detail & Related papers (2023-10-08T04:22:33Z)
- Graph Neural Prompting with Large Language Models [32.97391910476073]
Graph Neural Prompting (GNP) is a novel plug-and-play method to assist pre-trained language models in learning beneficial knowledge from knowledge graphs.
Extensive experiments on multiple datasets demonstrate the superiority of GNP on both commonsense and biomedical reasoning tasks.
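A toy illustration of the plug-and-play shape of such a method (dimensions and the mean-pool stand-in for a GNN are assumptions, not the GNP architecture):

```python
import torch
import torch.nn as nn

class GraphPrompt(nn.Module):
    """Pool node embeddings from a retrieved subgraph into one soft-prompt
    vector in the LLM's embedding space; the LLM itself stays frozen."""
    def __init__(self, node_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Linear(node_dim, llm_dim)   # graph space -> LLM space

    def forward(self, node_embs: torch.Tensor) -> torch.Tensor:
        pooled = node_embs.mean(dim=0)             # crude stand-in for a GNN
        return self.proj(pooled).unsqueeze(0)      # shape (1, llm_dim)

# Usage: prompt = GraphPrompt(128, 4096)(torch.randn(12, 128)), then
# concatenate `prompt` before the token embeddings fed to the frozen LLM.
```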
arXiv Detail & Related papers (2023-09-27T06:33:29Z)
- Exploring Large Language Model for Graph Data Understanding in Online Job Recommendations [63.19448893196642]
We present a novel framework that harnesses the rich contextual information and semantic representations provided by large language models to analyze behavior graphs.
By leveraging this capability, our framework enables personalized and accurate job recommendations for individual users.
arXiv Detail & Related papers (2023-07-10T11:29:41Z)
- Large Language Models Can Be Easily Distracted by Irrelevant Context [29.315230178997002]
We investigate how model problem-solving accuracy can be influenced by irrelevant context.
We use a benchmark to measure the distractibility of cutting-edge prompting techniques for large language models.
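A minimal sketch of how such a distractibility measurement can be run (the dataset format and `llm` are assumptions):

```python
def llm(prompt: str) -> str: ...   # placeholder model under test

def distractibility(problems: list[tuple[str, str]], distractor: str) -> float:
    def acc(items):
        return sum(llm(q).strip() == a for q, a in items) / len(items)
    clean = acc(problems)
    noisy = acc([(f"{distractor} {q}", a) for q, a in problems])
    return clean - noisy   # accuracy drop caused by irrelevant context
```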
arXiv Detail & Related papers (2023-01-31T20:48:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.