Combining LLMs and Knowledge Graphs to Reduce Hallucinations in Question Answering
- URL: http://arxiv.org/abs/2409.04181v2
- Date: Thu, 31 Oct 2024 11:01:16 GMT
- Title: Combining LLMs and Knowledge Graphs to Reduce Hallucinations in Question Answering
- Authors: Larissa Pusch, Tim O. F. Conrad
- Abstract summary: Large Language Models (LLMs) and Knowledge Graphs (KGs) are combined to improve the accuracy and reliability of question-answering systems.
Our method incorporates a query checker that ensures the syntactical and semantic validity of LLM-generated queries.
To make this approach accessible, a user-friendly web-based interface has been developed.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Advancements in natural language processing have revolutionized the way we interact with digital information systems, such as databases, making them more accessible. However, challenges persist, especially when accuracy is critical, as in the biomedical domain. A key issue is the hallucination problem, where models generate information unsupported by the underlying data, potentially leading to dangerous misinformation. This paper presents a novel approach designed to bridge this gap by combining Large Language Models (LLMs) and Knowledge Graphs (KGs) to improve the accuracy and reliability of question-answering systems, demonstrated on a biomedical KG. Built on the LangChain framework, our method incorporates a query checker that ensures the syntactical and semantic validity of LLM-generated queries, which are then used to extract information from a Knowledge Graph, substantially reducing errors like hallucinations. We evaluated the overall performance using a new benchmark dataset of 50 biomedical questions, testing several LLMs, including GPT-4 Turbo and llama3:70b. Our results indicate that while GPT-4 Turbo outperforms other models in generating accurate queries, open-source models like llama3:70b show promise with appropriate prompt engineering. To make this approach accessible, a user-friendly web-based interface has been developed, allowing users to input natural language queries, view generated and corrected Cypher queries, and verify the resulting paths for accuracy. Overall, this hybrid approach effectively addresses common issues such as data gaps and hallucinations, offering a reliable and intuitive solution for question-answering systems. The source code for generating the results of this paper and for the user interface can be found in our Git repository: https://git.zib.de/lpusch/cyphergenkg-gui
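As a rough illustration of the pipeline the abstract describes (an LLM generates a Cypher query, a checker validates it, and only validated queries run against the KG), the sketch below uses the official neo4j Python driver and a generic `generate` callable standing in for any LLM client. The prompt text, function name, and retry budget are assumptions for illustration, not the authors' implementation (their actual code is in the Git repository above).

```python
from typing import Callable

from neo4j import GraphDatabase
from neo4j.exceptions import ClientError


def answer_with_checked_query(question: str,
                              generate: Callable[[str], str],
                              uri: str = "bolt://localhost:7687",
                              max_retries: int = 3) -> list:
    """Generate a Cypher query for `question`, validate it, then execute it."""
    # Hypothetical prompt; the paper's real prompts differ.
    prompt = f"Translate into a single Cypher query for a biomedical KG:\n{question}"
    with GraphDatabase.driver(uri) as driver, driver.session() as session:
        for _ in range(max_retries):
            query = generate(prompt)
            try:
                # EXPLAIN compiles the query without executing it, so
                # syntax errors surface before any data is touched.
                session.run("EXPLAIN " + query).consume()
            except ClientError as err:
                # Feed the error back so the next attempt can self-correct.
                prompt += f"\n\nThis query:\n{query}\nfailed with: {err}\nPlease fix it."
                continue
            return session.run(query).data()  # validated query: run for real
    raise RuntimeError("no syntactically valid Cypher within retry budget")
```

The design point is that the checker sits between generation and execution: a query that does not compile never reaches the graph, which is how this kind of pipeline cuts off one class of hallucinated answers at the source.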
Related papers
- Towards Evaluating Large Language Models for Graph Query Generation [49.49881799107061]
Large Language Models (LLMs) are revolutionizing the landscape of Generative Artificial Intelligence (GenAI).
This paper presents a comparative study addressing the challenge of generating queries in Cypher, a powerful language for interacting with graph databases, using open-access LLMs.
Our empirical analysis of query generation accuracy reveals that Claude 3.5 Sonnet outperforms its counterparts in this specific domain.
arXiv Detail & Related papers (2024-11-13T09:11:56Z)
- Context-Augmented Code Generation Using Programming Knowledge Graphs [0.0]
Large Language Models (LLMs) and Code-LLMs (CLLMs) frequently face difficulties when dealing with challenging and complex problems.
We present a novel framework that leverages a Programming Knowledge Graph (PKG) to semantically represent and retrieve code.
arXiv Detail & Related papers (2024-10-09T16:35:41Z)
- LLM-based SPARQL Query Generation from Natural Language over Federated Knowledge Graphs [0.0]
We introduce a Retrieval-Augmented Generation (RAG) system for translating user questions into accurate SPARQL queries over bioinformatics knowledge graphs (KGs).
To enhance accuracy and reduce hallucinations in query generation, the system utilises metadata from the KGs, including query examples and schema information, and incorporates a validation step to correct generated queries; a minimal parse-check sketch appears after this list.
The system is available online at chat.expasy.org.
arXiv Detail & Related papers (2024-10-08T14:09:12Z)
- Debate on Graph: a Flexible and Reliable Reasoning Framework for Large Language Models [33.662269036173456]
Large Language Models (LLMs) may suffer from hallucinations in real-world applications due to the lack of relevant knowledge.
Knowledge Graph Question Answering (KGQA) serves as a critical touchstone for this integration.
We propose Debating over Graphs (DoG), an interactive KGQA framework that leverages the interactive learning capabilities of LLMs to perform reasoning over graphs.
arXiv Detail & Related papers (2024-09-05T01:11:58Z)
- Fact Finder -- Enhancing Domain Expertise of Large Language Models by Incorporating Knowledge Graphs [2.7386111894524]
We introduce a hybrid system that augments Large Language Models with domain-specific knowledge graphs (KGs).
We evaluate our system on a curated dataset of 69 samples, achieving a precision of 78% in retrieving correct KG nodes.
Our findings indicate that the hybrid system surpasses a standalone LLM in accuracy and completeness.
arXiv Detail & Related papers (2024-08-06T07:45:05Z)
- Integrating Large Language Models with Graph-based Reasoning for Conversational Question Answering [58.17090503446995]
We focus on a conversational question answering task which combines the challenges of understanding questions in context and reasoning over evidence gathered from heterogeneous sources like text, knowledge graphs, tables, and infoboxes.
Our method utilizes a graph structured representation to aggregate information about a question and its context.
arXiv Detail & Related papers (2024-06-14T13:28:03Z)
- Clue-Guided Path Exploration: Optimizing Knowledge Graph Retrieval with Large Language Models to Address the Information Black Box Challenge [19.40489486138002]
We propose a Clue-Guided Path Exploration (CGPE) framework to optimize knowledge retrieval based on large language models.
Experiments on open-source datasets reveal that CGPE outperforms previous methods and is highly applicable to LLMs with fewer parameters.
arXiv Detail & Related papers (2024-01-24T13:36:50Z)
- ReasoningLM: Enabling Structural Subgraph Reasoning in Pre-trained Language Models for Question Answering over Knowledge Graph [142.42275983201978]
We propose a subgraph-aware self-attention mechanism that imitates a GNN to perform structured reasoning; a toy attention-mask sketch appears after this list.
We also adopt an adaptation tuning strategy that adapts the model parameters using 20,000 subgraphs paired with synthesized questions.
Experiments show that ReasoningLM surpasses state-of-the-art models by a large margin, even with fewer updated parameters and less training data.
arXiv Detail & Related papers (2023-12-30T07:18:54Z)
- SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models [55.60306377044225]
"SelfCheckGPT" is a simple sampling-based approach to fact-check the responses of black-box models.
We investigate this approach by using GPT-3 to generate passages about individuals from the WikiBio dataset.
arXiv Detail & Related papers (2023-03-15T19:31:21Z)
- Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback [127.75419038610455]
Large language models (LLMs) are able to generate human-like, fluent responses for many downstream tasks.
This paper proposes LLM-Augmenter, a system that augments a black-box LLM with a set of plug-and-play modules.
arXiv Detail & Related papers (2023-02-24T18:48:43Z)
- Explaining Patterns in Data with Language Models via Interpretable Autoprompting [143.4162028260874]
We introduce interpretable autoprompting (iPrompt), an algorithm that generates a natural-language string explaining the data.
iPrompt can yield meaningful insights by accurately recovering ground-truth dataset descriptions; a toy version of its ranking step appears after this list.
Experiments with an fMRI dataset show the potential for iPrompt to aid in scientific discovery.
arXiv Detail & Related papers (2022-10-04T18:32:14Z)
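For the LLM-based SPARQL generation entry above, the validation step it mentions can be, at its lightest, a parse check before a generated query ever reaches an endpoint. A minimal sketch using rdflib; the library choice and function name are assumptions here, since the summary does not name the paper's tooling.

```python
from rdflib.plugins.sparql import prepareQuery


def sparql_parse_error(query: str) -> str | None:
    """Return None if `query` parses as SPARQL, else the parser's message."""
    try:
        prepareQuery(query)  # parses and translates to algebra, no endpoint needed
        return None
    except Exception as err:  # rdflib raises pyparsing ParseException subclasses
        return str(err)
```

In a system like the one described, a non-None error message would be fed back to the LLM so it can correct the query, analogous to the Cypher loop sketched earlier.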
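For the ReasoningLM entry, one plausible reading of "subgraph-aware self-attention" is an attention mask that lets serialized graph-node tokens attend only to their neighbors (plus the question tokens), so attention flow mimics GNN message passing. The sketch below is a toy reconstruction under that reading, not the paper's architecture.

```python
import torch


def subgraph_attention_mask(num_text_tokens: int,
                            num_nodes: int,
                            edges: list[tuple[int, int]]) -> torch.Tensor:
    """Boolean mask (True = may attend) over [text tokens | node tokens]."""
    total = num_text_tokens + num_nodes
    mask = torch.zeros(total, total, dtype=torch.bool)
    mask[:num_text_tokens, :] = True   # question tokens attend everywhere
    mask[:, :num_text_tokens] = True   # all positions attend to the question
    off = num_text_tokens              # node i lives at position off + i
    for i in range(num_nodes):
        mask[off + i, off + i] = True  # node self-loops
    for s, d in edges:
        # Neighbors attend to each other, imitating one GNN message step.
        mask[off + s, off + d] = True
        mask[off + d, off + s] = True
    return mask
```

Passed as the attention mask of a standard Transformer layer, this restricts information flow among node tokens to graph edges while leaving the question fully visible.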
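For the SelfCheckGPT entry, the sampling check reduces to: re-sample the answer several times and score each sentence by how poorly the samples support it. The sketch below substitutes a simple token-overlap score for the paper's learned scorers, and the `sample` callable stands in for the black-box model.

```python
from typing import Callable, List


def selfcheck_scores(answer_sentences: List[str],
                     sample: Callable[[], str],
                     n_samples: int = 5) -> List[float]:
    """Per-sentence inconsistency score in [0, 1]; higher = less supported."""
    samples = [sample().lower().split() for _ in range(n_samples)]
    scores = []
    for sent in answer_sentences:
        tokens = set(sent.lower().split())
        if not tokens:
            scores.append(0.0)
            continue
        # Fraction of the sentence's tokens found in each sampled response,
        # averaged over samples; inverted so high = likely hallucinated.
        support = sum(len(tokens & set(s)) / len(tokens) for s in samples)
        scores.append(1.0 - support / n_samples)
    return scores
```

Sentences the model cannot reproduce consistently across samples get high scores, which is the zero-resource signal the paper exploits.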
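Finally, for the iPrompt entry: the core scoring idea (rank candidate natural-language explanations by how well each one, used as a prompt, helps the model predict the data) reduces to a few lines. The `loglikelihood` callable is an assumed stand-in for any scoring LLM, and iPrompt's iterative proposal and truncation of candidates is omitted.

```python
from typing import Callable, List, Tuple


def rank_explanations(candidates: List[str],
                      data: List[Tuple[str, str]],
                      loglikelihood: Callable[[str, str], float]) -> List[str]:
    """Sort candidate explanations, best first, by the total log-likelihood
    the model assigns to each output y given `explanation + input x`."""
    def fit(explanation: str) -> float:
        return sum(loglikelihood(f"{explanation}\nInput: {x}\nOutput:", y)
                   for x, y in data)
    return sorted(candidates, key=fit, reverse=True)
```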
This list is automatically generated from the titles and abstracts of the papers on this site.