Enhancing Large Language Models with Pseudo- and Multisource- Knowledge
Graphs for Open-ended Question Answering
- URL: http://arxiv.org/abs/2402.09911v1
- Date: Thu, 15 Feb 2024 12:20:02 GMT
- Title: Enhancing Large Language Models with Pseudo- and Multisource- Knowledge
Graphs for Open-ended Question Answering
- Authors: Jiaxiang Liu, Tong Zhou, Yubo Chen, Kang Liu, Jun Zhao
- Abstract summary: We propose a framework that combines Pseudo-Graph Generation and Atomic Knowledge Verification.
Compared to the baseline, this approach yields a minimum improvement of 11.5 in the ROUGE-L score for open-ended questions.
- Score: 23.88063210973303
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mitigating the hallucinations of Large Language Models (LLMs) and enhancing
them is a crucial task. Although some existing methods employ model
self-enhancement techniques, they fall short of effectively addressing unknown
factual hallucinations. Knowledge Graph (KG) enhancement approaches, in turn,
fail to address generalization across different KG sources and the enhancement
of open-ended question answering simultaneously. To tackle these limitations,
we propose a framework that combines Pseudo-Graph Generation and Atomic
Knowledge Verification. Pseudo-Graph Generation enables KG-based enhancement
of LLMs in an open-ended question-answering setting, while Atomic Knowledge
Verification uses atomic-level knowledge querying and verification to achieve
generalizability across different KG sources. Compared to the baseline, this
approach yields a minimum improvement of 11.5 in the ROUGE-L score for
open-ended questions. For precise questions, we observe a minimum accuracy
improvement of 7.5. Moreover, we demonstrate that the framework generalizes
across different KG sources. In summary, our results pave the way for
enhancing LLMs by incorporating Pseudo- and Multisource-KGs, particularly in
the context of open-ended questions.
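The pipeline the abstract describes — have the LLM draft a pseudo-graph of candidate triples, then verify each triple atomically against a KG — can be sketched roughly as follows. The function names, the toy KG, and the stubbed LLM are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of Pseudo-Graph Generation + Atomic Knowledge Verification.
# All names here are hypothetical; the real system prompts an LLM and
# queries real KG sources.

def generate_pseudo_graph(question, llm):
    """Pseudo-Graph Generation: ask the LLM to draft candidate
    (head, relation, tail) triples it believes answer the question."""
    return llm(question)

def verify_atomic(triple, kg):
    """Atomic Knowledge Verification: query one triple at a time, so the
    check does not depend on any particular KG source's schema."""
    return triple in kg

def verified_knowledge(question, llm, kg):
    pseudo_graph = generate_pseudo_graph(question, llm)
    # Keep only KG-supported triples as grounding context; drop the rest
    # to curb factual hallucination.
    return [t for t in pseudo_graph if verify_atomic(t, kg)]

# Toy multisource KG flattened into a set of triples.
kg = {
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
}

# Stub LLM: returns one correct and one hallucinated triple.
stub_llm = lambda q: [
    ("Paris", "capital_of", "France"),
    ("Paris", "capital_of", "Italy"),
]

verified = verified_knowledge("Where is Paris?", stub_llm, kg)
print(verified)  # only the KG-supported triple survives
```

In a real setting, `verify_atomic` would issue an atomic query (e.g. SPARQL) per triple, which is what makes the verification step portable across KG sources.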
Related papers
- Knowledge Graph-extended Retrieval Augmented Generation for Question Answering [10.49712834719005]
This paper proposes a system that integrates Large Language Models (LLMs) and Knowledge Graphs (KGs) without requiring training.
The resulting approach can be classified as a specific form of a Retrieval Augmented Generation (RAG) with a KG.
It includes a question decomposition module to enhance multi-hop information retrieval and answerability.
arXiv Detail & Related papers (2025-04-11T18:03:02Z)
- Question-Aware Knowledge Graph Prompting for Enhancing Large Language Models [51.47994645529258]
We propose Question-Aware Knowledge Graph Prompting (QAP), which incorporates question embeddings into GNN aggregation to dynamically assess KG relevance.
Experimental results demonstrate that QAP outperforms state-of-the-art methods across multiple datasets, highlighting its effectiveness.
arXiv Detail & Related papers (2025-03-30T17:09:11Z)
- Decoding on Graphs: Faithful and Sound Reasoning on Knowledge Graphs through Generation of Well-Formed Chains [66.55612528039894]
Knowledge Graphs (KGs) can serve as reliable knowledge sources for question answering (QA).
We present DoG (Decoding on Graphs), a novel framework that facilitates a deep synergy between LLMs and KGs.
Experiments across various KGQA tasks with different background KGs demonstrate that DoG achieves superior and robust performance.
arXiv Detail & Related papers (2024-10-24T04:01:40Z)
- Graph-constrained Reasoning: Faithful Reasoning on Knowledge Graphs with Large Language Models [83.28737898989694]
Large language models (LLMs) struggle with faithful reasoning due to knowledge gaps and hallucinations.
We introduce graph-constrained reasoning (GCR), a novel framework that bridges structured knowledge in KGs with unstructured reasoning in LLMs.
GCR achieves state-of-the-art performance and exhibits strong zero-shot generalizability to unseen KGs without additional training.
arXiv Detail & Related papers (2024-10-16T22:55:17Z)
- Comprehending Knowledge Graphs with Large Language Models for Recommender Systems [13.270018897057293]
We propose a novel method called CoLaKG, which leverages large language models for knowledge-aware recommendation.
We first extract subgraphs centered on each item from the KG and convert them into textual inputs for the LLM.
The LLM then outputs its comprehension of these item-centered subgraphs, which are subsequently transformed into semantic embeddings.
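The CoLaKG pipeline summarized above — extract an item-centered subgraph, verbalize it as text, and let an LLM turn that text into a semantic embedding — might look roughly like this. The function names, the toy KG, and the omission of the actual LLM/encoder calls are assumptions for illustration, not the paper's code.

```python
# Hypothetical sketch of an item-centered subgraph-to-text step,
# the part of the CoLaKG-style pipeline that precedes the LLM call.

def item_subgraph(item, triples):
    """Collect all triples whose head or tail is the item (1-hop)."""
    return [t for t in triples if item in (t[0], t[2])]

def verbalize(subgraph):
    """Convert triples into a textual input for the LLM, which would
    then produce a comprehension to be encoded as an embedding."""
    return ". ".join(f"{h} {r.replace('_', ' ')} {t}" for h, r, t in subgraph)

kg = [
    ("Inception", "directed_by", "Christopher Nolan"),
    ("Inception", "genre", "science fiction"),
    ("Dunkirk", "directed_by", "Christopher Nolan"),
]

text = verbalize(item_subgraph("Inception", kg))
print(text)
```

The design point is that the LLM never sees graph structure directly; it sees a textual rendering of the item's neighborhood, whose output is then embedded for the recommender.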
arXiv Detail & Related papers (2024-10-16T04:44:34Z)
- Empowering Small-Scale Knowledge Graphs: A Strategy of Leveraging General-Purpose Knowledge Graphs for Enriched Embeddings [3.7759315989669058]
We introduce a framework for enriching embeddings of small-scale domain-specific Knowledge Graphs with well-established general-purpose KGs.
Experimental evaluations demonstrate a notable enhancement, with up to a 44% increase observed in the Hits@10 metric.
This relatively unexplored research direction can catalyze more frequent incorporation of KGs in knowledge-intensive tasks.
arXiv Detail & Related papers (2024-05-17T12:46:23Z)
- Generate-on-Graph: Treat LLM as both Agent and KG in Incomplete Knowledge Graph Question Answering [87.67177556994525]
We propose a training-free method called Generate-on-Graph (GoG) to generate new factual triples while exploring Knowledge Graphs (KGs).
GoG performs reasoning through a Thinking-Searching-Generating framework, which treats the LLM as both Agent and KG in Incomplete KGQA (IKGQA).
arXiv Detail & Related papers (2024-04-23T04:47:22Z)
- Knowledge Graph Large Language Model (KG-LLM) for Link Prediction [43.55117421485917]
We introduce the Knowledge Graph Large Language Model (KG-LLM), a novel framework that leverages large language models (LLMs) for knowledge graph tasks.
We first convert structured knowledge graph data into natural language and then use these natural language prompts to fine-tune LLMs.
To show the efficacy of the KG-LLM Framework, we fine-tune three leading LLMs within this framework, including Flan-T5, LLaMa2 and Gemma.
arXiv Detail & Related papers (2024-03-12T04:47:29Z)
- ReasoningLM: Enabling Structural Subgraph Reasoning in Pre-trained Language Models for Question Answering over Knowledge Graph [142.42275983201978]
We propose a subgraph-aware self-attention mechanism to imitate the GNN for performing structured reasoning.
We also adopt an adaptation tuning strategy to adapt the model parameters with 20,000 subgraphs with synthesized questions.
Experiments show that ReasoningLM surpasses state-of-the-art models by a large margin, even with fewer updated parameters and less training data.
arXiv Detail & Related papers (2023-12-30T07:18:54Z)
- Mitigating Large Language Model Hallucinations via Autonomous Knowledge Graph-based Retrofitting [51.7049140329611]
This paper proposes Knowledge Graph-based Retrofitting (KGR) to mitigate factual hallucination during the reasoning process.
Experiments show that KGR can significantly improve the performance of LLMs on factual QA benchmarks.
arXiv Detail & Related papers (2023-11-22T11:08:38Z)
- Let's Chat to Find the APIs: Connecting Human, LLM and Knowledge Graph through AI Chain [21.27256145010061]
We propose a knowledge-guided query clarification approach for API recommendation.
We use a large language model (LLM) guided by knowledge graph (KG) to overcome out-of-vocabulary (OOV) failures.
Our approach is designed as an AI chain that consists of five steps, each handled by a separate LLM call.
arXiv Detail & Related papers (2023-09-28T03:31:01Z)
- Retrieve-Rewrite-Answer: A KG-to-Text Enhanced LLMs Framework for Knowledge Graph Question Answering [16.434098552925427]
We study the KG-augmented language model approach for solving the knowledge graph question answering (KGQA) task.
We propose an answer-sensitive KG-to-Text approach that can transform KG knowledge into well-textualized statements.
arXiv Detail & Related papers (2023-09-20T10:42:08Z)
- Collective Knowledge Graph Completion with Mutual Knowledge Distillation [11.922522192224145]
We study the problem of multi-KG completion, where we focus on maximizing the collective knowledge from different KGs.
We propose a novel method called CKGC-CKD that uses relation-aware graph convolutional network encoder models on both individual KGs and a large fused KG.
Experimental results on multilingual datasets have shown that our method outperforms all state-of-the-art models in the KGC task.
arXiv Detail & Related papers (2023-05-25T09:49:40Z)
- Empowering Language Models with Knowledge Graph Reasoning for Question Answering [117.79170629640525]
We propose the knOwledge REasOning empowered Language Model (OREO-LM).
OREO-LM consists of a novel Knowledge Interaction Layer that can be flexibly plugged into existing Transformer-based LMs.
We show significant performance gains, achieving state-of-the-art results in the Closed-Book setting.
arXiv Detail & Related papers (2022-11-15T18:26:26Z)
- BertNet: Harvesting Knowledge Graphs with Arbitrary Relations from Pretrained Language Models [65.51390418485207]
We propose a new approach of harvesting massive KGs of arbitrary relations from pretrained LMs.
With minimal input of a relation definition, the approach efficiently searches in the vast entity pair space to extract diverse accurate knowledge.
We deploy the approach to harvest KGs of over 400 new relations from different LMs.
arXiv Detail & Related papers (2022-06-28T19:46:29Z)
- Toward Subgraph-Guided Knowledge Graph Question Generation with Graph Neural Networks [53.58077686470096]
Knowledge graph (KG) question generation (QG) aims to generate natural language questions from KGs and target answers.
In this work, we focus on a more realistic setting where we aim to generate questions from a KG subgraph and target answers.
arXiv Detail & Related papers (2020-04-13T15:43:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information above and is not responsible for any consequences of its use.