Enhancing Large Language Models with Pseudo- and Multisource-Knowledge Graphs for Open-ended Question Answering
- URL: http://arxiv.org/abs/2402.09911v1
- Date: Thu, 15 Feb 2024 12:20:02 GMT
- Title: Enhancing Large Language Models with Pseudo- and Multisource-Knowledge Graphs for Open-ended Question Answering
- Authors: Jiaxiang Liu, Tong Zhou, Yubo Chen, Kang Liu, Jun Zhao
- Abstract summary: We propose a framework that combines Pseudo-Graph Generation and Atomic Knowledge Verification.
Compared to the baseline, this approach yields a minimum improvement of 11.5 points in ROUGE-L on open-ended questions.
- Score: 23.88063210973303
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mitigating the hallucinations of Large Language Models (LLMs) and enhancing them is a crucial task. Although some existing methods employ model self-enhancement techniques, they fall short of effectively addressing unknown factual hallucinations. Knowledge Graph (KG) enhancement approaches, in turn, fail to simultaneously address generalization across different KG sources and enhancement of open-ended question answering. To tackle these limitations, we propose a framework that combines Pseudo-Graph Generation and Atomic Knowledge Verification. Pseudo-Graph Generation enables KG-based enhancement of LLMs in the open-ended question-answering setting, while Atomic Knowledge Verification uses atomic-level knowledge querying and verification to achieve generalizability across different KG sources. Compared to the baseline, this approach yields a minimum improvement of 11.5 points in ROUGE-L on open-ended questions; for precise questions, we observe a minimum accuracy improvement of 7.5 points. Moreover, we demonstrate that the framework generalizes across different KG sources. In summary, our results pave the way for enhancing LLMs by incorporating Pseudo- and Multisource-KGs, particularly in the context of open-ended questions.
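As a concrete illustration of the two stages, here is a minimal Python sketch of the pipeline. The `call_llm` client, the prompt wording, the `subject | relation | object` triple format, and the in-memory `kg` set are illustrative assumptions, not the authors' implementation.

```python
from typing import Callable

Triple = tuple[str, str, str]  # (subject, relation, object)

def generate_pseudo_graph(question: str, call_llm: Callable[[str], str]) -> list[Triple]:
    """Stage 1: ask the LLM to guess the KG triples the answer should rest on."""
    prompt = (
        "List the knowledge-graph triples needed to answer the question, "
        "one per line as: subject | relation | object\n"
        f"Question: {question}"
    )
    triples: list[Triple] = []
    for line in call_llm(prompt).splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3:
            triples.append((parts[0], parts[1], parts[2]))
    return triples

def verify_atomic(triples: list[Triple], kg: set[Triple]) -> list[Triple]:
    """Stage 2: check each atomic triple against a KG (a toy in-memory set here).
    Each fact is queried independently, so the step does not depend on any
    particular KG source or schema."""
    return [t for t in triples if t in kg]

def answer(question: str, call_llm: Callable[[str], str], kg: set[Triple]) -> str:
    verified = verify_atomic(generate_pseudo_graph(question, call_llm), kg)
    facts = "\n".join(" ".join(t) for t in verified)
    return call_llm(f"Verified facts:\n{facts}\n\nAnswer the question: {question}")
```

Because verification happens one triple at a time, swapping the `kg` set for a Wikidata or Freebase query function changes nothing upstream; this is the property the abstract describes as generalizability across KG sources.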
Related papers
- GLTW: Joint Improved Graph Transformer and LLM via Three-Word Language for Knowledge Graph Completion [52.026016846945424]
We propose a new method called GLTW, which encodes the structural information of KGs and integrates it into Large Language Models.
Specifically, we introduce an improved Graph Transformer (iGT) that effectively encodes subgraphs with both local and global structural information.
Also, we develop a subgraph-based multi-classification training objective, using all entities within the KG as classification targets, to boost learning efficiency.
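A hedged sketch of what such a multi-classification objective could look like; the pooled subgraph representation and the embedding shapes are assumptions for illustration, not GLTW's actual architecture.

```python
import numpy as np

def entity_logits(subgraph_repr: np.ndarray, entity_table: np.ndarray) -> np.ndarray:
    """subgraph_repr: (d,) pooled subgraph encoding; entity_table: (n_entities, d)."""
    return entity_table @ subgraph_repr  # one logit per entity in the KG

rng = np.random.default_rng(0)
logits = entity_logits(rng.normal(size=16), rng.normal(size=(1000, 16)))
print(logits.shape)  # (1000,): softmax over all entities gives the training target
```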
arXiv Detail & Related papers (2025-02-17T06:02:59Z)
- Decoding on Graphs: Faithful and Sound Reasoning on Knowledge Graphs through Generation of Well-Formed Chains [66.55612528039894]
Knowledge Graphs (KGs) can serve as reliable knowledge sources for question answering (QA).
We present DoG (Decoding on Graphs), a novel framework that facilitates a deep synergy between LLMs and KGs.
Experiments across various KGQA tasks with different background KGs demonstrate that DoG achieves superior and robust performance.
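A rough sketch of decoding restricted to well-formed chains: at each hop the candidate set is limited to relations the KG actually holds for the current entity, so every decoded chain is executable on the graph. The `score` function is a hypothetical stand-in for LLM likelihood, and the single-tail `kg` dict is a simplification of DoG's setup.

```python
from typing import Callable

def decode_chain(start: str, kg: dict, score: Callable[[str], float], max_hops: int = 3):
    """kg: entity -> {relation: tail}; greedily extend the best-scoring hop."""
    entity, chain = start, []
    for _ in range(max_hops):
        options = kg.get(entity, {})
        if not options:
            break  # no outgoing edges: the chain must stop here
        rel = max(options, key=lambda r: score(f"{entity} {r} {options[r]}"))
        chain.append((entity, rel, options[rel]))
        entity = options[rel]
    return chain
```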
arXiv Detail & Related papers (2024-10-24T04:01:40Z)
- Can Knowledge Graphs Make Large Language Models More Trustworthy? An Empirical Study Over Open-ended Question Answering [30.12049172634714]
This study explores whether Knowledge Graphs can make Large Language Models (LLMs) more trustworthy in an open-ended setting.
OKGQA is a benchmark specifically designed to assess LLMs enhanced with Knowledge Graphs under open-ended, real-world question answering scenarios.
OKGQA-P is a benchmark variant to assess model performance when the semantics and structure of KGs are deliberately perturbed and contaminated.
arXiv Detail & Related papers (2024-10-10T16:29:21Z)
- Empowering Small-Scale Knowledge Graphs: A Strategy of Leveraging General-Purpose Knowledge Graphs for Enriched Embeddings [3.7759315989669058]
We introduce a framework for enriching embeddings of small-scale domain-specific Knowledge Graphs with well-established general-purpose KGs.
Experimental evaluations demonstrate a notable enhancement, with up to a 44% increase observed in the Hits@10 metric.
This relatively unexplored research direction can catalyze more frequent incorporation of KGs in knowledge-intensive tasks.
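One plausible reading of the enrichment step, sketched below: domain-KG entities that can be aligned to a general-purpose KG have their embeddings blended with the general ones. The label-based `alignment` map and the blend weight `alpha` are assumptions, not the paper's method.

```python
import numpy as np

def enrich(domain_emb: dict, general_emb: dict, alignment: dict, alpha: float = 0.5) -> dict:
    """alignment maps a domain-KG entity name to its general-KG counterpart."""
    out = dict(domain_emb)
    for d_name, g_name in alignment.items():
        if d_name in domain_emb and g_name in general_emb:
            # Blend the two vectors; unaligned entities keep their original embedding.
            out[d_name] = alpha * domain_emb[d_name] + (1 - alpha) * general_emb[g_name]
    return out

dom = {"aspirin": np.array([1.0, 0.0])}
gen = {"Aspirin": np.array([0.0, 1.0])}
print(enrich(dom, gen, {"aspirin": "Aspirin"}))  # blended embedding for 'aspirin'
```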
arXiv Detail & Related papers (2024-05-17T12:46:23Z)
- Generate-on-Graph: Treat LLM as both Agent and KG in Incomplete Knowledge Graph Question Answering [87.67177556994525]
We propose a training-free method called Generate-on-Graph (GoG) to generate new factual triples while exploring Knowledge Graphs (KGs).
GoG performs reasoning through a Thinking-Searching-Generating framework, which treats the LLM as both agent and KG in incomplete KGQA (IKGQA).
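A minimal sketch of such a Thinking-Searching-Generating loop, assuming a hypothetical `llm` completion function and a `kg_lookup` over the incomplete KG; the action format and stop condition are illustrative, not GoG's exact protocol.

```python
from typing import Callable, Optional

def gog_answer(question: str, llm: Callable[[str], str],
               kg_lookup: Callable[[str, str], Optional[str]], max_steps: int = 5) -> str:
    notes = []
    for _ in range(max_steps):
        # Thinking: decide which fact is still missing, or answer directly.
        thought = llm(f"Question: {question}\nKnown facts: {notes}\n"
                      "Reply 'ANSWER: <answer>' if you can answer; otherwise name "
                      "the missing fact as 'subject | relation'.")
        if thought.startswith("ANSWER:"):
            return thought[len("ANSWER:"):].strip()
        parts = [p.strip() for p in thought.split("|")]
        if len(parts) < 2:
            continue  # malformed step: ask again
        subj, rel = parts[0], parts[1]
        # Searching: consult the (incomplete) KG first.
        obj = kg_lookup(subj, rel)
        if obj is None:
            # Generating: let the LLM itself supply the missing triple.
            obj = llm(f"Complete the triple: {subj} | {rel} | ?").strip()
        notes.append((subj, rel, obj))
    return llm(f"Question: {question}\nKnown facts: {notes}\nGive your best answer.")
```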
arXiv Detail & Related papers (2024-04-23T04:47:22Z)
- Knowledge Graph Large Language Model (KG-LLM) for Link Prediction [43.55117421485917]
We introduce the Knowledge Graph Large Language Model (KG-LLM), a novel framework that leverages large language models (LLMs) for knowledge graph tasks.
We first convert structured knowledge graph data into natural language and then use these natural language prompts to fine-tune LLMs.
To show the efficacy of the KG-LLM Framework, we fine-tune three leading LLMs within this framework, including Flan-T5, LLaMa2 and Gemma.
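The KG-to-text step might look like the following sketch, where a path of triples is verbalized into a prompt for link-prediction fine-tuning; the template wording is an assumption, not the paper's exact prompt.

```python
def verbalize_path(path):
    """path: list of (head, relation, tail) triples forming a KG path."""
    return " ".join(f"{h} has relation '{r}' with {t}." for h, r, t in path)

def link_prediction_prompt(path, head, tail):
    """One fine-tuning prompt; the gold yes/no label is attached during training."""
    return (f"{verbalize_path(path)}\n"
            f"Is there a link between {head} and {tail}? Answer yes or no.")

print(link_prediction_prompt(
    [("Paris", "capital_of", "France"), ("France", "member_of", "EU")],
    "Paris", "EU"))
```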
arXiv Detail & Related papers (2024-03-12T04:47:29Z)
- ReasoningLM: Enabling Structural Subgraph Reasoning in Pre-trained Language Models for Question Answering over Knowledge Graph [142.42275983201978]
We propose a subgraph-aware self-attention mechanism to imitate the GNN for performing structured reasoning.
We also adopt an adaptation tuning strategy to adapt the model parameters using 20,000 subgraphs paired with synthesized questions.
Experiments show that ReasoningLM surpasses state-of-the-art models by a large margin, even with fewer updated parameters and less training data.
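A hedged sketch of a subgraph-aware attention mask: entity tokens may attend only to themselves and their subgraph neighbours, so self-attention mimics GNN message passing. The token layout and mask conventions are assumptions, not ReasoningLM's exact design.

```python
import numpy as np

def subgraph_attention_mask(n_question: int, n_entities: int, edges) -> np.ndarray:
    """edges: (i, j) pairs over entity indices 0..n_entities-1; True = may attend."""
    n = n_question + n_entities
    mask = np.zeros((n, n), dtype=bool)
    mask[:n_question, :] = True          # question tokens see everything
    mask[:, :n_question] = True          # and are visible to every token
    idx = np.arange(n_question, n)
    mask[idx, idx] = True                # entity tokens always see themselves
    for a, b in edges:
        i, j = n_question + a, n_question + b
        mask[i, j] = mask[j, i] = True   # neighbouring entities exchange information
    return mask

print(subgraph_attention_mask(2, 3, [(0, 1)]).astype(int))
```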
arXiv Detail & Related papers (2023-12-30T07:18:54Z)
- Mitigating Large Language Model Hallucinations via Autonomous Knowledge Graph-based Retrofitting [51.7049140329611]
This paper proposes Knowledge Graph-based Retrofitting (KGR) to mitigate factual hallucination during the reasoning process.
Experiments show that KGR can significantly improve the performance of LLMs on factual QA benchmarks.
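A retrofitting loop in the spirit of KGR might be sketched as follows; the claim-extraction prompt and the boolean `kg_check` verifier are illustrative assumptions, not the paper's exact pipeline.

```python
from typing import Callable

def retrofit_answer(question: str, llm: Callable[[str], str],
                    kg_check: Callable[[str], bool]) -> str:
    """Draft an answer, extract its claims, verify each against the KG, then revise."""
    draft = llm(f"Answer the question: {question}")
    claims = llm(f"List the factual claims in this answer, one per line:\n{draft}")
    bad = [c for c in claims.splitlines() if c.strip() and not kg_check(c.strip())]
    if not bad:
        return draft  # every claim survived KG verification
    return llm(f"Question: {question}\nDraft answer: {draft}\n"
               f"These claims failed KG verification: {bad}\n"
               "Rewrite the answer without the unsupported claims.")
```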
arXiv Detail & Related papers (2023-11-22T11:08:38Z)
- Retrieve-Rewrite-Answer: A KG-to-Text Enhanced LLMs Framework for Knowledge Graph Question Answering [16.434098552925427]
We study the KG-augmented language model approach for solving the knowledge graph question answering (KGQA) task.
We propose an answer-sensitive KG-to-Text approach that can transform KG knowledge into well-textualized statements.
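The three-step pipeline could be sketched as below, with a hypothetical `retrieve` function and prompt wording; the paper's answer-sensitive rewriting is reduced here to a single generic rewrite prompt.

```python
from typing import Callable

def retrieve_rewrite_answer(question: str,
                            retrieve: Callable[[str], list],
                            llm: Callable[[str], str]) -> str:
    """retrieve returns (head, relation, tail) triples relevant to the question."""
    triples = retrieve(question)
    # Rewrite: turn raw triples into fluent statements before prompting.
    lines = "\n".join(f"{h} | {r} | {t}" for h, r, t in triples)
    passage = llm(f"Rewrite these facts as fluent sentences:\n{lines}")
    # Answer: condition the final generation on the textualized knowledge.
    return llm(f"Context: {passage}\nQuestion: {question}\nAnswer:")
```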
arXiv Detail & Related papers (2023-09-20T10:42:08Z)
- Empowering Language Models with Knowledge Graph Reasoning for Question Answering [117.79170629640525]
We propose the knOwledge REasOning empowered Language Model (OREO-LM).
OREO-LM consists of a novel Knowledge Interaction Layer that can be flexibly plugged into existing Transformer-based LMs.
We show significant performance gains, achieving state-of-the-art results in the Closed-Book setting.
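A minimal sketch of a pluggable knowledge-interaction layer, assuming simple dot-product attention over an entity-embedding memory; OREO-LM's actual layer and its interaction with the KG are more involved.

```python
import numpy as np

def knowledge_interaction(hidden: np.ndarray, entities: np.ndarray) -> np.ndarray:
    """hidden: (seq_len, d) LM states; entities: (n_entities, d) KG embeddings."""
    scores = hidden @ entities.T / np.sqrt(hidden.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)   # attention weights over the entity memory
    return hidden + w @ entities         # residual knowledge injection

out = knowledge_interaction(np.zeros((4, 8)), np.ones((10, 8)))
print(out.shape)  # (4, 8): shape-preserving, so the layer drops into any Transformer stack
```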
arXiv Detail & Related papers (2022-11-15T18:26:26Z)