Think-on-Graph 2.0: Deep and Faithful Large Language Model Reasoning with Knowledge-guided Retrieval Augmented Generation
- URL: http://arxiv.org/abs/2407.10805v5
- Date: Tue, 8 Oct 2024 13:19:41 GMT
- Title: Think-on-Graph 2.0: Deep and Faithful Large Language Model Reasoning with Knowledge-guided Retrieval Augmented Generation
- Authors: Shengjie Ma, Chengjin Xu, Xuhui Jiang, Muzhi Li, Huaren Qu, Cehao Yang, Jiaxin Mao, Jian Guo
- Abstract summary: Think-on-Graph 2.0 (ToG-2) is a hybrid RAG framework that iteratively retrieves information from both unstructured and structured knowledge sources.
ToG-2 alternates between graph retrieval and context retrieval to search for in-depth clues relevant to the question.
Extensive experiments show that ToG-2 achieves state-of-the-art (SOTA) performance on 6 out of 7 knowledge-intensive datasets with GPT-3.5.
- Score: 14.448198170932226
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Retrieval-augmented generation (RAG) has enhanced large language models (LLMs) by using knowledge retrieval to address knowledge gaps. However, existing RAG approaches often fail to ensure the depth and completeness of the information retrieved, which is essential for complex reasoning tasks. In this work, we present Think-on-Graph 2.0 (ToG-2), a hybrid RAG framework that iteratively retrieves information from both unstructured and structured knowledge sources in a tightly integrated manner. Specifically, ToG-2 leverages knowledge graphs (KGs) to connect documents via entities, facilitating deep and knowledge-guided context retrieval. Simultaneously, it uses documents as entity contexts to enable precise and efficient graph retrieval. ToG-2 alternates between graph retrieval and context retrieval to search for in-depth clues relevant to the question, enabling LLMs to generate accurate answers. We conduct a series of experiments to demonstrate the following advantages of ToG-2: (1) ToG-2 tightly integrates context retrieval and graph retrieval, enhancing context retrieval through the KG while enabling reliable graph retrieval based on contexts; (2) it achieves deep and faithful reasoning in LLMs through an iterative knowledge retrieval process that integrates contexts and the KG; and (3) ToG-2 is training-free and compatible with various LLMs as a plug-and-play solution. Extensive experiments show that ToG-2 achieves state-of-the-art (SOTA) performance on 6 out of 7 knowledge-intensive datasets with GPT-3.5, and can elevate the performance of smaller models (e.g., LLAMA-2-13B) to the level of GPT-3.5's direct reasoning.
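To make the alternating loop concrete, here is a minimal, self-contained sketch of a ToG-2-style iteration; the toy KG, the toy corpus, and the word-overlap test standing in for the LLM's sufficiency judgment are all illustrative inventions, not the paper's implementation.

```python
# A minimal sketch of a ToG-2-style loop. The toy KG/corpus and the
# word-overlap "sufficiency" test (an LLM judgment in ToG-2 proper)
# are illustrative stand-ins, not the paper's implementation.
import string

KG = {  # entity -> list of (relation, tail_entity) edges
    "Marie Curie": [("spouse", "Pierre Curie"), ("field", "radioactivity")],
    "Pierre Curie": [("award", "Nobel Prize in Physics")],
}
CORPUS = {  # entity -> passages that mention it (entity-linked documents)
    "Marie Curie": ["Marie Curie pioneered research on radioactivity."],
    "Pierre Curie": ["Pierre Curie shared the 1903 Nobel Prize in Physics."],
}

def tokens(text):
    table = str.maketrans("", "", string.punctuation)
    return set(text.lower().translate(table).split())

def tog2_style_search(question, seed_entities, max_hops=2):
    entities, clues = list(seed_entities), []
    for _ in range(max_hops):
        # Graph retrieval: expand one hop on the KG from the current frontier.
        triples = [(e, r, t) for e in entities for r, t in KG.get(e, [])]
        # Context retrieval: gather passages attached to the frontier entities.
        clues += [p for e in entities for p in CORPUS.get(e, []) if p not in clues]
        # Sufficiency check (crude overlap here; an LLM call in ToG-2).
        if any(len(tokens(question) & tokens(c)) >= 2 for c in clues):
            break
        # Advance the frontier to the newly discovered tail entities.
        entities = [t for _, _, t in triples]
    return clues  # evidence handed to the LLM for final answer generation

print(tog2_style_search("Which prize did Marie Curie's spouse win?", ["Marie Curie"]))
```

In ToG-2 proper, the LLM also prunes the entity frontier at each step; the sketch replaces that with blind one-hop expansion to stay self-contained.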
Related papers
- GFM-RAG: Graph Foundation Model for Retrieval Augmented Generation [84.41557981816077] (2025-02-03)
We introduce GFM-RAG, a novel graph foundation model (GFM) for retrieval augmented generation.
GFM-RAG is powered by an innovative graph neural network that reasons over graph structure to capture complex query-knowledge relationships.
It achieves state-of-the-art performance while maintaining efficiency and alignment with neural scaling laws.
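The following toy sketch conveys only the underlying intuition of reasoning over graph structure, not GFM-RAG's actual GNN: initial query-match scores are propagated over a small document graph by a few rounds of message passing, and documents are ranked by the smoothed score. The graph, scores, and damping factor are invented.

```python
# Not GFM-RAG's model: just the intuition of propagating query relevance
# over graph structure via message passing, then ranking by the result.
graph = {"d1": ["d2"], "d2": ["d1", "d3"], "d3": ["d2"]}  # toy doc-link graph
seed = {"d1": 1.0, "d2": 0.2, "d3": 0.0}                  # initial query match

def propagate(graph, scores, rounds=2, damping=0.5):
    for _ in range(rounds):
        scores = {n: (1 - damping) * scores[n]
                     + damping * sum(scores[m] for m in nbrs) / len(nbrs)
                  for n, nbrs in graph.items()}
    return scores

print(sorted(propagate(graph, seed).items(), key=lambda kv: -kv[1]))
```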
- CG-RAG: Research Question Answering by Citation Graph Retrieval-Augmented LLMs [9.718354494802002] (2025-01-25)
Contextualized Graph Retrieval-Augmented Generation (CG-RAG) is a novel framework that integrates sparse and dense retrieval signals within graph structures.
First, we propose a contextual graph representation for citation graphs, effectively capturing both explicit and implicit connections within and across documents.
Second, we introduce Lexical-Semantic Graph Retrieval (LeSeGR), which seamlessly integrates sparse and dense retrieval signals with graph encoding.
Third, we present a context-aware generation strategy that utilizes the retrieved graph-structured information to generate precise and contextually enriched responses.
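As a hedged illustration of the sparse-dense fusion idea: LeSeGR integrates the two signals inside a learned graph encoder, whereas the toy version below just combines them linearly and smooths the result along citation edges. Every function, document, and weight here is invented.

```python
# Toy lexical-semantic fusion over a citation graph (illustrative only;
# LeSeGR fuses these signals inside a learned graph encoder).

def lexical(query, doc):  # sparse signal: term overlap, a stand-in for BM25
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q)

def semantic(qv, dv):  # dense signal: cosine similarity of embeddings
    dot = sum(a * b for a, b in zip(qv, dv))
    nq = sum(a * a for a in qv) ** 0.5
    nd = sum(b * b for b in dv) ** 0.5
    return dot / (nq * nd) if nq and nd else 0.0

def fused_scores(query, qv, docs, cites, alpha=0.5, beta=0.3):
    # docs: id -> (text, embedding); cites: id -> list of cited doc ids
    base = {i: alpha * lexical(query, t) + (1 - alpha) * semantic(qv, v)
            for i, (t, v) in docs.items()}
    # Smoothing along citation edges lets cited neighbors lend evidence.
    return {i: (1 - beta) * s
               + beta * max((base.get(c, 0.0) for c in cites.get(i, [])), default=0.0)
            for i, s in base.items()}

docs = {"p1": ("graph retrieval for QA", [1.0, 0.0]),
        "p2": ("dense passage retrieval", [0.8, 0.6])}
print(fused_scores("graph QA", [0.9, 0.1], docs, {"p2": ["p1"]}))
```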
- Harnessing Large Language Models for Knowledge Graph Question Answering via Adaptive Multi-Aspect Retrieval-Augmentation [81.18701211912779] (2024-12-24)
We introduce an Adaptive Multi-Aspect Retrieval-Augmentation over KGs (Amar) framework.
This method retrieves knowledge including entities, relations, and subgraphs, and converts each piece of retrieved text into prompt embeddings.
Our method has achieved state-of-the-art performance on two common datasets.
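The "retrieved text into prompt embeddings" step can be pictured as a learned projection from the retriever's embedding space into the LLM's input space. The sketch below assumes arbitrary dimensions and a single linear layer; it is not Amar's architecture.

```python
# Shapes and the single linear projection are assumptions for illustration.
import torch
import torch.nn as nn

class RetrievalToSoftPrompt(nn.Module):
    """Project embeddings of retrieved entities/relations/subgraph
    verbalizations into the LLM embedding space and prepend them."""
    def __init__(self, retriever_dim=384, llm_dim=2048):
        super().__init__()
        self.proj = nn.Linear(retriever_dim, llm_dim)

    def forward(self, retrieved, question_embeds):
        # retrieved: (num_pieces, retriever_dim); question_embeds: (T, llm_dim)
        soft_prompts = self.proj(retrieved)                   # (num_pieces, llm_dim)
        return torch.cat([soft_prompts, question_embeds], 0)  # prompt-prefixed input

m = RetrievalToSoftPrompt()
print(m(torch.randn(3, 384), torch.randn(10, 2048)).shape)  # torch.Size([13, 2048])
```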
- SimGRAG: Leveraging Similar Subgraphs for Knowledge Graphs Driven Retrieval-Augmented Generation [6.568733377722896] (2024-12-17)
We propose a novel Similar Graph Enhanced Retrieval-Augmented Generation (SimGRAG) method.
It effectively addresses the challenge of aligning query texts and knowledge graphs.
SimGRAG outperforms state-of-the-art KG-driven RAG methods in question answering and fact verification.
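A toy illustration of aligning a query-derived pattern with KG subgraphs follows; plain string similarity stands in for SimGRAG's graph semantic distance, and the data are invented.

```python
# String similarity is a crude stand-in for SimGRAG's graph semantic
# distance; the pattern and KG triples are invented for illustration.
from difflib import SequenceMatcher

def sim(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def best_matches(pattern, triples, k=2):
    # pattern and triples are (head, relation, tail); "?" matches anything.
    def score(t):
        return sum(1.0 if p == "?" else sim(p, x) for p, x in zip(pattern, t)) / 3
    return sorted(triples, key=score, reverse=True)[:k]

kg = [("Paris", "capital_of", "France"),
      ("Lyon", "located_in", "France"),
      ("Paris", "located_in", "Île-de-France")]
print(best_matches(("Paris", "capital of", "?"), kg, k=1))
```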
- G-RAG: Knowledge Expansion in Material Science [0.0] (2024-11-21)
Graph RAG integrates graph databases to enhance the retrieval process.
We implement an agent-based parsing technique to achieve a more detailed representation of the documents.
- Don't Forget to Connect! Improving RAG with Graph-based Reranking [26.433218248189867] (2024-05-28)
We introduce G-RAG, a reranker based on graph neural networks (GNNs) between the retriever and reader in RAG.
Our method combines connections between documents with semantic information (via Abstract Meaning Representation (AMR) graphs) to provide a context-informed ranker for RAG.
G-RAG outperforms state-of-the-art approaches while having a smaller computational footprint.
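In miniature, the reranking intuition looks like the sketch below: documents that share concepts (which G-RAG derives from AMR graphs, here supplied directly) lend each other support on top of the retriever's base score. A single hand-written aggregation step replaces the GNN.

```python
# One hand-written aggregation step standing in for G-RAG's GNN; the
# shared-concept edges are given directly rather than derived from AMR.
def rerank(scores, concepts, gamma=0.3):
    # scores: doc -> retriever relevance; concepts: doc -> set of concepts
    docs = list(scores)
    def support(d):
        nbrs = [e for e in docs if e != d and concepts[d] & concepts[e]]
        return sum(scores[e] for e in nbrs) / len(nbrs) if nbrs else 0.0
    return sorted(docs, key=lambda d: scores[d] + gamma * support(d), reverse=True)

scores = {"a": 0.9, "b": 0.5, "c": 0.6}
concepts = {"a": {"curie", "nobel"}, "b": {"nobel"}, "c": {"paris"}}
print(rerank(scores, concepts))  # "b" overtakes "c" via its link with "a"
```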
- Generate-on-Graph: Treat LLM as both Agent and KG in Incomplete Knowledge Graph Question Answering [87.67177556994525] (2024-04-23)
We propose a training-free method called Generate-on-Graph (GoG) to generate new factual triples while exploring Knowledge Graphs (KGs).
GoG performs reasoning through a Thinking-Searching-Generating framework, which treats the LLM as both agent and KG in incomplete KGQA (IKGQA).
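A schematic of that loop, with toy stand-ins for the LLM calls: the KG is searched first, and when it lacks a fact the model "generates" a candidate triple from internal knowledge. All data and helper names are invented.

```python
# Toy stand-ins for GoG's Thinking-Searching-Generating loop.
KG = {("Marie Curie", "spouse"): "Pierre Curie"}  # deliberately incomplete KG

def llm_generate_fact(head, relation):
    # Stand-in for prompting an LLM to produce a missing triple from
    # its internal knowledge (the "Generating" step).
    internal = {("Pierre Curie", "award"): "Nobel Prize in Physics"}
    return internal.get((head, relation))

def resolve(head, relation_path):
    entity = head
    for rel in relation_path:                      # Thinking: a decomposed path
        tail = KG.get((entity, rel))               # Searching the KG first
        if tail is None:
            tail = llm_generate_fact(entity, rel)  # Generating when the KG is incomplete
        if tail is None:
            return None
        entity = tail
    return entity

print(resolve("Marie Curie", ["spouse", "award"]))  # Nobel Prize in Physics
```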
- GraphAdapter: Tuning Vision-Language Models With Dual Knowledge Graph [63.81641578763094] (2023-09-24)
Adapter-style efficient transfer learning (ETL) has shown excellent performance in tuning vision-language models (VLMs).
We propose an effective adapter-style tuning strategy, dubbed GraphAdapter, which enhances the textual adapter by explicitly modeling dual-modality structural knowledge.
In particular, a dual knowledge graph is established with two sub-graphs, a textual knowledge sub-graph and a visual knowledge sub-graph, whose nodes and edges represent the semantics/classes and their correlations in the two modalities, respectively.
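One piece of this design, blending each class's feature with its graph neighbors, can be caricatured as follows (assumed shapes, random data, a single modality; not GraphAdapter's full, learned design):

```python
# Assumed shapes and random data, one modality only; GraphAdapter builds
# such graphs for both text and vision and learns the adapter end to end.
import torch

def graph_adapt(feats, adj, alpha=0.7):
    # feats: (C, D) per-class features; adj: (C, C) class-correlation graph
    adj = adj / adj.sum(dim=1, keepdim=True)          # row-normalize edges
    return alpha * feats + (1 - alpha) * adj @ feats  # blend self + neighbors

feats, adj = torch.randn(5, 512), torch.rand(5, 5)
print(graph_adapt(feats, adj).shape)  # torch.Size([5, 512])
```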
- Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text-Attributed Graph Representation Learning [51.90524745663737] (2023-05-31)
A key innovation is the use of LLM-generated explanations as features, which boost GNN performance on downstream tasks.
Our method achieves state-of-the-art results on well-established TAG datasets.
Our method significantly speeds up training, achieving a 2.88 times improvement over the closest baseline on ogbn-arxiv.
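The feature-construction step itself is simple to picture: embed each node's LLM-written explanation and concatenate it with the node's original text embedding before the GNN. The dimensions and random tensors below are stand-ins.

```python
# Random tensors stand in for real text/explanation embeddings.
import torch

def build_node_features(text_embs, expl_embs):
    # text_embs: (N, d1) node-text embeddings; expl_embs: (N, d2)
    # embeddings of LLM-generated explanations for the same nodes.
    return torch.cat([text_embs, expl_embs], dim=1)  # (N, d1 + d2) GNN input

x = build_node_features(torch.randn(4, 128), torch.randn(4, 64))
print(x.shape)  # torch.Size([4, 192])
```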
- Text-Augmented Open Knowledge Graph Completion via Pre-Trained Language Models [53.09723678623779] (2023-05-24)
We propose TAGREAL to automatically generate quality query prompts and retrieve support information from large text corpora.
The results show that TAGREAL achieves state-of-the-art performance on two benchmark datasets.
We find that TAGREAL has superb performance even with limited training data, outperforming existing embedding-based, graph-based, and PLM-based methods.
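The prompt-construction step can be sketched as verbalizing an incomplete triple into a cloze query and prepending retrieved support text; the template below is invented, whereas TAGREAL mines and optimizes its prompts automatically.

```python
# The template here is invented; TAGREAL mines/optimizes prompts and
# retrieves its support text from large corpora automatically.
TEMPLATES = {"place_of_birth": "{head} was born in [MASK]."}

def build_prompt(head, relation, support_sentences):
    cloze = TEMPLATES[relation].format(head=head)
    return " ".join(support_sentences + [cloze])  # support text + cloze query

print(build_prompt("Marie Curie", "place_of_birth",
                   ["Marie Curie was a Polish and French physicist."]))
```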