TOBUGraph: Knowledge Graph-Based Retrieval for Enhanced LLM Performance Beyond RAG
- URL: http://arxiv.org/abs/2412.05447v2
- Date: Tue, 01 Apr 2025 14:03:15 GMT
- Title: TOBUGraph: Knowledge Graph-Based Retrieval for Enhanced LLM Performance Beyond RAG
- Authors: Savini Kashmira, Jayanaka L. Dantanarayana, Joshua Brodsky, Ashish Mahendra, Yiping Kang, Krisztian Flautner, Lingjia Tang, Jason Mars
- Abstract summary: TOBUGraph is a graph-based retrieval framework that first constructs a knowledge graph from unstructured data. It extracts structured knowledge and diverse relationships among data, going beyond RAG's text-to-text similarity. We demonstrate TOBUGraph's effectiveness in TOBU, a real-world application in production for personal memory organization and retrieval.
- Score: 3.8704987495086542
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Retrieval-Augmented Generation (RAG) is one of the leading and most widely used techniques for enhancing LLM retrieval capabilities, but it still faces significant limitations in commercial use cases. RAG primarily relies on query-chunk text-to-text similarity in the embedding space for retrieval and can fail to capture deeper semantic relationships across chunks, is highly sensitive to chunking strategies, and is prone to hallucinations. To address these challenges, we propose TOBUGraph, a graph-based retrieval framework that first constructs a knowledge graph from unstructured data dynamically and automatically. Using LLMs, TOBUGraph extracts structured knowledge and diverse relationships among data, going beyond RAG's text-to-text similarity. Retrieval is achieved through graph traversal, leveraging the extracted relationships and structures to enhance retrieval accuracy, eliminating the need for chunking configurations while reducing hallucination. We demonstrate TOBUGraph's effectiveness in TOBU, a real-world application in production for personal memory organization and retrieval. Our evaluation using real user data demonstrates that TOBUGraph outperforms multiple RAG implementations in both precision and recall, significantly improving user experience through more accurate retrieval.
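The abstract outlines two concrete stages: LLM-driven construction of a knowledge graph from unstructured data, and retrieval by traversing that graph rather than by chunk-embedding similarity. The paper's implementation is not shown on this page, so the following is a minimal sketch of the pattern only, with a hypothetical `extract_triples` placeholder standing in for the LLM extraction call and an invented personal-memory schema; none of these names come from TOBUGraph itself.

```python
# Minimal sketch of graph-based retrieval in the style the abstract
# describes: build a knowledge graph from unstructured text, then answer
# queries by graph traversal instead of chunk-embedding similarity.
# `extract_triples` is a hypothetical stand-in for an LLM call; the
# schema below is invented and is NOT TOBUGraph's actual interface.
import networkx as nx

def extract_triples(text: str) -> list[tuple[str, str, str]]:
    """Placeholder for an LLM call returning (head, relation, tail)
    triples extracted from `text`; hard-coded here for illustration."""
    return [
        ("trip_to_rome", "happened_in", "2023"),
        ("trip_to_rome", "with_person", "Alice"),
        ("Alice", "is_a", "friend"),
    ]

def build_graph(documents: list[str]) -> nx.DiGraph:
    graph = nx.DiGraph()
    for doc in documents:
        for head, relation, tail in extract_triples(doc):
            graph.add_edge(head, tail, relation=relation)
    return graph

def retrieve(graph: nx.DiGraph, entity: str, hops: int = 2) -> list[tuple]:
    """Collect all facts within `hops` of `entity` as retrieval context."""
    if entity not in graph:
        return []
    nearby = nx.ego_graph(graph.to_undirected(as_view=True), entity,
                          radius=hops).nodes
    return [(u, d["relation"], v) for u, v, d in graph.edges(data=True)
            if u in nearby and v in nearby]

memories = build_graph(["I visited Rome with Alice in 2023 ..."])
print(retrieve(memories, "Alice"))
```

Because retrieval walks explicit edges rather than comparing chunk embeddings, there is no chunk size or overlap to configure, which is the property the abstract credits for eliminating chunking configurations.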
Related papers
- GraphRAFT: Retrieval Augmented Fine-Tuning for Knowledge Graphs on Graph Databases [0.0]
GraphRAFT is a retrieve-and-reason framework that finetunes LLMs to generate provably correct Cypher queries.
Our method is the first such solution that can be taken off-the-shelf and used on Knowledge Graphs stored in native graph DBs.
arXiv Detail & Related papers (2025-04-07T20:16:22Z)
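The GraphRAFT entry above describes a retrieve-and-reason loop in which a fine-tuned LLM emits Cypher that is executed against a native graph database. Here is a rough sketch of that loop only, using the Neo4j Python driver; the fine-tuned model is mocked by a placeholder, and the connection details and schema are invented for the example, not taken from GraphRAFT.

```python
# Sketch of a retrieve-and-reason loop over a native graph DB: an LLM
# writes a Cypher query, the query runs on Neo4j, and the returned rows
# ground the answer. The model call is mocked; the URI, credentials,
# and Paper/CITES schema are illustrative assumptions.
from neo4j import GraphDatabase

def generate_cypher(question: str) -> str:
    """Placeholder for GraphRAFT's fine-tuned LLM; fixed query here."""
    return ("MATCH (p:Paper)-[:CITES]->(q:Paper) "
            "WHERE q.title = $title RETURN p.title AS title LIMIT 5")

def answer(question: str, title: str) -> list[str]:
    with GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password")) as driver:
        with driver.session() as session:
            records = session.run(generate_cypher(question), title=title)
            return [record["title"] for record in records]

print(answer("Which papers cite TOBUGraph?", "TOBUGraph"))
```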
- RGL: A Graph-Centric, Modular Framework for Efficient Retrieval-Augmented Generation on Graphs [58.10503898336799]
We introduce the RAG-on-Graphs Library (RGL), a modular framework that seamlessly integrates the complete RAG pipeline.
RGL addresses key challenges by supporting a variety of graph formats and integrating optimized implementations for essential components.
Our evaluations demonstrate that RGL not only accelerates the prototyping process but also enhances the performance and applicability of graph-based RAG systems.
arXiv Detail & Related papers (2025-03-25T03:21:48Z)
- Pseudo-Knowledge Graph: Meta-Path Guided Retrieval and In-Graph Text for RAG-Equipped LLM [8.941718961724984]
Pseudo-Knowledge Graph (PKG) framework integrates Meta-path Retrieval, In-graph Text and Vector Retrieval into Large Language Models.
PKG offers a richer knowledge representation and improves accuracy in information retrieval.
arXiv Detail & Related papers (2025-03-01T02:39:37Z)
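Of PKG's three retrieval modes, meta-path retrieval is the most graph-specific: it follows a fixed sequence of relation types through the graph. The sketch below shows the generic idea on a networkx multigraph; the node and relation names are invented and do not reflect PKG's actual schema.

```python
# Illustrative meta-path retrieval: walk a typed graph along a fixed
# sequence of relation types (a "meta-path"), e.g.
# author -[wrote]-> paper -[about]-> topic. The schema is invented.
import networkx as nx

def follow_meta_path(graph: nx.MultiDiGraph, start: str,
                     meta_path: list[str]) -> list[str]:
    """Return all nodes reachable from `start` along `meta_path`."""
    frontier = {start}
    for relation in meta_path:
        frontier = {v
                    for u in frontier
                    for _, v, data in graph.out_edges(u, data=True)
                    if data.get("relation") == relation}
    return sorted(frontier)

g = nx.MultiDiGraph()
g.add_edge("alice", "paper_1", relation="wrote")
g.add_edge("paper_1", "graph_rag", relation="about")
print(follow_meta_path(g, "alice", ["wrote", "about"]))  # ['graph_rag']
```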
- Are Large Language Models In-Context Graph Learners? [31.172657860606297]
Large language models (LLMs) have remarkable in-context reasoning capabilities across a wide range of tasks.
However, they struggle to handle structured data, such as graphs, due to their lack of understanding of non-Euclidean structures.
We show that learning on graph data can be conceptualized as a retrieval-augmented generation (RAG) process.
We propose a series of RAG frameworks to enhance the in-context learning capabilities of LLMs for graph learning tasks.
arXiv Detail & Related papers (2025-02-19T09:14:19Z)
- G-Refer: Graph Retrieval-Augmented Large Language Model for Explainable Recommendation [48.23263809469786]
We propose a framework using graph retrieval-augmented large language models (LLMs) for explainable recommendation.
G-Refer achieves superior performance compared with existing methods in both explainability and stability.
arXiv Detail & Related papers (2025-02-18T06:42:38Z)
- RAG vs. GraphRAG: A Systematic Evaluation and Key Insights [42.31801859160484]
We systematically evaluate Retrieval-Augmented Generation (RAG) and GraphRAG on text-based benchmarks.
Our results highlight the distinct strengths of RAG and GraphRAG across different tasks and evaluation perspectives.
arXiv Detail & Related papers (2025-02-17T02:36:30Z)
- VisRAG: Vision-based Retrieval-augmented Generation on Multi-modality Documents [66.42579289213941]
Retrieval-augmented generation (RAG) is an effective technique that enables large language models to utilize external knowledge sources for generation.
We introduce VisRAG, which tackles this issue by establishing a vision-language model (VLM)-based RAG pipeline.
In this pipeline, instead of first parsing the document to obtain text, the document is embedded directly as an image using a VLM and then retrieved to enhance the VLM's generation.
arXiv Detail & Related papers (2024-10-14T15:04:18Z)
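The VisRAG entry is concrete about its pipeline: each document page is embedded directly as an image by a VLM and retrieved by vector similarity, with no text-parsing step. Below is a minimal sketch of just that retrieval step; `embed_image` and `embed_text` are hypothetical placeholders for the VLM encoders, not VisRAG's actual API.

```python
# Sketch of VisRAG-style retrieval: pages are embedded as images (no
# text parsing) and ranked by cosine similarity to the query embedding.
# The two embed_* functions are placeholders returning random unit
# vectors so the sketch runs end to end.
import numpy as np

def _fake_unit_vector(seed_text: str, dim: int = 512) -> np.ndarray:
    vec = np.random.default_rng(abs(hash(seed_text)) % 2**32).normal(size=dim)
    return vec / np.linalg.norm(vec)

def embed_image(page_image: str) -> np.ndarray:
    """Placeholder for a VLM image encoder."""
    return _fake_unit_vector(page_image)

def embed_text(query: str) -> np.ndarray:
    """Placeholder for the matching VLM text encoder."""
    return _fake_unit_vector(query)

def retrieve_pages(query: str, pages: list[str], k: int = 3) -> list[str]:
    """Rank page images by cosine similarity to the query embedding."""
    page_matrix = np.stack([embed_image(p) for p in pages])
    scores = page_matrix @ embed_text(query)  # cosine: all vectors unit-norm
    return [pages[i] for i in np.argsort(scores)[::-1][:k]]

print(retrieve_pages("budget table", ["page1.png", "page2.png"], k=1))
```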
- Generative Subgraph Retrieval for Knowledge Graph-Grounded Dialog Generation [17.437568540883106]
We propose Dialog generation with Generative Subgraph Retrieval (DialogGSR).
DialogGSR retrieves relevant knowledge subgraphs by directly generating their token sequences on top of language models.
It achieves state-of-the-art performance in knowledge graph-grounded dialog generation, as demonstrated on OpenDialKG and KOMODIS datasets.
arXiv Detail & Related papers (2024-10-12T03:33:42Z)
- Graph Memory Learning: Imitating Lifelong Remembering and Forgetting of Brain Networks [31.554027786868815]
This paper introduces a new concept of graph memory learning: Brain-inspired Graph Memory Learning (BGML).
BGML incorporates a multi-granular hierarchical progressive learning mechanism rooted in feature graph grain learning to mitigate potential conflict between memorization and forgetting.
In addition, to tackle the issue of unreliable structures in newly added incremental information, the paper introduces an information self-assessment ownership mechanism.
arXiv Detail & Related papers (2024-07-27T05:50:54Z)
- AriGraph: Learning Knowledge Graph World Models with Episodic Memory for LLM Agents [19.249596397679856]
AriGraph is a memory graph that integrates semantic and episodic memories while exploring the environment.
We demonstrate that our Ariadne LLM agent effectively handles complex tasks in interactive text-game environments that are difficult even for human players.
arXiv Detail & Related papers (2024-07-05T09:06:47Z)
- GraphReader: Building Graph-based Agent to Enhance Long-Context Abilities of Large Language Models [58.08177466768262]
Long-context capabilities are essential for large language models (LLMs) to tackle complex and long-input tasks.
We introduce GraphReader, a graph-based agent system designed to handle long texts by structuring them into a graph and employing an agent to explore this graph autonomously.
Experimental results on the LV-Eval dataset reveal that GraphReader, using a 4k context window, consistently outperforms GPT-4-128k across context lengths from 16k to 256k by a large margin.
arXiv Detail & Related papers (2024-06-20T17:57:51Z)
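GraphReader's key move, per the entry above, is structuring a long text into a graph of chunks and letting an agent explore it under a small budget instead of packing everything into the context window. The sketch below keeps that control flow but replaces the LLM agent's decisions with simple keyword overlap; all names and parameters are illustrative, not the paper's.

```python
# Sketch of GraphReader-style exploration: split a long text into chunk
# nodes linked in reading order, then greedily walk the graph, keeping
# the chunks most relevant to the question. Keyword overlap stands in
# for the LLM agent's decisions in the real system.
import networkx as nx

def build_chunk_graph(text: str, chunk_size: int = 200) -> nx.Graph:
    graph = nx.Graph()
    for i in range(0, len(text), chunk_size):
        node = i // chunk_size
        graph.add_node(node, text=text[i:i + chunk_size])
        if node > 0:
            graph.add_edge(node - 1, node)  # reading-order link
    return graph

def explore(graph: nx.Graph, question: str, budget: int = 5) -> list[str]:
    """Repeatedly visit the frontier node sharing the most words with
    the question, up to `budget` nodes total."""
    words = set(question.lower().split())
    def score(n):
        return len(words & set(graph.nodes[n]["text"].lower().split()))
    visited, frontier = [], {0}
    while frontier and len(visited) < budget:
        node = max(frontier, key=score)
        frontier.remove(node)
        visited.append(node)
        frontier |= {n for n in graph.neighbors(node) if n not in visited}
    return [graph.nodes[n]["text"] for n in visited]

graph = build_chunk_graph("a very long document about graphs ... " * 40)
print(len(explore(graph, "graphs")))  # visits at most 5 chunk nodes
```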
- A Pure Transformer Pretraining Framework on Text-attributed Graphs [50.833130854272774]
We introduce a feature-centric pretraining perspective by treating graph structure as a prior.
Our framework, Graph Sequence Pretraining with Transformer (GSPT), samples node contexts through random walks.
GSPT can be easily adapted to both node classification and link prediction, demonstrating promising empirical success on various datasets.
arXiv Detail & Related papers (2024-06-19T22:30:08Z)
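The one mechanism the GSPT entry names is sampling node contexts through random walks, which are then fed to a Transformer. That sampling step is easy to illustrate generically; the following is ordinary random-walk sampling on a networkx graph, not the paper's implementation.

```python
# Generic random-walk context sampling, the kind of step the GSPT
# abstract describes: each node's "context" is the node sequence seen
# on a short random walk starting from it.
import random
import networkx as nx

rng = random.Random(0)  # fixed seed keeps the sketch reproducible

def random_walk(graph: nx.Graph, start, length: int = 8) -> list:
    """One uniform random walk of up to `length` nodes from `start`."""
    walk = [start]
    for _ in range(length - 1):
        neighbors = list(graph.neighbors(walk[-1]))
        if not neighbors:
            break
        walk.append(rng.choice(neighbors))
    return walk

# These per-node sequences would then be tokenized and pretrained on
# with a plain Transformer, per the entry above.
g = nx.karate_club_graph()
contexts = {node: random_walk(g, node) for node in g.nodes}
print(contexts[0])
```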
- Challenging the Myth of Graph Collaborative Filtering: a Reasoned and Reproducibility-driven Analysis [50.972595036856035]
We present code that successfully replicates results from six popular and recent graph recommendation models.
We compare these graph models with traditional collaborative filtering models that historically performed well in offline evaluations.
By investigating the information flow from users' neighborhoods, we aim to identify which models are influenced by intrinsic features in the dataset structure.
arXiv Detail & Related papers (2023-08-01T09:31:44Z)
- Retrieval-Enhanced Contrastive Vision-Text Models [61.783728119255365]
We propose to equip vision-text models with the ability to refine their embedding with cross-modal retrieved information from a memory at inference time.
Remarkably, we show that this can be done with a light-weight, single-layer, fusion transformer on top of a frozen CLIP.
Our experiments validate that our retrieval-enhanced contrastive (RECO) training improves CLIP performance substantially on several challenging fine-grained tasks.
arXiv Detail & Related papers (2023-06-12T15:52:02Z)
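The RECO entry pins down the architecture unusually precisely: a lightweight, single-layer fusion transformer on top of a frozen CLIP, refining an embedding with retrieved cross-modal embeddings at inference time. Here is a PyTorch sketch of a module with that shape; the dimensions and the use of `nn.TransformerEncoderLayer` are assumptions, not the paper's exact design.

```python
# Sketch of a RECO-style fusion head: one transformer layer refines a
# frozen CLIP embedding by attending over k retrieved cross-modal
# embeddings. Dimensions and layer choice are illustrative assumptions.
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                                batch_first=True)

    def forward(self, query_emb: torch.Tensor,
                retrieved: torch.Tensor) -> torch.Tensor:
        # Sequence = [query, retrieved_1, ..., retrieved_k]; the refined
        # query embedding is read off the first output position.
        seq = torch.cat([query_emb.unsqueeze(1), retrieved], dim=1)
        return self.layer(seq)[:, 0]

fusion = FusionHead()
query = torch.randn(2, 512)       # frozen CLIP embeddings (batch of 2)
memory = torch.randn(2, 10, 512)  # k=10 retrieved embeddings each
print(fusion(query, memory).shape)  # torch.Size([2, 512])
```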
- Improving Image Recognition by Retrieving from Web-Scale Image-Text Data [68.63453336523318]
We introduce an attention-based memory module, which learns the importance of each retrieved example from the memory.
Compared to existing approaches, our method removes the influence of the irrelevant retrieved examples, and retains those that are beneficial to the input query.
We show that it achieves state-of-the-art accuracies in ImageNet-LT, Places-LT and Webvision datasets.
arXiv Detail & Related papers (2023-04-11T12:12:05Z)
- Relational Memory Augmented Language Models [40.626389607433936]
We present a memory-augmented approach to condition an autoregressive language model on a knowledge graph.
Our approach produces a better language model in terms of perplexity and bits per character.
arXiv Detail & Related papers (2022-01-24T13:25:41Z)
- Video Object Segmentation with Episodic Graph Memory Networks [198.74780033475724]
A graph memory network is developed to realize the novel idea of "learning to update the segmentation model".
We exploit an episodic memory network, organized as a fully connected graph, to store frames as nodes and capture cross-frame correlations by edges.
The proposed graph memory network yields a neat yet principled framework, which generalizes well to both one-shot and zero-shot video object segmentation tasks.
arXiv Detail & Related papers (2020-07-14T13:19:19Z)
- User Memory Reasoning for Conversational Recommendation [68.34475157544246]
We study a conversational recommendation model which dynamically manages users' past (offline) preferences and current (online) requests.
MGConvRex captures human-level reasoning over user memory and uses disjoint training/testing sets of users to enable zero-shot (cold-start) reasoning for recommendation.
arXiv Detail & Related papers (2020-05-30T05:29:23Z)
- PeTra: A Sparsely Supervised Memory Model for People Tracking [50.98911178059019]
We propose PeTra, a memory-augmented neural network designed to track entities in its memory slots.
We empirically compare key modeling choices, finding that we can simplify several aspects of the design of the memory module while retaining strong performance.
PeTra is highly effective in both evaluations, demonstrating its ability to track people in its memory despite being trained with limited annotation.
arXiv Detail & Related papers (2020-05-06T17:45:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.