TERAG: Token-Efficient Graph-Based Retrieval-Augmented Generation
- URL: http://arxiv.org/abs/2509.18667v3
- Date: Mon, 10 Nov 2025 14:28:10 GMT
- Title: TERAG: Token-Efficient Graph-Based Retrieval-Augmented Generation
- Authors: Qiao Xiao, Hong Ting Tsang, Jiaxin Bai,
- Abstract summary: We propose TERAG, a framework designed to build informative graphs at a significantly lower cost. Inspired by HippoRAG, we incorporate Personalized PageRank (PPR) during the retrieval phase. We achieve at least 80% of the accuracy of widely used graph-based RAG methods while consuming only 3%-11% of the output tokens.
- Score: 15.606880864583436
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph-based Retrieval-augmented generation (RAG) has become a widely studied approach for improving the reasoning, accuracy, and factuality of Large Language Models (LLMs). However, many existing graph-based RAG systems overlook the high cost associated with LLM token usage during graph construction, hindering large-scale adoption. To address this, we propose TERAG, a simple yet effective framework designed to build informative graphs at a significantly lower cost. Inspired by HippoRAG, we incorporate Personalized PageRank (PPR) during the retrieval phase, and we achieve at least 80% of the accuracy of widely used graph-based RAG methods while consuming only 3%-11% of the output tokens. With its low token footprint and efficient construction pipeline, TERAG is well-suited for large-scale and cost-sensitive deployment scenarios.
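For readers unfamiliar with the retrieval step mentioned in the abstract, the sketch below shows how Personalized PageRank can be seeded with entities mentioned in a query to rank nodes of a knowledge graph. It is a minimal, hypothetical example built on networkx; the toy graph, node names, and uniform seed weights are illustrative assumptions, not TERAG's actual construction or retrieval pipeline.

```python
# Minimal sketch of PPR-seeded retrieval over an entity graph.
# Illustrative only: the graph, seed weighting, and scoring are assumptions,
# not the implementation described in the TERAG paper.
import networkx as nx


def ppr_retrieve(graph: nx.Graph, query_entities: list[str], top_k: int = 5) -> list[str]:
    """Rank graph nodes by Personalized PageRank, seeded on query entities."""
    # Concentrate the restart probability mass on query entities present in the graph.
    seeds = {e: 1.0 for e in query_entities if e in graph}
    if not seeds:
        return []
    scores = nx.pagerank(graph, alpha=0.85, personalization=seeds)
    # Return the highest-scoring nodes, excluding the seed entities themselves.
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [node for node, _ in ranked if node not in seeds][:top_k]


if __name__ == "__main__":
    g = nx.Graph()
    g.add_edges_from([
        ("Marie Curie", "radium"),
        ("radium", "radioactivity"),
        ("Marie Curie", "Nobel Prize"),
        ("Nobel Prize", "physics"),
    ])
    print(ppr_retrieve(g, ["Marie Curie"]))
```

Seeding the restart distribution on query entities biases the random walk toward their graph neighborhood, which is the property HippoRAG-style retrieval exploits to surface multi-hop-related passages.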
Related papers
- HELP: HyperNode Expansion and Logical Path-Guided Evidence Localization for Accurate and Efficient GraphRAG [53.30561659838455]
Large Language Models (LLMs) often struggle with inherent knowledge boundaries and hallucinations. Retrieval-Augmented Generation (RAG) frequently overlooks structural interdependencies essential for multi-hop reasoning. HELP achieves competitive performance across multiple simple and multi-hop QA benchmarks and up to a 28.8$\times$ speedup over leading graph-based RAG baselines.
arXiv Detail & Related papers (2026-02-24T14:05:29Z) - LinearRAG: Linear Graph Retrieval Augmented Generation on Large-scale Corpora [17.929144506419064]
Retrieval-Augmented Generation (RAG) is widely used to mitigate hallucinations of Large Language Models (LLMs) by leveraging external knowledge. Existing graph-based RAG methods rely on unstable and costly relation extraction for graph construction. We propose LinearRAG, an efficient framework that enables reliable graph construction and precise passage retrieval.
arXiv Detail & Related papers (2025-10-11T08:43:45Z) - G-reasoner: Foundation Models for Unified Reasoning over Graph-structured Knowledge [88.82814893945077]
Large language models (LLMs) excel at complex reasoning but remain limited by static and incomplete parametric knowledge. Recent graph-enhanced RAG (GraphRAG) attempts to bridge this gap by constructing tailored graphs and enabling LLMs to reason on them. G-reasoner is a unified framework that integrates graph and language foundation models for reasoning over diverse graph-structured knowledge.
arXiv Detail & Related papers (2025-09-29T04:38:12Z) - GRIL: Knowledge Graph Retrieval-Integrated Learning with Large Language Models [59.72897499248909]
We propose a novel graph retriever trained end-to-end with Large Language Models (LLMs). Within the extracted subgraph, structural knowledge and semantic features are encoded via soft tokens and the verbalized graph, respectively, which are infused into the LLM together. Our approach consistently achieves state-of-the-art performance, validating the strength of joint graph-LLM optimization for complex reasoning tasks.
arXiv Detail & Related papers (2025-09-20T02:38:00Z) - Zero-shot Graph Reasoning via Retrieval Augmented Framework with LLMs [15.558119182035995]
We propose a new, training-free method, Graph Reasoning via Retrieval Augmented Framework (GRRAF). GRRAF harnesses retrieval-augmented generation (RAG) alongside the code-generation capabilities of large language models (LLMs) to address a wide range of graph reasoning tasks. Experimental evaluations on the GraphInstruct dataset reveal that GRRAF achieves 100% accuracy on most graph reasoning tasks.
arXiv Detail & Related papers (2025-09-16T06:58:58Z) - Graph-R1: Towards Agentic GraphRAG Framework via End-to-end Reinforcement Learning [20.05893083101089]
Graph-R1 is an agentic GraphRAG framework built on end-to-end reinforcement learning (RL). It introduces lightweight knowledge hypergraph construction and models retrieval as a multi-turn agent-environment interaction. Experiments on standard RAG datasets show that Graph-R1 outperforms traditional GraphRAG and RL-enhanced RAG methods in reasoning accuracy, retrieval efficiency, and generation quality.
arXiv Detail & Related papers (2025-07-29T15:01:26Z) - Efficient Knowledge Graph Construction and Retrieval from Unstructured Text for Large-Scale RAG Systems [0.06597195879147556]
We propose a scalable and cost-efficient framework for deploying Graph-based Retrieval Augmented Generation (GraphRAG) in enterprise environments. These results validate the feasibility of deploying GraphRAG systems in real-world, large-scale enterprise applications.
arXiv Detail & Related papers (2025-07-04T00:05:55Z) - Learning Efficient and Generalizable Graph Retriever for Knowledge-Graph Question Answering [75.12322966980003]
Large Language Models (LLMs) have shown strong inductive reasoning ability across various domains. Most existing RAG pipelines rely on unstructured text, limiting interpretability and structured reasoning. Recent studies have explored integrating knowledge graphs with LLMs for knowledge graph question answering. We propose RAPL, a novel framework for efficient and effective graph retrieval in KGQA.
arXiv Detail & Related papers (2025-06-11T12:03:52Z) - When to use Graphs in RAG: A Comprehensive Analysis for Graph Retrieval-Augmented Generation [25.508719115522645]
Graph retrieval-augmented generation (GraphRAG) has emerged as a powerful paradigm for enhancing large language models (LLMs) with external knowledge. Recent studies report that GraphRAG frequently underperforms vanilla RAG on many real-world tasks. This raises a critical question: Is GraphRAG really effective, and in which scenarios do graph structures provide measurable benefits for RAG systems?
arXiv Detail & Related papers (2025-06-06T02:37:47Z) - NodeRAG: Structuring Graph-based RAG with Heterogeneous Nodes [25.173078967881803]
Retrieval-augmented generation (RAG) empowers large language models to access external and private corpora. Current graph-based RAG approaches seldom prioritize the design of graph structures. Inadequately designed graphs not only impede the seamless integration of diverse graph algorithms but also result in workflow inconsistencies. We propose NodeRAG, a graph-centric framework introducing heterogeneous graph structures.
arXiv Detail & Related papers (2025-04-15T18:24:00Z) - RGL: A Graph-Centric, Modular Framework for Efficient Retrieval-Augmented Generation on Graphs [58.10503898336799]
We introduce the RAG-on-Graphs Library (RGL), a modular framework that seamlessly integrates the complete RAG pipeline. RGL addresses key challenges by supporting a variety of graph formats and integrating optimized implementations for essential components. Our evaluations demonstrate that RGL not only accelerates the prototyping process but also enhances the performance and applicability of graph-based RAG systems.
arXiv Detail & Related papers (2025-03-25T03:21:48Z) - Retrieval-Augmented Generation with Graphs (GraphRAG) [84.29507404866257]
Retrieval-augmented generation (RAG) is a powerful technique that enhances downstream task execution by retrieving additional information. Graphs, by their intrinsic "nodes connected by edges" nature, encode massive heterogeneous and relational information. Unlike conventional RAG, the uniqueness of graph-structured data, such as diverse-formatted and domain-specific relational knowledge, poses unique and significant challenges when designing GraphRAG for different domains.
arXiv Detail & Related papers (2024-12-31T06:59:35Z) - A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training manner, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z) - Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z) - Model-Agnostic Graph Regularization for Few-Shot Learning [60.64531995451357]
We present a comprehensive study on graph embedded few-shot learning.
We introduce a graph regularization approach that allows a deeper understanding of the impact of incorporating graph information between labels.
Our approach improves the performance of strong base learners by up to 2% on Mini-ImageNet and 6.7% on ImageNet-FS.
arXiv Detail & Related papers (2021-02-14T05:28:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.