TableRAG: A Retrieval Augmented Generation Framework for Heterogeneous Document Reasoning
- URL: http://arxiv.org/abs/2506.10380v1
- Date: Thu, 12 Jun 2025 06:16:49 GMT
- Title: TableRAG: A Retrieval Augmented Generation Framework for Heterogeneous Document Reasoning
- Authors: Xiaohan Yu, Pu Jian, Chong Chen,
- Abstract summary: Retrieval-Augmented Generation (RAG) has demonstrated considerable effectiveness in open-domain question answering. Existing RAG approaches exhibit critical limitations when applied to heterogeneous documents. We propose TableRAG, a framework that unifies textual understanding and complex manipulations over tabular data. We also develop HeteQA, a novel benchmark designed to evaluate multi-hop heterogeneous reasoning capabilities.
- Score: 3.1480184228320205
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Retrieval-Augmented Generation (RAG) has demonstrated considerable effectiveness in open-domain question answering. However, when applied to heterogeneous documents comprising both textual and tabular components, existing RAG approaches exhibit critical limitations. The prevailing practice of flattening tables and applying chunking strategies disrupts the intrinsic tabular structure, causes information loss, and undermines the reasoning capabilities of LLMs on multi-hop, global queries. To address these challenges, we propose TableRAG, a hybrid framework that unifies textual understanding and complex manipulations over tabular data. TableRAG iteratively operates in four steps: context-sensitive query decomposition, text retrieval, SQL programming and execution, and compositional intermediate answer generation. We also develop HeteQA, a novel benchmark designed to evaluate multi-hop heterogeneous reasoning capabilities. Experimental results demonstrate that TableRAG consistently outperforms existing baselines on both public datasets and our HeteQA, establishing a new state of the art for heterogeneous document question answering. We release TableRAG at https://github.com/yxh-y/TableRAG/tree/main.
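Based only on the four steps named in the abstract, the following is a minimal Python sketch of how such an iterative loop might be wired together. All object and method names here (`llm.decompose`, `text_index.search`, `sql_engine.execute`, the stopping signal) are illustrative assumptions, not the interfaces of the released code at the repository above.

```python
# Hypothetical sketch of TableRAG's iterative four-step loop as described in the
# abstract; names, signatures, and the stopping criterion are assumptions, not
# the released implementation.
from dataclasses import dataclass, field


@dataclass
class TableRAGState:
    question: str
    intermediate_answers: list = field(default_factory=list)


def answer_heterogeneous_question(question, text_index, sql_engine, llm, max_iters=4):
    """Iterate decomposition -> retrieval -> SQL -> composition until resolved."""
    state = TableRAGState(question=question)
    for _ in range(max_iters):
        # 1. Context-sensitive query decomposition: split the remaining question
        #    into textual and/or tabular sub-queries, conditioned on prior answers.
        sub_queries = llm.decompose(state.question, state.intermediate_answers)

        for sub in sub_queries:
            if sub.kind == "text":
                # 2. Text retrieval over the chunked textual portion of the corpus.
                passages = text_index.search(sub.query, top_k=5)
                result = llm.answer_from_passages(sub.query, passages)
            else:
                # 3. SQL programming and execution against the original,
                #    unflattened tables, preserving their structure.
                sql = llm.generate_sql(sub.query, sql_engine.schema())
                result = sql_engine.execute(sql)

            # 4. Compositional intermediate answer generation: accumulate
            #    sub-answers for the next decomposition round.
            state.intermediate_answers.append((sub.query, result))

        final = llm.compose_answer(question, state.intermediate_answers)
        if final is not None:  # assumed signal that the composer is confident
            return final
    return llm.compose_answer(question, state.intermediate_answers)
```

The key design choice this sketch reflects is keeping tables queryable via SQL rather than flattening them into text chunks, so structural operations (filtering, aggregation, joins) are executed rather than approximated by the LLM.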
Related papers
- Respecting Temporal-Causal Consistency: Entity-Event Knowledge Graphs for Retrieval-Augmented Generation [69.45495166424642]
We develop a robust and discriminative QA benchmark to measure temporal, causal, and character consistency understanding in narrative documents. We then introduce Entity-Event RAG (E2RAG), a dual-graph framework that keeps separate entity and event subgraphs linked by a bipartite mapping. Across ChronoQA, our approach outperforms state-of-the-art unstructured and KG-based RAG baselines, with notable gains on causal and character consistency queries.
arXiv Detail & Related papers (2025-06-06T10:07:21Z)
- HD-RAG: Retrieval-Augmented Generation for Hybrid Documents Containing Text and Hierarchical Tables [2.915799083273604]
We introduce HD-RAG, a novel framework that incorporates a row-and-column level table representation. We conduct comprehensive experiments with DocRAGLib, showing that HD-RAG outperforms existing baselines in both retrieval accuracy and QA performance.
arXiv Detail & Related papers (2025-04-13T13:02:33Z)
- GTR: Graph-Table-RAG for Cross-Table Question Answering [53.11230952572134]
We propose the first Graph-Table-RAG framework, namely GTR, which reorganizes table corpora into a heterogeneous graph. GTR exhibits superior cross-table question-answering performance while maintaining high deployment efficiency, demonstrating its real-world practical applicability.
arXiv Detail & Related papers (2025-04-02T04:24:41Z)
- SRAG: Structured Retrieval-Augmented Generation for Multi-Entity Question Answering over Wikipedia Graph [10.297615455470133]
Multi-entity question answering (MEQA) poses significant challenges for large language models. This paper introduces a structured RAG framework that organizes extracted entities into relational tables. Experiments on Wikipedia-based multi-entity QA tasks demonstrate that SRAG significantly outperforms state-of-the-art long-context LLMs.
arXiv Detail & Related papers (2025-03-03T09:37:33Z)
- Mixture of Structural-and-Textual Retrieval over Text-rich Graph Knowledge Bases [78.62158923194153]
Text-rich Graph Knowledge Bases (TG-KBs) have become increasingly crucial for answering queries by providing textual and structural knowledge. We propose a Mixture of Structural-and-Textual Retrieval (MoR) to retrieve these two types of knowledge via a Planning-Reasoning-Organizing framework.
arXiv Detail & Related papers (2025-02-27T17:42:52Z)
- PathRAG: Pruning Graph-based Retrieval Augmented Generation with Relational Paths [42.01377074786958]
Retrieval-augmented generation (RAG) improves the response quality of large language models (LLMs) by retrieving knowledge from external databases. We propose PathRAG, which retrieves key relational paths from the indexing graph and converts these paths into textual form for prompting LLMs. PathRAG consistently outperforms state-of-the-art baselines across six datasets and five evaluation dimensions.
arXiv Detail & Related papers (2025-02-18T11:18:55Z)
- QuOTE: Question-Oriented Text Embeddings [8.377715521597292]
QuOTE (Question-Oriented Text Embeddings) is a novel enhancement to retrieval-augmented generation (RAG) systems. Unlike traditional RAG pipelines, QuOTE augments chunks with hypothetical questions that the chunk can potentially answer. We demonstrate that QuOTE significantly enhances retrieval accuracy, including in multi-hop question-answering tasks.
arXiv Detail & Related papers (2025-02-16T03:37:13Z)
- TableRAG: Million-Token Table Understanding with Language Models [53.039560091592215]
TableRAG is a Retrieval-Augmented Generation (RAG) framework specifically designed for LM-based table understanding. TableRAG leverages query expansion combined with schema and cell retrieval to pinpoint crucial information before providing it to the LMs. Our results demonstrate that TableRAG achieves the highest retrieval quality, leading to new state-of-the-art performance on large-scale table understanding.
arXiv Detail & Related papers (2024-10-07T04:15:02Z)
- Beyond Extraction: Contextualising Tabular Data for Efficient Summarisation by Language Models [0.0]
The conventional use of the Retrieval-Augmented Generation architecture has proven effective for retrieving information from diverse documents.
This research introduces an innovative approach to enhance the accuracy of complex table queries in RAG-based systems.
arXiv Detail & Related papers (2024-01-04T16:16:14Z)
- Doc2SoarGraph: Discrete Reasoning over Visually-Rich Table-Text Documents via Semantic-Oriented Hierarchical Graphs [79.0426838808629]
We address the TAT-DQA task, i.e., answering questions over a visually-rich table-text document.
Specifically, we propose a novel Doc2SoarGraph framework with enhanced discrete reasoning capability.
We conduct extensive experiments on TAT-DQA dataset, and the results show that our proposed framework outperforms the best baseline model by 17.73% and 16.91% in terms of Exact Match (EM) and F1 score respectively on the test set.
arXiv Detail & Related papers (2023-05-03T07:30:32Z)
- Mixed-modality Representation Learning and Pre-training for Joint Table-and-Text Retrieval in OpenQA [85.17249272519626]
An optimized OpenQA Table-Text Retriever (OTTeR) is proposed.
We conduct retrieval-centric mixed-modality synthetic pre-training.
OTTeR substantially improves the performance of table-and-text retrieval on the OTT-QA dataset.
arXiv Detail & Related papers (2022-10-11T07:04:39Z)