Can GRPO Boost Complex Multimodal Table Understanding?
- URL: http://arxiv.org/abs/2509.16889v2
- Date: Tue, 23 Sep 2025 02:52:42 GMT
- Title: Can GRPO Boost Complex Multimodal Table Understanding?
- Authors: Xiaoqiang Kang, Shengen Wu, Zimu Wang, Yilin Liu, Xiaobo Jin, Kaizhu Huang, Wei Wang, Yutao Yue, Xiaowei Huang, Qiufeng Wang
- Abstract summary: Table-R1 is a three-stage reinforcement learning framework for multimodal table understanding. It substantially improves the model's table reasoning performance on both held-in and held-out datasets.
- Score: 41.72642230279542
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing table understanding methods face challenges due to complex table structures and intricate logical reasoning. While supervised fine-tuning (SFT) dominates existing research, reinforcement learning (RL), such as Group Relative Policy Optimization (GRPO), has shown promise but struggles with low initial policy accuracy and coarse rewards in tabular contexts. In this paper, we introduce Table-R1, a three-stage RL framework that enhances multimodal table understanding through: (1) a Warm-up stage that elicits initial perception and reasoning capabilities, (2) Perception Alignment GRPO (PA-GRPO), which employs continuous Tree-Edit-Distance Similarity (TEDS) rewards for recognizing table structures and contents, and (3) Hint-Completion GRPO (HC-GRPO), which utilizes fine-grained rewards over the residual steps of hint-guided questions. Extensive experiments demonstrate that Table-R1 substantially boosts the model's table reasoning performance on both held-in and held-out datasets, outperforming both SFT and GRPO by a large margin. Notably, Qwen2-VL-7B with Table-R1 surpasses larger table-specific models (e.g., Table-LLaVA 13B) and even achieves performance comparable to the closed-source GPT-4o on held-in datasets, demonstrating the efficacy of each stage of Table-R1 in overcoming initialization bottlenecks and reward sparsity, thereby advancing robust multimodal table understanding.
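At the core of the PA-GRPO and HC-GRPO stages described above is GRPO's group-relative credit assignment: each sampled response's reward is normalized against its rollout group's statistics, so a continuous reward such as TEDS in [0, 1] yields a dense learning signal. A minimal sketch of that normalization step, not the paper's implementation; the function name and reward values are hypothetical:

```python
import statistics


def grpo_advantages(rewards):
    """Group-relative advantages (GRPO-style): normalize each sampled
    response's reward by the mean and population std of its rollout group."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    if std == 0:
        # All responses scored identically: no relative signal in this group.
        return [0.0 for _ in rewards]
    return [(r - mean) / std for r in rewards]


# A continuous TEDS-like reward in [0, 1] (hypothetical values) gives
# graded, dense advantages rather than a sparse correct/incorrect signal.
group_rewards = [0.92, 0.75, 0.40, 0.81]
print(grpo_advantages(group_rewards))
```

With a binary reward, groups where every rollout fails (or succeeds) collapse to zero advantage; a continuous structural reward like TEDS keeps within-group differences informative, which is the motivation the abstract gives for replacing coarse rewards.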
Related papers
- TableGPT-R1: Advancing Tabular Reasoning Through Reinforcement Learning [28.052232941379884]
TableGPT-R1 is a specialized model built on a systematic Reinforcement Learning framework. Our approach synthesizes difficulty-stratified agentic trajectories for both supervised alignment and RL rollouts. It achieves state-of-the-art performance on authoritative benchmarks.
arXiv Detail & Related papers (2025-12-23T12:30:37Z) - TabR1: Taming GRPO for tabular reasoning LLMs [12.303771262614484]
This paper presents TabR1, the first reasoning LLM for tabular prediction with multi-step reasoning. At its core is Permutation Relative Policy Optimization (PRPO), a simple yet efficient reinforcement learning method. PRPO transforms sparse rewards into dense learning signals and improves generalization.
arXiv Detail & Related papers (2025-10-20T10:22:01Z) - Table-r1: Self-supervised and Reinforcement Learning for Program-based Table Reasoning in Small Language Models [52.94091440130039]
Table reasoning (TR) requires structured reasoning over semi-structured data. Small language models (SLMs) have limited capacity compared to large LMs (LLMs, e.g., GPT-4o). We propose program-based TR (P-TR), which circumvents key limitations of text-based TR (T-TR) by generating executable programs. Experiments on four TR benchmarks demonstrate that Table-r1 outperforms all SLM-based methods.
arXiv Detail & Related papers (2025-06-06T14:52:19Z) - Multimodal Tabular Reasoning with Privileged Structured Information [67.40011423365712]
We introduce TabUlar Reasoning with Bridged infOrmation (Turbo). Turbo benefits from a structure-aware reasoning-trace generator based on DeepSeek-R1. Turbo achieves state-of-the-art performance (+7.2% vs. previous SOTA) across multiple datasets.
arXiv Detail & Related papers (2025-06-04T15:46:30Z) - Reasoning-Table: Exploring Reinforcement Learning for Table Reasoning [24.624844234355734]
Reasoning-Table is the first application of reinforcement learning (RL) to table reasoning, achieving state-of-the-art performance. Reasoning-Table emerges as a robust table reasoning large language model, surpassing larger proprietary models like Claude-3.7-Sonnet by 4.0%.
arXiv Detail & Related papers (2025-06-02T14:18:09Z) - Table-R1: Inference-Time Scaling for Table Reasoning [25.481170375825812]
We develop and evaluate two post-training strategies to enable inference-time scaling. For distillation, we introduce a large-scale dataset of reasoning traces generated by DeepSeek-R1. For RLVR, we propose task-specific verifiable reward functions and apply the GRPO algorithm to obtain the Table-R1-Zero model.
arXiv Detail & Related papers (2025-05-29T16:28:50Z) - Table-R1: Region-based Reinforcement Learning for Table Understanding [34.213738690633896]
We introduce region-based Table-R1, a novel reinforcement learning approach that enhances table understanding. Our method employs Region-Enhanced Supervised Fine-Tuning (RE-SFT) to guide models in identifying relevant table regions. Experiments show that Table-R1 achieves an average performance improvement of 14.36 points across multiple base models.
arXiv Detail & Related papers (2025-05-18T13:40:18Z) - HIPPO: Enhancing the Table Understanding Capability of Large Language Models through Hybrid-Modal Preference Optimization [48.240146108630704]
This paper introduces the HybrId-modal Preference oPtimizatiOn (HIPPO) model, which represents tables using both text and images. Experimental results on table question answering and table fact verification tasks demonstrate the effectiveness of HIPPO.
arXiv Detail & Related papers (2025-02-24T16:50:55Z) - TableRAG: Million-Token Table Understanding with Language Models [53.039560091592215]
TableRAG is a Retrieval-Augmented Generation (RAG) framework specifically designed for LM-based table understanding. TableRAG leverages query expansion combined with schema and cell retrieval to pinpoint crucial information before providing it to the LMs. Our results demonstrate that TableRAG achieves the highest retrieval quality, leading to new state-of-the-art performance on large-scale table understanding.
arXiv Detail & Related papers (2024-10-07T04:15:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.