Vgent: Graph-based Retrieval-Reasoning-Augmented Generation For Long Video Understanding
- URL: http://arxiv.org/abs/2510.14032v1
- Date: Wed, 15 Oct 2025 19:14:58 GMT
- Title: Vgent: Graph-based Retrieval-Reasoning-Augmented Generation For Long Video Understanding
- Authors: Xiaoqian Shen, Wenxuan Zhang, Jun Chen, Mohamed Elhoseiny
- Abstract summary: Vgent is a graph-based retrieval-reasoning-augmented generation framework to enhance LVLMs for long video understanding. We evaluate our framework with various open-source LVLMs on three long-video understanding benchmarks.
- Score: 56.45689495743107
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Understanding and reasoning over long videos pose significant challenges for large video language models (LVLMs) due to the difficulty of processing dense video tokens that exceed the context window and of retaining long-term sequential information. Retrieval-Augmented Generation (RAG) has demonstrated effectiveness in processing long context for Large Language Models (LLMs); however, applying RAG to long video faces challenges such as disrupted temporal dependencies and inclusion of irrelevant information that can hinder accurate reasoning. To address these limitations, we propose Vgent, a novel graph-based retrieval-reasoning-augmented generation framework to enhance LVLMs for long video understanding. Our approach introduces two key innovations: (i) It represents videos by structured graphs with semantic relationships across video clips preserved to improve retrieval effectiveness. (ii) It introduces an intermediate reasoning step to mitigate the reasoning limitation of LVLMs, which leverages structured verification to reduce retrieval noise and facilitate the explicit aggregation of relevant information across clips, resulting in more accurate and context-aware responses. We comprehensively evaluate our framework with various open-source LVLMs on three long-video understanding benchmarks. Our approach yielded an overall performance improvement of $3.0\%\sim 5.4\%$ over base models on MLVU, and outperformed state-of-the-art video RAG methods by $8.6\%$. Our code is publicly available at https://xiaoqian-shen.github.io/Vgent.
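The retrieve-verify-aggregate flow the abstract describes can be illustrated with a minimal sketch. Everything below is a toy stand-in, not the paper's implementation: clips are plain-text captions, the bag-of-words `similarity` substitutes for the learned clip embeddings, and the thresholds and function names are invented for illustration. It shows the three stages: build a graph whose edges link semantically related clips, retrieve seed clips for a query and expand along graph edges to pull in related context, then verify each candidate against the query to drop retrieval noise.

```python
# Toy sketch of graph-based retrieval + verification over video clips.
# All names and thresholds are illustrative assumptions, not Vgent's code.
from collections import Counter
from math import sqrt

STOPWORDS = {"a", "an", "the", "in", "on", "of", "how", "does", "is"}

def _tokens(text: str) -> list[str]:
    return [w for w in text.lower().split() if w not in STOPWORDS]

def similarity(a: str, b: str) -> float:
    """Cosine similarity over bag-of-words counts (stand-in for clip embeddings)."""
    ca, cb = Counter(_tokens(a)), Counter(_tokens(b))
    dot = sum(ca[w] * cb[w] for w in ca)   # Counter returns 0 for missing words
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_clip_graph(clips: list[str], threshold: float = 0.2) -> dict[int, set[int]]:
    """Nodes are clip indices; edges connect semantically related clips."""
    edges: dict[int, set[int]] = {i: set() for i in range(len(clips))}
    for i in range(len(clips)):
        for j in range(i + 1, len(clips)):
            if similarity(clips[i], clips[j]) >= threshold:
                edges[i].add(j)
                edges[j].add(i)
    return edges

def retrieve_and_reason(clips, edges, query, top_k=2, verify_threshold=0.1):
    """Retrieve seed clips, expand along graph edges to preserve related
    context, then verify each candidate against the query to drop noise."""
    scored = sorted(range(len(clips)),
                    key=lambda i: similarity(clips[i], query), reverse=True)
    seeds = scored[:top_k]
    candidates = set(seeds)
    for s in seeds:                       # graph expansion step
        candidates |= edges[s]
    # verification step: keep only candidates still relevant to the query
    return [i for i in sorted(candidates)
            if similarity(clips[i], query) >= verify_threshold]

clips = [
    "a chef chops onions in the kitchen",
    "the chef fries onions in a pan",
    "a dog runs in the park",
    "the chef plates the onion dish",
]
edges = build_clip_graph(clips)
relevant = retrieve_and_reason(clips, edges, "how does the chef cook the onions")
# relevant -> [0, 1, 3]: clip 3 is pulled in via graph edges despite a weaker
# direct match, while the unrelated dog clip (2) is excluded.
```

Clip 3 illustrates why the graph matters: its direct query similarity is low, but its edges to the seed clips recover it as context; plain top-k retrieval would risk missing it or admitting the irrelevant clip instead.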
Related papers
- Video-QTR: Query-Driven Temporal Reasoning Framework for Lightweight Video Understanding [37.682165829414494]
Video-QTR is a lightweight framework that redefines video comprehension as a query-guided reasoning process. We show that Video-QTR achieves state-of-the-art performance while reducing input frame consumption by up to 73%.
arXiv Detail & Related papers (2025-12-10T06:28:00Z)
- VideoMem: Enhancing Ultra-Long Video Understanding via Adaptive Memory Management [17.645183933549458]
VideoMem is a novel framework that models long video understanding as a sequential generation task via adaptive memory management. We show that VideoMem significantly outperforms existing open-source models across diverse benchmarks for ultra-long video understanding tasks.
arXiv Detail & Related papers (2025-12-04T07:42:13Z)
- ReAG: Reasoning-Augmented Generation for Knowledge-based Visual Question Answering [54.72902502486611]
ReAG is a Reasoning-Augmented Multimodal RAG approach that combines coarse- and fine-grained retrieval with a critic model that filters irrelevant passages. ReAG significantly outperforms prior methods, improving answer accuracy and providing interpretable reasoning grounded in retrieved evidence.
arXiv Detail & Related papers (2025-11-27T19:01:02Z)
- AdaVideoRAG: Omni-Contextual Adaptive Retrieval-Augmented Efficient Long Video Understanding [73.60257070465377]
AdaVideoRAG is a novel framework that adapts retrieval based on query complexity using a lightweight intent classifier. Our framework employs an Omni-Knowledge Indexing module to build hierarchical databases from text (captions, ASR, OCR), visual features, and semantic graphs. Experiments demonstrate improved efficiency and accuracy for long-video understanding, with seamless integration into existing MLLMs.
arXiv Detail & Related papers (2025-06-16T15:18:15Z)
- InfLVG: Reinforce Inference-Time Consistent Long Video Generation with GRPO [73.33751812982342]
InfLVG is an inference-time framework that enables coherent long video generation without requiring additional long-form video data. We show that InfLVG can extend video length by up to 9$\times$, achieving strong consistency and semantic fidelity across scenes.
arXiv Detail & Related papers (2025-05-23T07:33:25Z)
- RAVU: Retrieval Augmented Video Understanding with Compositional Reasoning over Graph [3.1671311914949545]
RAVU is a framework for enhanced video understanding via retrieval with reasoning over a spatio-temporal graph. We construct a graph representation of the video, capturing both spatial and temporal relationships between entities. To answer complex queries, we decompose the queries into a sequence of reasoning steps and execute these steps on the graph. Our approach enables more accurate understanding of long videos, particularly for queries that require multi-hop reasoning and tracking objects across frames.
arXiv Detail & Related papers (2025-05-06T04:38:09Z)
- SALOVA: Segment-Augmented Long Video Assistant for Targeted Retrieval and Routing in Long-Form Video Analysis [52.050036778325094]
We introduce SALOVA: Segment-Augmented Video Assistant, a novel video-LLM framework designed to enhance the comprehension of lengthy video content. We present a high-quality collection of 87.8K long videos, each densely captioned at the segment level to enable models to capture scene continuity and maintain rich context. Our framework mitigates the limitations of current video-LMMs by allowing for precise identification and retrieval of relevant video segments in response to queries.
arXiv Detail & Related papers (2024-11-25T08:04:47Z)
- Investigating Video Reasoning Capability of Large Language Models with Tropes in Movies [69.28082193942991]
This paper introduces a novel dataset, Tropes in Movies (TiM), designed as a testbed for exploring two critical yet previously overlooked video reasoning skills.
Utilizing tropes from movie storytelling, TiM evaluates the reasoning capabilities of state-of-the-art LLM-based approaches. To address these deficiencies, we propose Face-Enhanced Viper of Role Interactions (FEVoRI) and Context Query Reduction (ConQueR).
arXiv Detail & Related papers (2024-06-16T12:58:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.