Agentic Very Long Video Understanding
- URL: http://arxiv.org/abs/2601.18157v1
- Date: Mon, 26 Jan 2026 05:20:47 GMT
- Title: Agentic Very Long Video Understanding
- Authors: Aniket Rege, Arka Sadhu, Yuliang Li, Kejie Li, Ramya Korlakai Vinayak, Yuning Chai, Yong Jae Lee, Hyo Jin Kim
- Abstract summary: EGAgent is an enhanced agentic framework centered on entity scene graphs, which represent people, places, objects, and their relationships over time. Our system equips a planning agent with tools for structured search and reasoning over these graphs, as well as hybrid visual and audio search capabilities, enabling detailed, cross-modal, and temporally coherent reasoning. Experiments on the EgoLifeQA and Video-MME (Long) datasets show that our method achieves state-of-the-art performance on EgoLifeQA (57.5%) and competitive performance on Video-MME (Long) (74.1%) for complex longitudinal video understanding tasks.
- Score: 39.34545320553102
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The advent of always-on personal AI assistants, enabled by all-day wearable devices such as smart glasses, demands a new level of contextual understanding, one that goes beyond short, isolated events to encompass the continuous, longitudinal stream of egocentric video. Achieving this vision requires advances in long-horizon video understanding, where systems must interpret and recall visual and audio information spanning days or even weeks. Existing methods, including large language models and retrieval-augmented generation, are constrained by limited context windows and lack the ability to perform compositional, multi-hop reasoning over very long video streams. In this work, we address these challenges through EGAgent, an enhanced agentic framework centered on entity scene graphs, which represent people, places, objects, and their relationships over time. Our system equips a planning agent with tools for structured search and reasoning over these graphs, as well as hybrid visual and audio search capabilities, enabling detailed, cross-modal, and temporally coherent reasoning. Experiments on the EgoLifeQA and Video-MME (Long) datasets show that our method achieves state-of-the-art performance on EgoLifeQA (57.5%) and competitive performance on Video-MME (Long) (74.1%) for complex longitudinal video understanding tasks.
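The abstract's central data structure is concrete enough to sketch. Below is a minimal, hypothetical Python rendering of an entity scene graph: typed entities (people, places, objects) connected by timestamped relations, plus one windowed-search helper of the kind a planning agent's tools might call. The paper does not publish this API, so every name here is an illustrative assumption.

```python
# Hypothetical sketch of an entity scene graph with timestamped relations,
# in the spirit of EGAgent's description (people, places, objects, and their
# relationships over time). All names below are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Entity:
    entity_id: str
    kind: str        # "person" | "place" | "object"
    name: str

@dataclass(frozen=True)
class Relation:
    source: str      # entity_id of the subject
    predicate: str   # e.g. "holds", "talks_to", "located_in"
    target: str      # entity_id of the object
    t_start: float   # seconds into the egocentric stream
    t_end: float

@dataclass
class EntitySceneGraph:
    entities: dict[str, Entity] = field(default_factory=dict)
    relations: list[Relation] = field(default_factory=list)

    def add(self, e: Entity) -> None:
        self.entities[e.entity_id] = e

    def relate(self, r: Relation) -> None:
        self.relations.append(r)

    def neighbors(self, entity_id: str, t0: float, t1: float) -> list[Relation]:
        """One plausible 'structured search' tool: all relations that touch
        an entity and overlap the time window [t0, t1]."""
        return [r for r in self.relations
                if entity_id in (r.source, r.target)
                and r.t_start < t1 and r.t_end > t0]

# Example: "who did I talk to that morning?" becomes a windowed query.
g = EntitySceneGraph()
g.add(Entity("e1", "person", "wearer"))
g.add(Entity("e2", "person", "Alice"))
g.relate(Relation("e1", "talks_to", "e2", t_start=9_000.0, t_end=9_300.0))
print(g.neighbors("e1", t0=8_000.0, t1=10_000.0))
```

In this reading, the planning agent would expose `neighbors`-style queries as callable tools and chain them across entities and time windows to answer multi-hop questions.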
Related papers
- ARC-Hunyuan-Video-7B: Structured Video Comprehension of Real-World Shorts [56.75723197779384]
ARC-Hunyuan-Video is a multimodal model that processes visual, audio, and textual signals end-to-end for structured comprehension. Our model is capable of multi-granularity timestamped video captioning and summarization, open-ended video question answering, temporal video grounding, and video reasoning.
arXiv Detail & Related papers (2025-07-28T15:52:36Z)
- Infinite Video Understanding [50.78256932424239]
We argue that framing Infinite Video Understanding as a blue-sky research objective provides a vital north star for the multimedia research community. We outline the core challenges and key research directions towards achieving this transformative capability.
arXiv Detail & Related papers (2025-07-11T23:07:04Z)
- Deep Video Discovery: Agentic Search with Tool Use for Long-form Video Understanding [60.88843818016968]
Long-form video understanding presents significant challenges due to temporal-spatial complexity and the difficulty of question answering. We propose the Deep Video Discovery (DVD) agent to leverage an agentic search strategy over segmented video clips. Our DVD agent achieves state-of-the-art performance on the challenging LVBench dataset, reaching an accuracy of 74.2%. A hedged sketch of such a clip-level search loop appears after this entry.
arXiv Detail & Related papers (2025-05-23T16:37:36Z)
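Read literally, the DVD summary suggests a retrieve-inspect-refine loop over a clip index. Here is a minimal sketch under that assumption; `search_clips`, `describe_clip`, and `llm` are invented stand-ins, not the paper's actual tools.

```python
# Hedged sketch of an agentic search loop over segmented clips, in the spirit
# of DVD's summary. The three callables are assumptions, not the paper's API.
def agentic_clip_search(question: str, search_clips, describe_clip, llm,
                        max_steps: int = 5) -> str:
    notes: list[str] = []
    query = question
    for _ in range(max_steps):
        clips = search_clips(query, top_k=3)           # retrieve candidates
        notes.extend(describe_clip(c) for c in clips)  # inspect their content
        verdict = llm(
            f"Question: {question}\nEvidence so far: {notes}\n"
            "Reply 'ANSWER: ...' if the evidence suffices, "
            "otherwise 'SEARCH: <refined query>'."
        )
        if verdict.startswith("ANSWER:"):
            return verdict.removeprefix("ANSWER:").strip()
        query = verdict.removeprefix("SEARCH:").strip()
    return llm(f"Best-effort answer.\nQuestion: {question}\nEvidence: {notes}")
```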
- RAVU: Retrieval Augmented Video Understanding with Compositional Reasoning over Graph [3.1671311914949545]
RAVU is a framework for retrieval-augmented video understanding with compositional reasoning over a spatio-temporal graph. We construct a graph representation of the video, capturing both spatial and temporal relationships between entities. To answer complex queries, we decompose the queries into a sequence of reasoning steps and execute these steps on the graph. Our approach enables more accurate understanding of long videos, particularly for queries that require multi-hop reasoning and tracking objects across frames. A toy rendering of this decompose-and-execute pattern appears after this entry.
arXiv Detail & Related papers (2025-05-06T04:38:09Z)
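The decompose-and-execute idea in the RAVU summary can be shown with a toy example. The step vocabulary and the `graph.execute` interface below are illustrative assumptions, not RAVU's published API.

```python
# Toy illustration of RAVU-style compositional reasoning: a complex query is
# decomposed into steps, each executed against a graph of the video. The step
# names and `graph.execute` are illustrative assumptions, not RAVU's API.
def answer_multi_hop(graph) -> str:
    # Decomposition for: "Who did Alice hand the cup to?"
    steps = [
        ("find_entity",   {"name": "Alice"}),
        ("find_entity",   {"name": "cup"}),
        ("find_relation", {"predicate": "hands", "subject": "$0", "object": "$1"}),
    ]
    results = []
    for op, args in steps:
        # "$i" references the output of step i, chaining hops together.
        resolved = {k: results[int(v[1:])]
                    if isinstance(v, str) and v.startswith("$") else v
                    for k, v in args.items()}
        results.append(graph.execute(op, **resolved))
    return str(results[-1])
```

The `$i` placeholders are what make the reasoning compositional: later hops consume the results of earlier ones.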
- SALOVA: Segment-Augmented Long Video Assistant for Targeted Retrieval and Routing in Long-Form Video Analysis [52.050036778325094]
We introduce SALOVA: Segment-Augmented Long Video Assistant, a novel video-LLM framework designed to enhance the comprehension of lengthy video content. We present a high-quality collection of 87.8K long videos, each densely captioned at the segment level to enable models to capture scene continuity and maintain rich context. Our framework mitigates the limitations of current video-LMMs by allowing for precise identification and retrieval of relevant video segments in response to queries. A hedged sketch of such segment routing appears after this entry.
arXiv Detail & Related papers (2024-11-25T08:04:47Z)
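Segment-level routing of the kind the SALOVA summary describes can be approximated with plain embedding retrieval. A minimal sketch, assuming a generic `embed` text-embedding function rather than SALOVA's actual router:

```python
# Hedged sketch of segment routing in the spirit of SALOVA's summary: a long
# video is stored as densely captioned segments, and a query is routed to the
# most relevant ones. `embed` (str -> np.ndarray) is an assumed embedding fn.
import numpy as np

def route_segments(query: str, segment_captions: list[str], embed,
                   top_k: int = 4) -> list[int]:
    q = embed(query)
    caps = np.stack([embed(c) for c in segment_captions])
    # Cosine similarity between the query and every segment caption.
    sims = caps @ q / (np.linalg.norm(caps, axis=1) * np.linalg.norm(q) + 1e-8)
    return [int(i) for i in np.argsort(-sims)[:top_k]]  # best segment indices
```

The selected indices would then be handed to the video-LLM as its working context in place of the full video.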
- From Seconds to Hours: Reviewing MultiModal Large Language Models on Comprehensive Long Video Understanding [52.696422425058245]
The integration of Large Language Models (LLMs) with visual encoders has recently shown promising performance in visual understanding tasks. Our paper focuses on the substantial differences and unique challenges posed by long video understanding compared to static image and short video understanding.
arXiv Detail & Related papers (2024-09-27T17:38:36Z)
- VideoAgent: Long-form Video Understanding with Large Language Model as Agent [26.903040507914053]
We introduce a novel agent-based system, VideoAgent, that employs a large language model as a central agent to identify and compile the crucial information needed to answer a question. We demonstrate the superior effectiveness and efficiency of our method over current state-of-the-art methods. A hedged sketch of this iterative evidence-gathering loop appears after this entry.
arXiv Detail & Related papers (2024-03-15T17:57:52Z)
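One plausible reading of the VideoAgent summary is an iterative loop in which the LLM answers from the frames seen so far, self-rates its confidence, and requests more frames until confident. The callables and the 1-3 confidence convention below are assumptions for illustration, not the paper's exact procedure.

```python
# Hedged sketch of an LLM-as-central-agent loop in the spirit of VideoAgent's
# summary. `retrieve_frames`, `caption`, and `llm` are assumed callables.
def video_agent(question: str, retrieve_frames, caption, llm,
                max_rounds: int = 3) -> str:
    captions: list[str] = []
    answer = ""
    for round_no in range(max_rounds):
        # Widen the evidence each round with frames retrieved for the question.
        frames = retrieve_frames(question, k=2 ** round_no)
        captions.extend(caption(f) for f in frames)
        reply = llm(
            f"Question: {question}\nFrame captions: {captions}\n"
            "Answer the question, then on a new line rate your confidence "
            "from 1 (guess) to 3 (certain) as 'CONFIDENCE: <n>'."
        )
        answer, _, rating = reply.partition("CONFIDENCE:")
        if rating.strip().startswith("3"):   # stop as soon as it is certain
            return answer.strip()
    return answer.strip()                    # best effort after max_rounds
```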