Beyond Caption-Based Queries for Video Moment Retrieval
- URL: http://arxiv.org/abs/2603.02363v1
- Date: Mon, 02 Mar 2026 20:06:41 GMT
- Title: Beyond Caption-Based Queries for Video Moment Retrieval
- Authors: David Pujol-Perich, Albert Clapés, Dima Damen, Sergio Escalera, Michael Wray
- Abstract summary: We investigate the degradation of VMR methods when trained on caption-based queries but evaluated on search queries. We introduce three benchmarks by modifying the textual queries in three public VMR datasets. Our approach improves performance on search queries by up to 14.82% mAP_m, and up to 21.83% mAP_m on multi-moment search queries.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we investigate the degradation of existing Video Moment Retrieval (VMR) methods, particularly DETR architectures, when trained on caption-based queries but evaluated on search queries. For this, we introduce three benchmarks by modifying the textual queries in three public VMR datasets -- i.e., HD-EPIC, YouCook2 and ActivityNet-Captions. Our analysis reveals two key generalization challenges: (i) a language gap, arising from the linguistic under-specification of search queries, and (ii) a multi-moment gap, caused by the shift from single-moment to multi-moment queries. We also identify a critical issue in these architectures -- an active decoder-query collapse -- as a primary cause of the poor generalization to multi-moment instances. We mitigate this issue with architectural modifications that effectively increase the number of active decoder queries. Extensive experiments demonstrate that our approach improves performance on search queries by up to 14.82% mAP_m, and up to 21.83% mAP_m on multi-moment search queries. The code, models and data are available on the project webpage: https://davidpujol.github.io/beyond-vmr/
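The mAP_m figures above are mean-average-precision scores over temporally localized moments. As a rough illustration of what such a metric measures, the sketch below computes temporal IoU between predicted and ground-truth segments and a single-threshold average precision for one query; the function names and the single IoU threshold are illustrative assumptions, not the paper's actual evaluation code.

```python
def temporal_iou(pred, gt):
    """IoU of two [start, end] temporal intervals (e.g., in seconds)."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0


def average_precision(preds, gts, iou_thr=0.5):
    """AP for one query at one IoU threshold.

    preds: list of (confidence, [start, end]) predictions.
    gts:   list of [start, end] ground-truth moments (a multi-moment
           query simply has several entries here).
    Each ground-truth moment can be matched by at most one prediction.
    """
    preds = sorted(preds, key=lambda p: p[0], reverse=True)
    matched = [False] * len(gts)
    tp, precisions = 0, []
    for rank, (_, seg) in enumerate(preds, start=1):
        # Greedily match the highest-ranked prediction to any unmatched GT.
        hit = next((i for i, g in enumerate(gts)
                    if not matched[i] and temporal_iou(seg, g) >= iou_thr),
                   None)
        if hit is not None:
            matched[hit] = True
            tp += 1
            precisions.append(tp / rank)
    return sum(precisions) / len(gts) if gts else 0.0
```

A benchmark-level mAP would average such per-query APs (typically over several IoU thresholds); the decoder-query collapse the paper describes would show up here as few distinct predicted segments for multi-moment queries, leaving ground-truth moments unmatched.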
Related papers
- Resolving Evidence Sparsity: Agentic Context Engineering for Long-Document Understanding [49.26132236798123]
Vision Language Models (VLMs) have gradually become a primary approach in document understanding. We propose SLEUTH, a multi-agent framework that orchestrates a retriever and four collaborative agents in a coarse-to-fine process. The framework identifies key textual and visual clues within the retrieved pages, filters for salient visual evidence such as tables and charts, and analyzes the query to devise a reasoning strategy.
arXiv Detail & Related papers (2025-11-28T03:09:40Z) - Reasoning-enhanced Query Understanding through Decomposition and Interpretation [87.56450566014625]
ReDI is a Reasoning-enhanced approach for query understanding through Decomposition and Interpretation. We compiled a large-scale dataset of real-world complex queries from a major search engine. Experiments on BRIGHT and BEIR demonstrate that ReDI consistently surpasses strong baselines in both sparse and dense retrieval paradigms.
arXiv Detail & Related papers (2025-09-08T10:58:42Z) - Dual-Stream Attention with Multi-Modal Queries for Object Detection in Transportation Applications [6.603505460200282]
Transformer-based object detectors often struggle with occlusions, fine-grained localization, and computational inefficiency caused by fixed queries and dense attention. We propose DAMM, Dual-stream Attention with Multi-Modal queries, a novel framework introducing both query adaptation and structured cross-attention for improved accuracy and efficiency.
arXiv Detail & Related papers (2025-08-06T20:37:24Z) - Benchmarking Multimodal Retrieval Augmented Generation with Dynamic VQA Dataset and Self-adaptive Planning Agent [92.5712549836791]
Multimodal Retrieval Augmented Generation (mRAG) plays an important role in mitigating the "hallucination" issue inherent in multimodal large language models (MLLMs). We propose the first self-adaptive planning agent for multimodal retrieval, OmniSearch.
arXiv Detail & Related papers (2024-11-05T09:27:21Z) - An Evaluation Framework for Attributed Information Retrieval using Large Language Models [5.216296688442701]
We propose a framework to evaluate and benchmark attributed information seeking.
Experiments using HAGRID, an attributed information-seeking dataset, show the impact of different scenarios on the correctness and attributability of answers.
arXiv Detail & Related papers (2024-09-12T12:57:08Z) - Database-Augmented Query Representation for Information Retrieval [71.41745087624528]
We present a novel retrieval framework called Database-Augmented Query representation (DAQu). DAQu augments the original query with various (query-related) metadata across multiple tables. We validate DAQu in diverse retrieval scenarios, demonstrating that it significantly enhances overall retrieval performance.
arXiv Detail & Related papers (2024-06-23T05:02:21Z) - Query Resolution for Conversational Search with Limited Supervision [63.131221660019776]
We propose QuReTeC (Query Resolution by Term Classification), a neural query resolution model based on bidirectional transformers.
We show that QuReTeC outperforms state-of-the-art models, and furthermore, that our distant supervision method can be used to substantially reduce the amount of human-curated data required to train QuReTeC.
arXiv Detail & Related papers (2020-05-24T11:37:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.