Glance and Focus: Memory Prompting for Multi-Event Video Question
Answering
- URL: http://arxiv.org/abs/2401.01529v1
- Date: Wed, 3 Jan 2024 03:51:16 GMT
- Title: Glance and Focus: Memory Prompting for Multi-Event Video Question
Answering
- Authors: Ziyi Bai, Ruiping Wang, Xilin Chen
- Abstract summary: VideoQA has emerged as a vital tool to evaluate agents' ability to understand human daily behaviors.
Humans can easily tackle it by using a series of episode memories as anchors to quickly locate question-related key moments for reasoning.
We propose the Glance-Focus model to mimic this effective reasoning strategy.
- Score: 36.00733800536469
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Video Question Answering (VideoQA) has emerged as a vital tool to evaluate
agents' ability to understand human daily behaviors. Despite the recent success
of large vision language models in many multi-modal tasks, complex situation
reasoning over videos involving multiple human-object interaction events still
remains challenging. In contrast, humans can easily tackle it by using a series
of episode memories as anchors to quickly locate question-related key moments
for reasoning. To mimic this effective reasoning strategy, we propose the
Glance-Focus model. One simple way is to apply an action detection model to
predict a set of actions as key memories. However, such actions drawn from a
closed-set vocabulary generalize poorly to diverse video domains. Instead, we
train an Encoder-Decoder to generate a set of dynamic event
memories at the glancing stage. Apart from using supervised bipartite matching
to obtain the event memories, we further design an unsupervised memory
generation method to get rid of dependence on event annotations. Next, at the
focusing stage, these event memories act as a bridge that links questions
involving high-level event concepts to the low-level, lengthy video content.
Given the question, the model first focuses on the
generated key event memory, then focuses on the most relevant moment for
reasoning through our designed multi-level cross-attention mechanism. We
conduct extensive experiments on four Multi-Event VideoQA benchmarks including
STAR, EgoTaskQA, AGQA, and NExT-QA. Our proposed model achieves
state-of-the-art results, surpassing current large models in various
challenging reasoning tasks. The code and models are available at
https://github.com/ByZ0e/Glance-Focus.
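To give a rough sense of the two-stage mechanism, the sketch below expresses it in PyTorch. This is a minimal illustration under assumptions, not the authors' implementation (see the repository linked above): the module names, feature dimensions, number of memory queries, and the final pooling are invented here for readability. The glancing stage decodes a small set of learned queries against the video features to produce dynamic event memories; the focusing stage applies multi-level cross-attention, letting the question attend first to the event memories and then to the full video sequence.
```python
# Minimal sketch of the glance-then-focus idea described in the abstract.
# All module choices and dimensions below are assumptions for illustration;
# the authors' code is at https://github.com/ByZ0e/Glance-Focus.
import torch
import torch.nn as nn


class GlanceFocusSketch(nn.Module):
    def __init__(self, d_model: int = 512, num_memories: int = 10, num_heads: int = 8):
        super().__init__()
        # Glancing stage: learned queries decoded against video features yield
        # a small set of dynamic event memories (DETR-style query decoding).
        self.memory_queries = nn.Parameter(torch.randn(num_memories, d_model))
        layer = nn.TransformerDecoderLayer(d_model, num_heads, batch_first=True)
        self.glance_decoder = nn.TransformerDecoder(layer, num_layers=2)
        # Focusing stage: multi-level cross-attention, first question -> event
        # memories, then the memory-conditioned question -> full video sequence.
        self.q_to_memory = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.q_to_video = nn.MultiheadAttention(d_model, num_heads, batch_first=True)

    def forward(self, video_feats, question_feats):
        # video_feats:    (B, T, d) frame/clip features from a visual backbone
        # question_feats: (B, L, d) token features of the question
        B = video_feats.size(0)

        # Glance: summarize the whole video into a handful of event memories.
        queries = self.memory_queries.unsqueeze(0).expand(B, -1, -1)
        event_memories = self.glance_decoder(tgt=queries, memory=video_feats)  # (B, M, d)

        # Focus, level 1: the question attends to high-level event memories.
        q_mem, _ = self.q_to_memory(question_feats, event_memories, event_memories)

        # Focus, level 2: the memory-aware question attends to the low-level,
        # lengthy video content to pick out the relevant moment.
        q_vid, moment_attn = self.q_to_video(q_mem, video_feats, video_feats)

        # Pool into a single representation; an answer head (not shown) would
        # score candidate answers from it.
        return q_vid.mean(dim=1), event_memories, moment_attn


if __name__ == "__main__":
    model = GlanceFocusSketch()
    video = torch.randn(2, 64, 512)     # 64 clips per video
    question = torch.randn(2, 12, 512)  # 12 question tokens
    answer_repr, memories, attn = model(video, question)
    print(answer_repr.shape, memories.shape, attn.shape)
```
During training, the abstract pairs the generated memories with annotated events via supervised bipartite matching, or uses an unsupervised memory-generation objective when event labels are unavailable; neither loss is reproduced in this sketch.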
Related papers
- Grounding Partially-Defined Events in Multimodal Data [61.0063273919745]
We introduce a multimodal formulation for partially-defined events and cast the extraction of these events as a three-stage span retrieval task.
We propose a benchmark for this task, MultiVENT-G, that consists of 14.5 hours of densely annotated current event videos and 1,168 text documents, containing 22.8K labeled event-centric entities.
Results illustrate the challenges that abstract event understanding poses and demonstrate promise in event-centric video-language systems.
arXiv Detail & Related papers (2024-10-07T17:59:48Z)
- Generating Event-oriented Attribution for Movies via Two-Stage Prefix-Enhanced Multimodal LLM [47.786978666537436]
We propose a Two-Stage Prefix-Enhanced MLLM (TSPE) approach for event attribution in movie videos.
In the local stage, we introduce an interaction-aware prefix that guides the model to focus on the relevant multimodal information within a single clip.
In the global stage, we strengthen the connections between associated events using an inferential knowledge graph.
arXiv Detail & Related papers (2024-09-14T08:30:59Z)
- Top-down Activity Representation Learning for Video Question Answering [4.236280446793381]
Capturing complex hierarchical human activities is crucial for achieving high-performance video question answering (VideoQA).
We convert long-term video sequences into a spatial image domain and finetune the multimodal model LLaVA for the VideoQA task.
Our approach achieves competitive performance on the STAR task and, in particular, a 78.4% accuracy score on the NExT-QA task, exceeding the current state-of-the-art by 2.8 points.
arXiv Detail & Related papers (2024-09-12T04:43:27Z)
- Multi-object event graph representation learning for Video Question Answering [4.236280446793381]
We propose a contrastive language event graph representation learning method called CLanG to address this limitation.
Our method outperforms a strong baseline, achieving up to 2.2% higher accuracy on two challenging VideoQA datasets, NExT-QA and TGIF-QA-R.
arXiv Detail & Related papers (2024-09-12T04:42:51Z)
- Enhancing Long Video Understanding via Hierarchical Event-Based Memory [9.800516656566774]
We propose a Hierarchical Event-based Memory-enhanced LLM (HEM-LLM) for better understanding of long videos.
Firstly, we design a novel adaptive sequence segmentation scheme to divide long videos into multiple events.
Secondly, while modeling the current event, we compress and inject information from the previous event to enhance long-term inter-event dependencies in videos.
arXiv Detail & Related papers (2024-09-10T07:53:10Z)
- MIST: Multi-modal Iterative Spatial-Temporal Transformer for Long-form Video Question Answering [73.61182342844639]
We introduce a new model named Multi-modal Iterative Spatial-temporal Transformer (MIST) to better adapt pre-trained models for long-form VideoQA.
MIST decomposes traditional dense spatial-temporal self-attention into cascaded segment and region selection modules.
Visual concepts at different granularities are then processed efficiently through an attention module.
arXiv Detail & Related papers (2022-12-19T15:05:40Z)
- Bridge-Prompt: Towards Ordinal Action Understanding in Instructional Videos [92.18898962396042]
We propose a prompt-based framework, Bridge-Prompt, to model the semantics across adjacent actions.
We reformulate the individual action labels as integrated text prompts for supervision, which bridge the gap between individual action semantics.
Br-Prompt achieves state-of-the-art results on multiple benchmarks.
arXiv Detail & Related papers (2022-03-26T15:52:27Z)
- Dense-Caption Matching and Frame-Selection Gating for Temporal Localization in VideoQA [96.10612095576333]
We propose a video question answering model which effectively integrates multi-modal input sources and finds the temporally relevant information to answer questions.
Our model also comprises dual-level attention (word/object and frame level), multi-head self-/cross-integration for different sources (video and dense captions), and gates that pass the more relevant information through.
We evaluate our model on the challenging TVQA dataset, where each of our model components provides significant gains, and our overall model outperforms the state-of-the-art by a large margin.
arXiv Detail & Related papers (2020-05-13T16:35:27Z)