MoVQA: A Benchmark of Versatile Question-Answering for Long-Form Movie
Understanding
- URL: http://arxiv.org/abs/2312.04817v1
- Date: Fri, 8 Dec 2023 03:33:38 GMT
- Title: MoVQA: A Benchmark of Versatile Question-Answering for Long-Form Movie
Understanding
- Authors: Hongjie Zhang, Yi Liu, Lu Dong, Yifei Huang, Zhen-Hua Ling, Yali Wang,
Limin Wang, Yu Qiao
- Abstract summary: We introduce MoVQA, a long-form movie question-answering dataset,
along with a benchmark to assess the diverse cognitive capabilities of multimodal systems.
- Score: 69.04413943858584
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While several long-form VideoQA datasets have been introduced, the lengths
of both the videos used to curate questions and the sub-clips of clues leveraged to
answer those questions have not yet met the criteria for genuine long-form video
understanding. Moreover, their QAs are unduly narrow and modality-biased, lacking a
wider view of long-term video content with rich dynamics and complex narratives. To
remedy this, we introduce MoVQA, a long-form movie question-answering dataset and
benchmark that assesses the diverse cognitive capabilities of multimodal systems
across multi-level temporal lengths, considering both video length and clue length.
Additionally, to take a step towards human-level understanding of long-form video,
versatile and multimodal question-answering is designed from the moviegoer's
perspective to assess model capabilities along various perceptual and cognitive axes.
Our analysis of various baselines reveals a consistent trend: the performance of all
methods deteriorates significantly as video and clue length increase. Meanwhile, our
established baseline method shows some improvements, but there is still ample scope
for enhancement on our challenging MoVQA dataset. We expect MoVQA to provide a new
perspective and inspire further research on long-form video understanding.
Related papers
- LongVideoBench: A Benchmark for Long-context Interleaved Video-Language Understanding [41.9477837230283]
LongVideoBench is a question-answering benchmark that features video-language interleaved inputs up to an hour long.
Our benchmark includes 3,763 varying-length web-collected videos with their subtitles across diverse themes.
We formulate a novel video question-answering task termed referring reasoning.
arXiv Detail & Related papers (2024-07-22T16:00:55Z)
- MMBench-Video: A Long-Form Multi-Shot Benchmark for Holistic Video Understanding [67.56182262082729]
We introduce MMBench-Video, a quantitative benchmark to rigorously evaluate large vision-language models (LVLMs) in video understanding.
MMBench-Video incorporates lengthy videos from YouTube and employs free-form questions, mirroring practical use cases.
The benchmark is meticulously crafted to probe the models' temporal reasoning skills, with all questions human-annotated according to a carefully constructed ability taxonomy.
arXiv Detail & Related papers (2024-06-20T17:26:01Z)
- LVBench: An Extreme Long Video Understanding Benchmark [37.22510741049044]
We introduce LVBench, a benchmark specifically designed for long video understanding.
Our dataset comprises publicly sourced videos and encompasses a diverse set of tasks aimed at long video comprehension and information extraction.
arXiv Detail & Related papers (2024-06-12T09:36:52Z)
- CinePile: A Long Video Question Answering Dataset and Benchmark [58.08209212057164]
Current datasets for long-form video understanding often fall short of providing genuine long-form comprehension challenges.
We present a novel dataset and benchmark, CinePile, specifically designed for authentic long-form video understanding.
arXiv Detail & Related papers (2024-05-14T17:59:02Z)
- MVBench: A Comprehensive Multi-modal Video Understanding Benchmark [63.14000659130736]
We introduce a comprehensive Multi-modal Video understanding Benchmark, namely MVBench.
We first introduce a novel static-to-dynamic method to define these temporal-related tasks.
Then, guided by the task definition, we automatically convert public video annotations into multiple-choice QA to evaluate each task.
arXiv Detail & Related papers (2023-11-28T17:59:04Z)
- Query-aware Long Video Localization and Relation Discrimination for Deep Video Understanding [15.697251303126874]
Deep Video Understanding (DVU) Challenge aims to push the boundaries of multimodal extraction, fusion, and analytics.
This paper introduces a query-aware method for long video localization and relation discrimination, leveraging an image-language pretrained model.
Our approach achieved first and fourth positions for two groups of movie-level queries.
arXiv Detail & Related papers (2023-10-19T13:26:02Z)
- MIST: Multi-modal Iterative Spatial-Temporal Transformer for Long-form Video Question Answering [73.61182342844639]
We introduce a new model named Multi-modal Iterative Spatial-temporal Transformer (MIST) to better adapt pre-trained models for long-form VideoQA.
MIST decomposes traditional dense spatial-temporal self-attention into cascaded segment and region selection modules.
Visual concepts at different granularities are then processed efficiently through an attention module.
arXiv Detail & Related papers (2022-12-19T15:05:40Z)
- Video Question Answering: Datasets, Algorithms and Challenges [99.9179674610955]
Video Question Answering (VideoQA) aims to answer natural language questions according to the given videos.
This paper provides a clear taxonomy of and comprehensive analyses for VideoQA, focusing on the datasets, algorithms, and unique challenges.
arXiv Detail & Related papers (2022-03-02T16:34:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.