Shot2Story20K: A New Benchmark for Comprehensive Understanding of
Multi-shot Videos
- URL: http://arxiv.org/abs/2312.10300v2
- Date: Tue, 19 Dec 2023 02:04:18 GMT
- Title: Shot2Story20K: A New Benchmark for Comprehensive Understanding of
Multi-shot Videos
- Authors: Mingfei Han, Linjie Yang, Xiaojun Chang, Heng Wang
- Abstract summary: We present a new multi-shot video understanding benchmark Shot2Story20K with detailed shot-level captions and comprehensive video summaries.
Preliminary experiments show that generating a long and comprehensive video summary remains challenging.
- Score: 58.13927287437394
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A short video clip may contain a progression of multiple events and an
interesting storyline. A human viewer needs to capture the event in every shot
and associate the shots to understand the story behind the video. In this work,
we present Shot2Story20K, a new multi-shot video understanding benchmark with
detailed shot-level captions and comprehensive video summaries. To facilitate
better semantic understanding of videos, we provide captions for both visual
signals and human narrations. We design several distinct tasks, including
single-shot video and narration captioning, multi-shot video summarization, and
video retrieval with shot descriptions. Preliminary experiments show that
generating a long and comprehensive video summary remains challenging.
Nevertheless, the imperfect generated summaries can already significantly boost
the performance of existing video understanding tasks such as video
question-answering, promoting an under-explored setting of video understanding
with detailed summaries.
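To make the annotation structure concrete, the minimal sketch below models a multi-shot video the way the abstract describes it: per-shot visual captions, per-shot narration captions, and a video-level summary. The field names and the naive concatenation baseline are illustrative assumptions, not the dataset's actual schema.

```python
# Illustrative sketch only: field names are hypothetical and do not reflect
# Shot2Story20K's actual release format. It shows the kind of annotation the
# abstract describes: shot-level visual and narration captions plus a
# video-level summary.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Shot:
    start_sec: float                 # shot boundary start (seconds)
    end_sec: float                   # shot boundary end (seconds)
    visual_caption: str              # what is visible in this shot
    narration_caption: str = ""      # what the narrator says, if anything


@dataclass
class MultiShotVideo:
    video_id: str
    shots: List[Shot] = field(default_factory=list)
    summary: str = ""                # video-level summary tying the shots together

    def naive_summary(self) -> str:
        """Concatenate shot captions as a trivial baseline summary."""
        return " ".join(s.visual_caption for s in self.shots)


video = MultiShotVideo(
    video_id="demo_0001",
    shots=[
        Shot(0.0, 3.2, "A chef lays out vegetables on a cutting board.",
             "The host introduces today's recipe."),
        Shot(3.2, 7.8, "Close-up of the vegetables being diced."),
    ],
    summary="A cooking video: the host introduces a recipe and dices vegetables.",
)
print(video.naive_summary())
```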
Related papers
- Storyboard guided Alignment for Fine-grained Video Action Recognition [32.02631248389487]
Fine-grained video action recognition can be conceptualized as a video-text matching problem.
We propose a multi-granularity framework based on two observations: (i) videos with different global semantics may share similar atomic actions or appearances, and (ii) atomic actions within a video can be momentary, slow, or even not directly related to the global video semantics.
arXiv Detail & Related papers (2024-10-18T07:40:41Z)
- AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark [73.62572976072578]
We propose AuroraCap, a video captioner based on a large multimodal model.
We implement a token merging strategy, reducing the number of input visual tokens.
AuroraCap shows superior performance on various video and image captioning benchmarks.
arXiv Detail & Related papers (2024-10-04T00:13:54Z)
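As a rough illustration of the token merging idea mentioned for AuroraCap above, similar visual tokens can be repeatedly averaged until the sequence reaches a target length. The sketch below is a simplified stand-in, not the authors' implementation.

```python
# Simplified illustration of token merging (not AuroraCap's actual code):
# repeatedly average the most cosine-similar pair of visual tokens until the
# sequence reaches a target length, reducing the tokens fed to the language model.
import torch
import torch.nn.functional as F


def merge_tokens(tokens: torch.Tensor, target_len: int) -> torch.Tensor:
    """tokens: (N, D) visual tokens; returns (target_len, D)."""
    tokens = tokens.clone()
    while tokens.shape[0] > target_len:
        t = F.normalize(tokens, dim=-1)
        sim = t @ t.T                              # pairwise cosine similarity
        sim.fill_diagonal_(-float("inf"))          # ignore self-similarity
        i, j = divmod(int(sim.argmax()), sim.shape[1])
        merged = (tokens[i] + tokens[j]) / 2       # average the most similar pair
        keep = [k for k in range(tokens.shape[0]) if k not in (i, j)]
        tokens = torch.cat([tokens[keep], merged.unsqueeze(0)], dim=0)
    return tokens


visual_tokens = torch.randn(256, 768)              # e.g. ViT patch tokens
print(merge_tokens(visual_tokens, 64).shape)       # torch.Size([64, 768])
```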
- HierVL: Learning Hierarchical Video-Language Embeddings [108.77600799637172]
HierVL is a novel hierarchical video-language embedding that simultaneously accounts for both long-term and short-term associations.
We introduce a hierarchical contrastive training objective that encourages text-visual alignment at both the clip level and video level.
Our hierarchical scheme yields a clip representation that outperforms its single-level counterpart as well as a long-term video representation that achieves state-of-the-art results.
arXiv Detail & Related papers (2023-01-05T21:53:19Z)
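The hierarchical contrastive objective described for HierVL above can be sketched as two InfoNCE terms: one aligning clip features with clip-level narrations and one aligning a pooled video feature with a video-level summary. This is a schematic reading of the abstract, not the paper's exact formulation.

```python
# Schematic two-level contrastive objective in the spirit of HierVL (not the
# paper's exact loss): InfoNCE at the clip level plus InfoNCE at the video level.
import torch
import torch.nn.functional as F


def info_nce(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE over a batch of paired embeddings a[i] <-> b[i]."""
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.T / temperature
    targets = torch.arange(a.shape[0])
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2


B, C, D = 4, 8, 512                       # 4 videos, 8 clips each, 512-d features
clip_feats = torch.randn(B, C, D)         # clip-level visual embeddings
clip_texts = torch.randn(B, C, D)         # clip-level narration embeddings
video_texts = torch.randn(B, D)           # video-level summary embeddings

clip_loss = info_nce(clip_feats.reshape(B * C, D), clip_texts.reshape(B * C, D))
video_loss = info_nce(clip_feats.mean(dim=1), video_texts)   # pool clips into a video feature
loss = clip_loss + video_loss
print(loss.item())
```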
- TL;DW? Summarizing Instructional Videos with Task Relevance & Cross-Modal Saliency [133.75876535332003]
We focus on summarizing instructional videos, an under-explored area of video summarization.
Existing video summarization datasets rely on manual frame-level annotations.
We propose an instructional video summarization network that combines a context-aware temporal video encoder and a segment scoring transformer.
arXiv Detail & Related papers (2022-08-14T04:07:40Z)
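A segment scoring head of the kind mentioned for the instructional video summarizer above can be sketched as a small transformer encoder followed by a linear scorer, with the top-scoring segments forming the summary. This is an illustrative sketch, not the authors' architecture.

```python
# Rough sketch of segment scoring for summarization (not the TL;DW authors'
# architecture): a transformer encoder contextualizes per-segment features,
# a linear head scores each segment, and the top-k segments form the summary.
import torch
import torch.nn as nn


class SegmentScorer(nn.Module):
    def __init__(self, dim: int = 512, heads: int = 8, layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
        self.head = nn.Linear(dim, 1)

    def forward(self, segments: torch.Tensor) -> torch.Tensor:
        """segments: (B, S, D) per-segment features -> (B, S) importance scores."""
        return self.head(self.encoder(segments)).squeeze(-1)


scorer = SegmentScorer()
feats = torch.randn(2, 20, 512)                  # 2 videos, 20 segments each
scores = scorer(feats)
summary_idx = scores.topk(k=3, dim=1).indices    # indices of the 3 highest-scoring segments
print(summary_idx)
```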
- Towards Diverse Paragraph Captioning for Untrimmed Videos [40.205433926432434]
Existing approaches mainly solve the problem in two steps: event detection and then event captioning.
We propose a paragraph captioning model which eschews the problematic event detection stage and directly generates paragraphs for untrimmed videos.
arXiv Detail & Related papers (2021-05-30T09:28:43Z)
- Spoken Moments: Learning Joint Audio-Visual Representations from Video Descriptions [75.77044856100349]
We present the Spoken Moments dataset of 500k spoken captions each attributed to a unique short video depicting a broad range of different events.
We show that our AMM approach consistently improves our results and that models trained on our Spoken Moments dataset generalize better than those trained on other video-caption datasets.
arXiv Detail & Related papers (2021-05-10T16:30:46Z)
- Text Synopsis Generation for Egocentric Videos [72.52130695707008]
We propose to generate a textual synopsis, consisting of a few sentences describing the most important events in a long egocentric video.
Users can read the short text to gain insight about the video, and more importantly, efficiently search through the content of a large video database.
arXiv Detail & Related papers (2020-05-08T00:28:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.