StoryBench: A Multifaceted Benchmark for Continuous Story Visualization
- URL: http://arxiv.org/abs/2308.11606v2
- Date: Thu, 12 Oct 2023 17:50:38 GMT
- Title: StoryBench: A Multifaceted Benchmark for Continuous Story Visualization
- Authors: Emanuele Bugliarello, Hernan Moraldo, Ruben Villegas, Mohammad
Babaeizadeh, Mohammad Taghi Saffar, Han Zhang, Dumitru Erhan, Vittorio
Ferrari, Pieter-Jan Kindermans, Paul Voigtlaender
- Abstract summary: We introduce StoryBench: a new, challenging multi-task benchmark to reliably evaluate text-to-video models.
Our benchmark includes three video generation tasks of increasing difficulty: action execution, story continuation, and story generation.
We evaluate small yet strong text-to-video baselines, and show the benefits of training on story-like data algorithmically generated from existing video captions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generating video stories from text prompts is a complex task. In addition to
having high visual quality, videos need to realistically adhere to a sequence
of text prompts whilst being consistent throughout the frames. Creating a
benchmark for video generation requires data annotated over time, which
contrasts with the single caption often used in video datasets. To fill this
gap, we collect comprehensive human annotations on three existing datasets, and
introduce StoryBench: a new, challenging multi-task benchmark to reliably
evaluate forthcoming text-to-video models. Our benchmark includes three video
generation tasks of increasing difficulty: action execution, where the next
action must be generated starting from a conditioning video; story
continuation, where a sequence of actions must be executed starting from a
conditioning video; and story generation, where a video must be generated from
only text prompts. We evaluate small yet strong text-to-video baselines, and
show the benefits of training on story-like data algorithmically generated from
existing video captions. Finally, we establish guidelines for human evaluation
of video stories, and reaffirm the need for better automatic metrics for video
generation. StoryBench aims to encourage future research efforts in this
exciting new area.
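To make the three task setups concrete, the following is a minimal Python sketch of what each task conditions on and what it asks a model to generate. All names and data here are hypothetical illustrations of the abstract's description, not StoryBench's actual interface.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class StoryTask:
    """One StoryBench-style task instance (hypothetical schema)."""
    name: str
    prompts: List[str]                 # text prompts to visualize, in order
    conditioning_video: Optional[str]  # path to a conditioning clip, or None

def required_inputs(task: StoryTask) -> str:
    """Summarize what a text-to-video model receives for this task."""
    video = "a conditioning video" if task.conditioning_video else "no video"
    return f"{task.name}: {len(task.prompts)} prompt(s), {video}"

# The three tasks from the abstract, in increasing order of difficulty:
# action execution (one action, conditioned on video), story continuation
# (a sequence of actions, conditioned on video), and story generation
# (text prompts only).
examples = [
    StoryTask("action_execution", ["the dog fetches the ball"], "clip_001.mp4"),
    StoryTask("story_continuation",
              ["the dog fetches the ball", "the dog returns it", "the owner waves"],
              "clip_001.mp4"),
    StoryTask("story_generation",
              ["a dog runs into a park", "it fetches a ball", "it naps in the sun"],
              None),
]

for task in examples:
    print(required_inputs(task))
```

The ordering matches the abstract's notion of increasing difficulty: each step removes conditioning signal or lengthens the prompt sequence, so the model must maintain temporal consistency with less visual grounding.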
Related papers
- ScreenWriter: Automatic Screenplay Generation and Movie Summarisation [55.20132267309382]
Video content has driven demand for textual descriptions or summaries that allow users to recall key plot points or get an overview without watching.
We propose the task of automatic screenplay generation, and a method, ScreenWriter, that operates only on video and produces output which includes dialogue, speaker names, scene breaks, and visual descriptions.
ScreenWriter introduces a novel algorithm to segment the video into scenes based on the sequence of visual vectors (see the sketch after this list), and a novel method for the challenging problem of determining character names, based on a database of actors' faces.
arXiv Detail & Related papers (2024-10-17T07:59:54Z)
- In-Style: Bridging Text and Uncurated Videos with Style Transfer for Text-Video Retrieval [72.98185525653504]
We propose a new setting, text-video retrieval with uncurated & unpaired data, which uses only text queries together with uncurated web videos during training.
To improve generalization, we show that one model can be trained with multiple text styles.
We evaluate our model on retrieval performance over multiple datasets to demonstrate the advantages of our style transfer framework.
arXiv Detail & Related papers (2023-09-16T08:48:21Z)
- Hierarchical Video-Moment Retrieval and Step-Captioning [68.4859260853096]
HiREST consists of 3.4K text-video pairs from an instructional video dataset.
Our hierarchical benchmark consists of video retrieval, moment retrieval, and two novel tasks: moment segmentation and step captioning.
arXiv Detail & Related papers (2023-03-29T02:33:54Z)
- VideoXum: Cross-modal Visual and Textural Summarization of Videos [54.0985975755278]
We propose a new joint video and text summarization task.
The goal is to generate both a shortened video clip and the corresponding textual summary from a long video.
The generated shortened video clip and text narratives should be semantically well aligned.
arXiv Detail & Related papers (2023-03-21T17:51:23Z)
- Text Synopsis Generation for Egocentric Videos [72.52130695707008]
We propose to generate a textual synopsis, consisting of a few sentences describing the most important events in a long egocentric video.
Users can read the short text to gain insight into the video and, more importantly, to efficiently search through the content of a large video database.
arXiv Detail & Related papers (2020-05-08T00:28:00Z)
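As a concrete reading of the scene-segmentation idea in the ScreenWriter entry above: the abstract only says that scenes are segmented based on the sequence of visual vectors, so the sketch below shows one plausible realization (a boundary wherever the cosine similarity between consecutive frame embeddings drops below a threshold), not the paper's actual algorithm.

```python
import numpy as np

def segment_scenes(frame_features: np.ndarray, threshold: float = 0.5) -> list:
    """Return frame indices where a new scene starts (hypothetical sketch).

    frame_features: (num_frames, dim) array of per-frame visual embeddings.
    A boundary is placed wherever the cosine similarity between consecutive
    frame embeddings falls below `threshold`.
    """
    normed = frame_features / np.linalg.norm(frame_features, axis=1, keepdims=True)
    sims = np.sum(normed[:-1] * normed[1:], axis=1)  # cosine sim of adjacent frames
    return [0] + [i + 1 for i, s in enumerate(sims) if s < threshold]

# Example: six frames with an abrupt visual change after the third frame.
feats = np.array([[1.0, 0.0], [0.98, 0.2], [0.95, 0.3],
                  [0.0, 1.0], [0.1, 0.99], [0.2, 0.98]])
print(segment_scenes(feats))  # -> [0, 3]
```

A real pipeline would likely use learned shot-level features and a tuned or adaptive threshold, but the boundary-on-similarity-drop structure is the core of segmenting on a sequence of visual vectors.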