Screenplay Quality Assessment: Can We Predict Who Gets Nominated?
- URL: http://arxiv.org/abs/2005.06123v1
- Date: Wed, 13 May 2020 02:39:56 GMT
- Title: Screenplay Quality Assessment: Can We Predict Who Gets Nominated?
- Authors: Ming-Chang Chiu, Tiantian Feng, Xiang Ren, Shrikanth Narayanan
- Abstract summary: We present a method to evaluate the quality of a screenplay based on linguistic cues.
Based on industry opinions and narratology, we extract and integrate domain-specific features into common classification techniques.
- Score: 53.9153892362629
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deciding which scripts to turn into movies is a costly and time-consuming
process for filmmakers. Thus, building a tool to aid script selection, an
initial phase in movie production, can be very beneficial. Toward that goal, in
this work, we present a method to evaluate the quality of a screenplay based on
linguistic cues. We address this in a two-fold approach: (1) we define the task
as predicting nominations of scripts at major film awards with the hypothesis
that peer-recognized scripts should have a greater chance of success; (2)
based on industry opinions and narratology, we extract and integrate
domain-specific features into common classification techniques. We face two
challenges: (1) scripts are much longer than documents in typical datasets, and (2)
nominated scripts are limited in number and thus difficult to collect. However, with
narratology-inspired modeling and domain features, our approach offers clear
improvements over strong baselines. Our work provides a new approach for future
work in screenplay analysis.
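The paper's two-step shape (extract domain-specific, narratology-inspired features from a script, then feed them to a common classifier) can be sketched as below. This is a minimal illustrative sketch: the feature names (`n_scenes`, `dialogue_cue_ratio`, `log_length`) and the hand-set logistic weights are assumptions for demonstration, not the authors' actual feature set or trained model.

```python
import math
import re

def screenplay_features(script: str) -> dict:
    """Extract a few simple screenplay features.

    The feature names are illustrative stand-ins for the paper's
    narratology-inspired, domain-specific features.
    """
    lines = [ln for ln in script.splitlines() if ln.strip()]
    # Scene headings in standard screenplay format, e.g. "INT. HOUSE - NIGHT".
    scene_headings = [ln for ln in lines if re.match(r"^(INT|EXT)\.", ln.strip())]
    # All-caps lines other than scene headings: a rough proxy for character cues.
    dialogue_cues = [
        ln for ln in lines
        if ln.strip().isupper() and not re.match(r"^(INT|EXT)\.", ln.strip())
    ]
    n_words = sum(len(ln.split()) for ln in lines)
    return {
        "n_scenes": len(scene_headings),
        "dialogue_cue_ratio": len(dialogue_cues) / max(len(lines), 1),
        "log_length": math.log(max(n_words, 1)),  # scripts are long; log scale helps
    }

def nomination_score(feats: dict, weights: dict, bias: float = 0.0) -> float:
    """Toy logistic scorer standing in for the paper's 'common classification
    techniques'; in practice the weights would be learned from labeled scripts."""
    z = bias + sum(weights.get(name, 0.0) * value for name, value in feats.items())
    return 1.0 / (1.0 + math.exp(-z))  # probability-like score in (0, 1)
```

In the paper itself, the positive class is scripts nominated at major film awards; this sketch only illustrates the feature-then-classify structure, not the actual features or training procedure.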
Related papers
- Movie101v2: Improved Movie Narration Benchmark [53.54176725112229]
We develop a large-scale, bilingual movie narration dataset, Movie101v2.
Taking into account the essential difficulties in achieving applicable movie narration, we break the long-term goal into three progressive stages.
Our findings reveal that achieving applicable movie narration generation is a fascinating goal that requires thorough research.
arXiv Detail & Related papers (2024-04-20T13:15:27Z)
- Select and Summarize: Scene Saliency for Movie Script Summarization [11.318175666743656]
We introduce a scene saliency dataset that consists of human-annotated salient scenes for 100 movies.
We propose a two-stage abstractive summarization approach which first identifies the salient scenes in a script and then generates a summary using only those scenes.
arXiv Detail & Related papers (2024-04-04T16:16:53Z)
- MULTISCRIPT: Multimodal Script Learning for Supporting Open Domain Everyday Tasks [28.27986773292919]
We present a new benchmark challenge -- MultiScript.
For both tasks, the input consists of a target task name and a video illustrating what has been done to complete the target task.
The expected output is (1) a sequence of structured step descriptions in text based on the demonstration video, and (2) a single text description for the subsequent step.
arXiv Detail & Related papers (2023-10-08T01:51:17Z)
- MovieFactory: Automatic Movie Creation from Text using Large Generative Models for Language and Images [92.13079696503803]
We present MovieFactory, a framework to generate cinematic-picture (3072$\times$1280), film-style (multi-scene), and multi-modality (sounding) movies.
Our approach empowers users to create captivating movies with smooth transitions using simple text inputs.
arXiv Detail & Related papers (2023-06-12T17:31:23Z)
- Movie101: A New Movie Understanding Benchmark [47.24519006577205]
We construct a large-scale Chinese movie benchmark, named Movie101.
We propose a new metric called Movie Narration Score (MNScore) for movie narrating evaluation.
For both tasks, our proposed methods leverage external knowledge well and outperform carefully designed baselines.
arXiv Detail & Related papers (2023-05-20T08:43:51Z)
- Movie Genre Classification by Language Augmentation and Shot Sampling [20.119729119879466]
We propose a Movie genre Classification method based on Language augmentatIon and shot samPling (Movie-CLIP).
Movie-CLIP mainly consists of two parts: a language augmentation module to recognize language elements from the input audio, and a shot sampling module to select representative shots from the entire video.
We evaluate our method on the MovieNet and Condensed Movies datasets, achieving an approximately 6-9% improvement in mean Average Precision (mAP) over the baselines.
arXiv Detail & Related papers (2022-03-24T18:15:12Z)
- VScript: Controllable Script Generation with Audio-Visual Presentation [56.17400243061659]
VScript is a controllable pipeline that generates complete scripts including dialogues and scene descriptions.
We adopt a hierarchical structure, which generates the plot, then the script and its audio-visual presentation.
Experiment results show that our approach outperforms the baselines on both automatic and human evaluations.
arXiv Detail & Related papers (2022-03-01T09:43:02Z)
- proScript: Partially Ordered Scripts Generation via Pre-trained Language Models [49.03193243699244]
We demonstrate for the first time that pre-trained neural language models (LMs) can be finetuned to generate high-quality scripts.
We collected a large (6.4k) crowdsourced dataset of partially ordered scripts (named proScript).
Our experiments show that our models perform well (e.g., F1=75.7 in task (i)), illustrating a new approach to overcoming previous barriers to script collection.
arXiv Detail & Related papers (2021-04-16T17:35:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.