Once More, With Feeling: Measuring Emotion of Acting Performances in Contemporary American Film
- URL: http://arxiv.org/abs/2411.10018v1
- Date: Fri, 15 Nov 2024 07:53:02 GMT
- Title: Once More, With Feeling: Measuring Emotion of Acting Performances in Contemporary American Film
- Authors: Naitian Zhou, David Bamman
- Abstract summary: We apply speech emotion recognition models to a corpus of popular, contemporary American film.
We find narrative structure, diachronic shifts, and genre- and dialogue-based constraints located in spoken performances.
- Abstract: Narrative film is a composition of writing, cinematography, editing, and performance. While much computational work has focused on the writing or visual style in film, we conduct in this paper a computational exploration of acting performance. Applying speech emotion recognition models and a variationist sociolinguistic analytical framework to a corpus of popular, contemporary American film, we find narrative structure, diachronic shifts, and genre- and dialogue-based constraints located in spoken performances.
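The paper's narrative-structure finding rests on aggregating utterance-level emotion predictions over a film's runtime. As an illustrative sketch only (not the authors' code), once a speech emotion recognition model has labeled each utterance, the diachronic profile can be computed by binning labels by relative position in the film; the function and data below are hypothetical:

```python
from collections import Counter

def emotion_by_act(utterances, n_bins=3):
    """Bin utterance-level emotion labels by relative position in the
    film and return per-bin emotion proportions.

    `utterances` is a list of (relative_position, emotion_label) pairs,
    with relative_position in [0, 1).
    """
    bins = [Counter() for _ in range(n_bins)]
    for pos, emotion in utterances:
        # Clamp to the last bin so pos == 1.0 does not overflow.
        bins[min(int(pos * n_bins), n_bins - 1)][emotion] += 1
    profiles = []
    for counts in bins:
        total = sum(counts.values())
        profiles.append({e: c / total for e, c in counts.items()} if total else {})
    return profiles

# Toy example: anger concentrated late in the film.
labels = [(0.1, "neutral"), (0.2, "happy"), (0.5, "neutral"),
          (0.8, "angry"), (0.9, "angry"), (0.95, "sad")]
profiles = emotion_by_act(labels)
```

Comparing such profiles across genres or decades is one way the dialogue- and genre-based constraints the abstract mentions could surface.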
Related papers
- Are Large Language Models Capable of Generating Human-Level Narratives?
This paper investigates the capability of LLMs in storytelling, focusing on narrative development and plot progression.
We introduce a novel computational framework to analyze narratives through three discourse-level aspects.
We show that explicit integration of discourse features can enhance storytelling, as demonstrated by an over 40% improvement in neural storytelling.
arXiv Detail & Related papers (2024-07-18T08:02:49Z)
- Dynamic Typography: Bringing Text to Life via Video Diffusion Prior
We present an automated text animation scheme, termed "Dynamic Typography".
It deforms letters to convey semantic meaning and infuses them with vibrant movements based on user prompts.
Our technique harnesses vector graphics representations and an end-to-end optimization-based framework.
arXiv Detail & Related papers (2024-04-17T17:59:55Z)
- How you feelin'? Learning Emotions and Mental States in Movie Scenes
We formulate emotion understanding as predicting a diverse and multi-label set of emotions at the level of a movie scene.
EmoTx is a multimodal Transformer-based architecture that ingests videos, multiple characters, and dialog utterances to make joint predictions.
arXiv Detail & Related papers (2023-04-12T06:31:14Z)
- Co-Writing Screenplays and Theatre Scripts with Language Models: An Evaluation by Industry Professionals
Dramatron generates coherent scripts and screenplays with title, characters, story beats, location descriptions, and dialogue.
We show Dramatron's usefulness as an interactive co-creative system with a user study of 15 theatre and film industry professionals.
We discuss the suitability of Dramatron for co-creativity, ethical considerations -- including plagiarism and bias -- and participatory models for the design and deployment of such tools.
arXiv Detail & Related papers (2022-09-29T17:26:22Z)
- TVShowGuess: Character Comprehension in Stories as Speaker Guessing
We propose a new task for assessing machines' skills of understanding fictional characters in narrative stories.
The task, TVShowGuess, builds on the scripts of TV series and takes the form of guessing the anonymous main characters based on the backgrounds of the scenes and the dialogues.
Our human study supports that this form of task covers comprehension of multiple types of character persona, including characters' personalities, facts, and memories of personal experience.
arXiv Detail & Related papers (2022-04-16T05:15:04Z)
- ViNTER: Image Narrative Generation with Emotion-Arc-Aware Transformer
We propose a model called ViNTER to generate image narratives that focus on time series representing varying emotions as "emotion arcs".
We present experimental results of both manual and automatic evaluations.
arXiv Detail & Related papers (2022-02-15T10:53:08Z)
- Discourse Analysis for Evaluating Coherence in Video Paragraph Captions
We explore a novel discourse-based framework to evaluate the coherence of video paragraphs.
Central to our approach is the discourse representation of videos, which helps in modeling coherence of paragraphs conditioned on coherence of videos.
Our experimental results show that the proposed framework evaluates the coherence of video paragraphs significantly better than all baseline methods.
arXiv Detail & Related papers (2022-01-17T04:23:08Z)
- Film Trailer Generation via Task Decomposition
We model movies as graphs, where nodes are shots and edges denote semantic relations between them.
We learn these relations using joint contrastive training which leverages privileged textual information from screenplays.
An unsupervised algorithm then traverses the graph and generates trailers that human judges prefer to ones generated by competitive supervised approaches.
arXiv Detail & Related papers (2021-11-16T20:50:52Z)
- Collaborative Storytelling with Human Actors and AI Narrators
We report on using GPT-3 (Brown et al., 2020) to co-narrate stories.
The AI system must track plot progression and character arcs while the human actors perform scenes.
arXiv Detail & Related papers (2021-09-29T21:21:35Z)
- Affect2MM: Affective Analysis of Multimedia Content Using Emotion Causality
We present Affect2MM, a learning method for time-series emotion prediction for multimedia content.
Our goal is to automatically capture the varying emotions depicted by characters in real-life human-centric situations and behaviors.
arXiv Detail & Related papers (2021-03-11T09:07:25Z)
- Fine-grained Emotion and Intent Learning in Movie Dialogues
We propose a novel large-scale emotional dialogue dataset, consisting of 1M dialogues retrieved from the OpenSubtitles corpus.
This work explains the complex pipeline used to preprocess movie subtitles and select good movie dialogues to annotate.
Emotional dialogue classification at this scale, in both dataset size and the granularity of emotion and intent categories, has not been attempted before.
arXiv Detail & Related papers (2020-12-25T20:29:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.