What makes for an enjoyable protagonist? An analysis of character warmth and competence
- URL: http://arxiv.org/abs/2601.06658v1
- Date: Sat, 10 Jan 2026 19:21:59 GMT
- Title: What makes for an enjoyable protagonist? An analysis of character warmth and competence
- Authors: Hannes Rosenbusch
- Abstract summary: Using 2,858 films and series from the Movie Scripts Corpus, we identified protagonists via AI-assisted annotation and quantified their warmth and competence. Preregistered Bayesian regression analyses revealed theory-consistent but small associations between both warmth and competence and audience ratings. Male protagonists were slightly less warm than female protagonists, and movies with male leads received higher ratings on average.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Drawing on psychological and literary theory, we investigated whether the warmth and competence of movie protagonists predict IMDb ratings, and whether these effects vary across genres. Using 2,858 films and series from the Movie Scripts Corpus, we identified protagonists via AI-assisted annotation and quantified their warmth and competence with the LLM_annotate package ([1]; human-LLM agreement: r = .83). Preregistered Bayesian regression analyses revealed theory-consistent but small associations between both warmth and competence and audience ratings, while genre-specific interactions did not meaningfully improve predictions. Male protagonists were slightly less warm than female protagonists, and movies with male leads received higher ratings on average (an association that was multiple times stronger than the relationships between movie ratings and warmth/competence). These findings suggest that, although audiences tend to favor warm, competent characters, the effects on movie evaluations are modest, indicating that character personality is only one of many factors shaping movie ratings. AI-assisted annotation with LLM_annotate and gpt-4.1-mini proved effective for large-scale analyses but occasionally fell short of manually generated annotations.
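The reported human-LLM agreement (r = .83) is a correlation between human and model annotations. A minimal sketch of such a check, computing Pearson's r in plain Python over made-up warmth ratings (the data here are hypothetical, not the paper's actual annotations):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length rating lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical warmth ratings (1-7 scale) for five protagonists,
# annotated once by a human and once by an LLM.
human = [6, 2, 5, 4, 7]
llm   = [6, 3, 5, 5, 7]
agreement = pearson_r(human, llm)
```

A value near 1 indicates the model's annotations track the human ones closely; in practice one would compute this over held-out, manually annotated scripts.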
Related papers
- Capturing Differences in Character Representations Between Communities: An Initial Study with Fandom [0.0]
This working paper focuses on the re-interpretation of characters, an integral part of the narrative story-world.
Using online fandom as data, computational methods were applied to explore shifts in character representations between two communities.
arXiv Detail & Related papers (2024-09-17T13:24:29Z) - Multi-channel Emotion Analysis for Consensus Reaching in Group Movie Recommendation Systems [0.0]
This paper proposes a novel approach to group movie suggestions by examining emotions from three different channels.
We employ the Jaccard similarity index to match each participant's emotional preferences to prospective movie choices.
The group's consensus level is calculated using a fuzzy inference system.
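The Jaccard similarity index mentioned above is the size of the intersection of two sets divided by the size of their union. A minimal sketch, using made-up emotion tags standing in for the paper's actual preference channels:

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |A ∩ B| / |A ∪ B|; defined as 1.0 for two empty sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical emotion tags for one participant and one candidate movie.
participant = {"joy", "surprise", "anticipation"}
movie       = {"joy", "anticipation", "fear"}
score = jaccard(participant, movie)  # 2 shared / 4 total = 0.5
```

Ranking candidate movies by this score against each participant's tag set is one straightforward way to realize the matching step described above.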
arXiv Detail & Related papers (2024-04-21T21:19:31Z) - PsyCoT: Psychological Questionnaire as Powerful Chain-of-Thought for Personality Detection [50.66968526809069]
We propose a novel personality detection method, called PsyCoT, which mimics the way individuals complete psychological questionnaires in a multi-turn dialogue manner.
Our experiments demonstrate that PsyCoT significantly improves the performance and robustness of GPT-3.5 in personality detection.
arXiv Detail & Related papers (2023-10-31T08:23:33Z) - InCharacter: Evaluating Personality Fidelity in Role-Playing Agents through Psychological Interviews [57.04431594769461]
This paper introduces a novel perspective to evaluate the personality fidelity of RPAs with psychological scales.
Experiments include various types of RPAs and LLMs, covering 32 distinct characters on 14 widely used psychological scales.
With InCharacter, we show that state-of-the-art RPAs exhibit personalities highly aligned with the human-perceived personalities of the characters, achieving an accuracy of up to 80.7%.
arXiv Detail & Related papers (2023-10-27T08:42:18Z) - Using Natural Language Explanations to Rescale Human Judgments [81.66697572357477]
We propose a method to rescale ordinal annotations and explanations using large language models (LLMs).
We feed annotators' Likert ratings and corresponding explanations into an LLM and prompt it to produce a numeric score anchored in a scoring rubric.
Our method rescales the raw judgments without impacting agreement and brings the scores closer to human judgments grounded in the same scoring rubric.
arXiv Detail & Related papers (2023-05-24T06:19:14Z) - Understanding How People Rate Their Conversations [73.17730062864314]
We conduct a study to better understand how people rate their interactions with conversational agents.
We focus on agreeableness and extraversion as variables that may explain variation in ratings.
arXiv Detail & Related papers (2022-06-01T00:45:32Z) - Finding the Right Moment: Human-Assisted Trailer Creation via Task Composition [63.842627949509414]
We focus on finding trailer moments in a movie, i.e., shots that could be potentially included in a trailer.
We model movies as graphs, where nodes are shots and edges denote semantic relations between them.
An unsupervised algorithm then traverses the graph and selects trailer moments from the movie that human judges prefer to ones selected by competitive supervised approaches.
Our tool allows users to select trailer shots in under 30 minutes that are superior to fully automatic methods and comparable to (exclusive) manual selection by experts.
arXiv Detail & Related papers (2021-11-16T20:50:52Z) - Affect2MM: Affective Analysis of Multimedia Content Using Emotion Causality [84.69595956853908]
We present Affect2MM, a learning method for time-series emotion prediction for multimedia content.
Our goal is to automatically capture the varying emotions depicted by characters in real-life human-centric situations and behaviors.
arXiv Detail & Related papers (2021-03-11T09:07:25Z) - Analyzing Gender Bias within Narrative Tropes [25.33293687534074]
We specifically investigate gender bias within a large collection of tropes.
To enable our study, we crawl tvtropes.org, an online user-created repository that contains 30K tropes associated with 1.9M examples of their occurrences across film, television, and literature.
We automatically score the "genderedness" of each trope in our TVTROPES dataset, which enables an analysis of (1) highly-gendered topics within tropes, (2) the relationship between gender bias and popular reception, and (3) how the gender of a work's creator correlates with the types of tropes that they use.
arXiv Detail & Related papers (2020-10-30T20:26:41Z) - Computational appraisal of gender representativeness in popular movies [0.0]
This article illustrates how automated computational methods may be used to scale up such empirical observations.
We specifically apply a face and gender detection algorithm on a broad set of popular movies spanning more than three decades to carry out a large-scale appraisal of the on-screen presence of women and men.
arXiv Detail & Related papers (2020-09-16T13:15:11Z) - Victim or Perpetrator? Analysis of Violent Characters Portrayals from
Movie Scripts [37.32711420774085]
Violent content in the media can influence viewers' perception of society.
We propose that computational methods can aid in the large-scale analysis of violence in movies.
arXiv Detail & Related papers (2020-08-19T02:18:53Z) - Measuring Female Representation and Impact in Films over Time [78.5821575986965]
Women have historically been underrepresented in movies, and only recently has their representation improved.
We propose a new measure, the female cast ratio, and compare it to the commonly used Bechdel test result.
arXiv Detail & Related papers (2020-01-10T15:29:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.