Learning Similarity between Movie Characters and Its Potential
Implications on Understanding Human Experiences
- URL: http://arxiv.org/abs/2010.12183v2
- Date: Wed, 12 May 2021 06:35:18 GMT
- Title: Learning Similarity between Movie Characters and Its Potential
Implications on Understanding Human Experiences
- Authors: Zhilin Wang, Weizhe Lin, Xiaodong Wu
- Abstract summary: We propose a new task to capture this richness based on an unlikely setting: movie characters.
We sought to capture theme-level similarities between movie characters that were community-curated into 20,000 themes.
- Score: 7.1282254016123305
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While many different aspects of human experiences have been studied by the
NLP community, none has captured their full richness. We propose a new task to
capture this richness based on an unlikely setting: movie characters. We sought
to capture theme-level similarities between movie characters that were
community-curated into 20,000 themes. By introducing a two-step approach that
balances performance and efficiency, we achieved a 9-27% improvement over
recent paragraph-embedding based methods. Finally, we demonstrate how the
thematic information learnt from movie characters can potentially be used to
understand themes in people's experiences, as expressed in Reddit posts.
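The abstract does not spell out the two-step approach, so the following is only a minimal sketch of the general pattern such a performance/efficiency trade-off usually takes: a cheap paragraph-embedding retrieval pass over all characters, followed by a more expensive rescoring pass over the shortlist. The model names, the characters dictionary, and the similar_characters helper below are illustrative assumptions, not the authors' implementation.

```python
# Sketch: two-step (retrieve-then-rerank) character similarity.
# Assumes the sentence-transformers library; models are placeholders.
from sentence_transformers import SentenceTransformer, CrossEncoder, util

bi_encoder = SentenceTransformer("all-MiniLM-L6-v2")          # fast paragraph embeddings
cross_encoder = CrossEncoder("cross-encoder/stsb-roberta-base")  # slower, more accurate scorer

# Hypothetical corpus: character name -> free-text description of the character.
characters = {
    "Andy Dufresne": "A quiet banker wrongly convicted who slowly plans his escape...",
    "Forrest Gump": "A kind-hearted man who stumbles through major historical events...",
}

names = list(characters)
corpus_embeddings = bi_encoder.encode([characters[n] for n in names], convert_to_tensor=True)

def similar_characters(query_description, top_k=50, final_k=5):
    # Step 1: cheap bi-encoder retrieval of a candidate shortlist.
    query_embedding = bi_encoder.encode(query_description, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=top_k)[0]
    # Step 2: cross-encoder rescoring of only the shortlisted pairs.
    pairs = [(query_description, characters[names[h["corpus_id"]]]) for h in hits]
    scores = cross_encoder.predict(pairs)
    reranked = sorted(zip(hits, scores), key=lambda x: x[1], reverse=True)
    return [(names[h["corpus_id"]], float(s)) for h, s in reranked[:final_k]]
```

The design choice mirrors the stated trade-off: the embedding step keeps the search over a large character set efficient, while the rescoring step spends compute only on the few candidates that matter.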
Related papers
- Are Large Language Models Capable of Generating Human-Level Narratives? [114.34140090869175]
This paper investigates the capability of LLMs in storytelling, focusing on narrative development and plot progression.
We introduce a novel computational framework to analyze narratives through three discourse-level aspects.
We show that explicit integration of discourse features can enhance storytelling, as demonstrated by an over 40% improvement in neural storytelling.
arXiv Detail & Related papers (2024-07-18T08:02:49Z)
- HEART-felt Narratives: Tracing Empathy and Narrative Style in Personal Stories with LLMs [30.636456219922906]
Empathy serves as a cornerstone in enabling prosocial behaviors, and can be evoked through sharing of personal experiences in stories.
While empathy is influenced by narrative content, intuitively, people respond to the way a story is told as well, through narrative style.
We empirically examine and quantify this relationship between style and empathy using LLMs and large-scale crowdsourcing studies.
arXiv Detail & Related papers (2024-05-27T20:00:38Z)
- Personality Understanding of Fictional Characters during Book Reading [81.68515671674301]
We present the first labeled dataset PersoNet for this problem.
Our novel annotation strategy involves annotating user notes from online reading apps as a proxy for the original books.
Experiments and human studies indicate that our dataset construction is both efficient and accurate.
arXiv Detail & Related papers (2023-05-17T12:19:11Z)
- Few-Shot Character Understanding in Movies as an Assessment to Meta-Learning of Theory-of-Mind [47.13015852330866]
Humans can quickly understand new fictional characters with a few observations, mainly by drawing analogies to fictional and real people they already know.
This reflects the few-shot and meta-learning essence of humans' inference of characters' mental states, i.e., theory-of-mind (ToM).
We fill this gap with a novel NLP dataset, ToM-in-AMC, the first assessment of machines' meta-learning of ToM in a realistic narrative understanding scenario.
arXiv Detail & Related papers (2022-11-09T05:06:12Z)
- On Negative Sampling for Audio-Visual Contrastive Learning from Movies [12.967364755951722]
We study the efficacy of audio-visual self-supervised learning from uncurated long-form content, i.e., movies.
Our empirical findings suggest that, with certain modifications, training on uncurated long-form videos yields representations which transfer competitively with the state-of-the-art.
arXiv Detail & Related papers (2022-04-29T20:36:13Z)
- TVShowGuess: Character Comprehension in Stories as Speaker Guessing [23.21452223968301]
We propose a new task for assessing machines' skills of understanding fictional characters in narrative stories.
The task, TVShowGuess, builds on the scripts of TV series and takes the form of guessing the anonymous main characters based on the backgrounds of the scenes and the dialogues.
Our human study shows that this form of task covers comprehension of multiple types of character persona, including characters' personalities, facts, and memories of personal experience.
arXiv Detail & Related papers (2022-04-16T05:15:04Z)
- Film Trailer Generation via Task Decomposition [65.16768855902268]
We model movies as graphs, where nodes are shots and edges denote semantic relations between them.
We learn these relations using joint contrastive training which leverages privileged textual information from screenplays.
An unsupervised algorithm then traverses the graph and generates trailers that human judges prefer to ones generated by competitive supervised approaches.
arXiv Detail & Related papers (2021-11-16T20:50:52Z)
- "Let Your Characters Tell Their Story": A Dataset for Character-Centric Narrative Understanding [31.803481510886378]
We present LiSCU -- a new dataset of literary pieces and their summaries paired with descriptions of characters that appear in them.
We also introduce two new tasks on LiSCU: Character Identification and Character Description Generation.
Our experiments with several pre-trained language models adapted for these tasks demonstrate that there is a need for better models of narrative comprehension.
arXiv Detail & Related papers (2021-09-12T06:12:55Z)
- Affect2MM: Affective Analysis of Multimedia Content Using Emotion Causality [84.69595956853908]
We present Affect2MM, a learning method for time-series emotion prediction for multimedia content.
Our goal is to automatically capture the varying emotions depicted by characters in real-life human-centric situations and behaviors.
arXiv Detail & Related papers (2021-03-11T09:07:25Z)
- What Can You Learn from Your Muscles? Learning Visual Representation from Human Interactions [50.435861435121915]
We use human interaction and attention cues to investigate whether we can learn better representations compared to visual-only representations.
Our experiments show that our "muscly-supervised" representation outperforms a visual-only state-of-the-art method MoCo.
arXiv Detail & Related papers (2020-10-16T17:46:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.