A Blast From the Past: Personalizing Predictions of Video-Induced
Emotions using Personal Memories as Context
- URL: http://arxiv.org/abs/2008.12096v1
- Date: Thu, 27 Aug 2020 13:06:10 GMT
- Title: A Blast From the Past: Personalizing Predictions of Video-Induced
Emotions using Personal Memories as Context
- Authors: Bernd Dudzik, Joost Broekens, Mark Neerincx, Hayley Hung
- Abstract summary: We show that automatic analysis of text describing viewers' video-triggered memories can account for variation in their emotional responses.
We discuss the relevance of these findings for improving on state-of-the-art approaches to automated affective video analysis in personalized contexts.
- Score: 5.1314912554605066
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A key challenge in the accurate prediction of viewers' emotional responses to
video stimuli in real-world applications is accounting for person- and
situation-specific variation. An important contextual influence shaping
individuals' subjective experience of a video is the personal memories that it
triggers in them. Prior research has found that this memory influence explains
more variation in video-induced emotions than other contextual variables
commonly used for personalizing predictions, such as viewers' demographics or
personality. In this article, we show that (1) automatic analysis of text
describing their video-triggered memories can account for variation in viewers'
emotional responses, and (2) that combining such an analysis with that of a
video's audiovisual content enhances the accuracy of automatic predictions. We
discuss the relevance of these findings for improving on state-of-the-art approaches to automated affective video analysis in personalized contexts.
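To make finding (2) concrete, below is a minimal late-fusion sketch in Python. It is not the authors' implementation: the feature extractors, dimensionalities, regressor choice, and the random placeholder data are all illustrative assumptions. In practice, the text features would come from embedding each viewer's memory description with a sentence encoder, and the audiovisual features from standard video and audio descriptors.

```python
# Minimal sketch (not the paper's code): late fusion of memory-description
# text features with audiovisual video features to predict per-viewer
# valence/arousal. All dimensions and data below are placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_responses = 500          # one row per (viewer, video) response
d_text, d_av = 384, 128    # assumed text-embedding and audiovisual dims

# Placeholder features; with real data these would be sentence embeddings
# of memory descriptions and extracted audiovisual descriptors.
X_text = rng.normal(size=(n_responses, d_text))
X_av = rng.normal(size=(n_responses, d_av))
y = rng.normal(size=(n_responses, 2))   # self-reported valence, arousal

def fit_and_score(X, y):
    """Fit a ridge regressor and return held-out R^2."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    return Ridge(alpha=1.0).fit(X_tr, y_tr).score(X_te, y_te)

# With real features, comparing these scores mirrors the paper's comparison;
# on random placeholders the numbers themselves are meaningless.
print("audiovisual only:", fit_and_score(X_av, y))
print("memory text only:", fit_and_score(X_text, y))
print("fused (text + audiovisual):", fit_and_score(np.hstack([X_text, X_av]), y))
```

Concatenation-based late fusion is only one option; the point of the sketch is that adding memory-text features alongside the audiovisual ones gives the model access to the person-specific context the paper identifies.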
Related papers
- Modelling Emotions in Face-to-Face Setting: The Interplay of Eye-Tracking, Personality, and Temporal Dynamics [1.4645774851707578]
In this study, we showcase how integrating eye-tracking data, temporal dynamics, and personality traits can substantially enhance the detection of both perceived and felt emotions.
Our findings inform the design of future affective computing and human-agent systems.
arXiv Detail & Related papers (2025-03-18T13:15:32Z)
- SPECTRUM: Semantic Processing and Emotion-informed video-Captioning Through Retrieval and Understanding Modalities [0.7510165488300369]
This paper proposes a novel Semantic Processing and Emotion-informed video-Captioning Through Retrieval and Understanding Modalities (SPECTRUM) framework.
SPECTRUM discerns multimodal semantics and emotional themes using Visual Text Attribute Investigation (VTAI) and determines the orientation of descriptive captions.
The framework exploits video-to-text retrieval capabilities and the multifaceted nature of video content to estimate the emotional probabilities of candidate captions.
arXiv Detail & Related papers (2024-11-04T10:51:47Z)
- Dynamic Causal Disentanglement Model for Dialogue Emotion Detection [77.96255121683011]
We propose a Dynamic Causal Disentanglement Model based on hidden variable separation.
This model effectively decomposes the content of dialogues and investigates the temporal accumulation of emotions.
Specifically, we propose a dynamic temporal disentanglement model to infer the propagation of utterances and hidden variables.
arXiv Detail & Related papers (2023-09-13T12:58:09Z)
- How Would The Viewer Feel? Estimating Wellbeing From Video Scenarios [73.24092762346095]
We introduce two large-scale datasets with over 60,000 videos annotated for emotional response and subjective wellbeing.
The Video Cognitive Empathy dataset contains annotations for distributions of fine-grained emotional responses, allowing models to gain a detailed understanding of affective states.
The Video to Valence dataset contains annotations of relative pleasantness between videos, which enables predicting a continuous spectrum of wellbeing.
arXiv Detail & Related papers (2022-10-18T17:58:25Z)
- Affection: Learning Affective Explanations for Real-World Visual Data [50.28825017427716]
We introduce and share with the research community a large-scale dataset that contains emotional reactions and free-form textual explanations for 85,007 publicly available images.
We show that there is significant common ground to capture potentially plausible emotional responses with a large support in the subject population.
Our work paves the way for richer, more human-centric, and emotionally-aware image analysis systems.
arXiv Detail & Related papers (2022-10-04T22:44:17Z)
- Predicting emotion from music videos: exploring the relative contribution of visual and auditory information to affective responses [0.0]
We present MusicVideos (MuVi), a novel dataset for affective multimedia content analysis.
The data were collected by presenting music videos to participants in three conditions: music, visual, and audiovisual.
arXiv Detail & Related papers (2022-02-19T07:36:43Z)
- Audio-Driven Emotional Video Portraits [79.95687903497354]
We present Emotional Video Portraits (EVP), a system for synthesizing high-quality video portraits with vivid emotional dynamics driven by audio.
Specifically, we propose the Cross-Reconstructed Emotion Disentanglement technique to decompose speech into two decoupled spaces.
With the disentangled features, dynamic 2D emotional facial landmarks can be deduced.
Then we propose the Target-Adaptive Face Synthesis technique to generate the final high-quality video portraits.
arXiv Detail & Related papers (2021-04-15T13:37:13Z)
- Affect2MM: Affective Analysis of Multimedia Content Using Emotion Causality [84.69595956853908]
We present Affect2MM, a learning method for time-series emotion prediction for multimedia content.
Our goal is to automatically capture the varying emotions depicted by characters in real-life human-centric situations and behaviors.
arXiv Detail & Related papers (2021-03-11T09:07:25Z)
- Emotion pattern detection on facial videos using functional statistics [62.997667081978825]
We propose a technique based on Functional ANOVA to extract significant patterns of facial muscle movements.
We determine whether there are time-related differences in expressions among emotional groups by using a functional F-test.
arXiv Detail & Related papers (2021-03-01T08:31:08Z)
- AttendAffectNet: Self-Attention based Networks for Predicting Affective Responses from Movies [16.45955178108593]
We propose different variants of the self-attention based network for emotion prediction from movies, which we call AttendAffectNet.
We take both audio and video into account and incorporate the relation among multiple modalities by applying a self-attention mechanism in a novel manner to the extracted features for emotion prediction.
Our results show that applying the self-attention mechanism on the different audio-visual features, rather than in the time domain, is more effective for emotion prediction (a minimal sketch of this idea follows the list below).
arXiv Detail & Related papers (2020-10-21T05:13:24Z)
- Context Based Emotion Recognition using EMOTIC Dataset [22.631542327834595]
We present EMOTIC, a dataset of images of people annotated with their apparent emotion.
Using the EMOTIC dataset we train different CNN models for emotion recognition.
Our results show how scene context provides important information to automatically recognize emotional states.
arXiv Detail & Related papers (2020-03-30T12:38:50Z)
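As referenced in the AttendAffectNet entry above, the following is a minimal, assumption-laden sketch of applying self-attention across modality-level feature vectors (one token per audio or visual descriptor) rather than across time steps. The class name, feature dimensions, and output targets are hypothetical; this is not the released AttendAffectNet code.

```python
# Minimal sketch: self-attention over modality feature tokens instead of
# over time steps. Dimensions and names are illustrative assumptions.
import torch
import torch.nn as nn

class ModalityAttentionRegressor(nn.Module):
    def __init__(self, feature_dims, d_model=128, n_heads=4, n_outputs=2):
        super().__init__()
        # Project each modality's feature vector into a shared space.
        self.projections = nn.ModuleList(
            [nn.Linear(d, d_model) for d in feature_dims]
        )
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, n_outputs)  # e.g. valence and arousal

    def forward(self, features):
        # features: list of (batch, d_i) tensors, one per audio/visual descriptor.
        tokens = torch.stack(
            [proj(f) for proj, f in zip(self.projections, features)], dim=1
        )  # -> (batch, n_modalities, d_model)
        attended, _ = self.attn(tokens, tokens, tokens)  # attention across modalities
        return self.head(attended.mean(dim=1))           # pool modalities, predict

# Illustrative dimensions for three audio-visual descriptors.
model = ModalityAttentionRegressor(feature_dims=[128, 512, 64])
batch = [torch.randn(8, d) for d in (128, 512, 64)]
print(model(batch).shape)  # torch.Size([8, 2])
```

Treating each descriptor as a token lets the attention layer weight modalities against one another for a given clip, which is the design choice the paper's results favour over attention in the time domain.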
This list is automatically generated from the titles and abstracts of the papers on this site.