A Circular-Structured Representation for Visual Emotion Distribution
Learning
- URL: http://arxiv.org/abs/2106.12450v1
- Date: Wed, 23 Jun 2021 14:53:27 GMT
- Title: A Circular-Structured Representation for Visual Emotion Distribution
Learning
- Authors: Jingyuan Yang, Jie Li, Leida Li, Xiumei Wang, and Xinbo Gao
- Abstract summary: We propose a well-grounded circular-structured representation to utilize the prior knowledge for visual emotion distribution learning.
To be specific, we first construct an Emotion Circle to unify any emotional state within it.
On the proposed Emotion Circle, each emotion distribution is represented with an emotion vector, which is defined with three attributes.
- Score: 82.89776298753661
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visual Emotion Analysis (VEA) has attracted increasing attention recently
with the prevalence of sharing images on social networks. Since human emotions
are ambiguous and subjective, it is more reasonable to address VEA in a label
distribution learning (LDL) paradigm rather than a single-label classification
task. Different from other LDL tasks, there exist intrinsic relationships
between emotions and unique characteristics within them, as demonstrated in
psychological theories. Inspired by this, we propose a well-grounded
circular-structured representation to utilize the prior knowledge for visual
emotion distribution learning. To be specific, we first construct an Emotion
Circle to unify any emotional state within it. On the proposed Emotion Circle,
each emotion distribution is represented with an emotion vector, which is
defined with three attributes (i.e., emotion polarity, emotion type, emotion
intensity) as well as two properties (i.e., similarity, additivity). Besides,
we design a novel Progressive Circular (PC) loss to penalize the
dissimilarities between the predicted emotion vector and the labeled one in a
coarse-to-fine manner, which further boosts the learning process in an
emotion-specific way. Extensive experiments and comparisons are conducted on
public visual emotion distribution datasets, and the results demonstrate that
the proposed method outperforms the state-of-the-art methods.
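The abstract specifies the shape of the representation but not its equations, so the following is a minimal Python sketch of the idea, assuming a hypothetical layout of Mikels' eight emotion categories at fixed angles on a unit circle and illustrative loss weights; none of the numeric choices below are the paper's published formulation.

```python
# Minimal sketch of a circular emotion representation and a coarse-to-fine
# loss; angles, polarity axis, and weights are illustrative assumptions.
import numpy as np

EMOTIONS = ["amusement", "awe", "contentment", "excitement",
            "anger", "disgust", "fear", "sadness"]
# Hypothetical layout: the four positive emotions on the upper half of the
# circle, the four negative ones on the lower half.
ANGLES = np.pi / 8 + np.arange(len(EMOTIONS)) * np.pi / 4

def emotion_vector(dist):
    """Map a label distribution (sums to 1) to a 2-D vector on the circle.

    The vector's angle encodes emotion type, its vertical sign encodes
    polarity, and its length encodes intensity; additivity holds because
    the vector is the sum of per-emotion components.
    """
    dist = np.asarray(dist, dtype=float)
    return np.array([np.sum(dist * np.cos(ANGLES)),
                     np.sum(dist * np.sin(ANGLES))])

def progressive_circular_loss(pred_dist, true_dist, weights=(1.0, 1.0, 1.0)):
    """Coarse-to-fine penalty on polarity, type (angle), and intensity.

    A stand-in for the paper's PC loss with hypothetical weights.
    """
    p, t = emotion_vector(pred_dist), emotion_vector(true_dist)
    polarity = abs(np.sign(p[1]) - np.sign(t[1]))           # coarse: polarity flip
    angle = np.abs(np.arctan2(p[1], p[0]) - np.arctan2(t[1], t[0]))
    angle = min(angle, 2.0 * np.pi - angle)                 # medium: emotion type
    intensity = abs(np.linalg.norm(p) - np.linalg.norm(t))  # fine: intensity
    w1, w2, w3 = weights
    return w1 * polarity + w2 * angle + w3 * intensity

# Example: a distribution dominated by "awe" vs. one dominated by "fear".
print(progressive_circular_loss([0.1, 0.6, 0.1, 0.1, 0.04, 0.03, 0.02, 0.01],
                                [0.02, 0.03, 0.05, 0.1, 0.1, 0.1, 0.5, 0.1]))
```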
Related papers
- Where are We in Event-centric Emotion Analysis? Bridging Emotion Role
Labeling and Appraisal-based Approaches [10.736626320566707]
The term emotion analysis in text subsumes various natural language processing tasks.
We argue that emotions and events are related in two ways.
We discuss how to incorporate psychological appraisal theories in NLP models to interpret events.
arXiv Detail & Related papers (2023-09-05T09:56:29Z)
- Speech Synthesis with Mixed Emotions [77.05097999561298]
We propose a novel formulation that measures the relative difference between the speech samples of different emotions.
We then incorporate our formulation into a sequence-to-sequence emotional text-to-speech framework.
At run-time, we control the model to produce the desired emotion mixture by manually defining an emotion attribute vector.
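As a rough illustration of that control mechanism, here is a minimal sketch of blending emotion embeddings with a manually defined attribute vector; the embedding table, its dimensionality, and the normalisation step are assumptions, not the paper's model.

```python
# Sketch: mixing emotion embeddings via a user-defined attribute vector.
import numpy as np

rng = np.random.default_rng(0)
EMOTIONS = ["neutral", "happy", "sad", "angry", "surprise"]
# Hypothetical learned emotion embeddings; in the paper these would come
# from the trained sequence-to-sequence TTS model.
emotion_table = rng.normal(size=(len(EMOTIONS), 64))

def mix_emotions(attribute_vector):
    """Blend emotion embeddings with user-chosen relative weights."""
    w = np.asarray(attribute_vector, dtype=float)
    w = w / w.sum()              # normalise so the weights form a mixture
    return w @ emotion_table     # convex combination of embeddings

# E.g. 70% sad with 30% angry, which would then condition the decoder.
conditioning = mix_emotions([0.0, 0.0, 0.7, 0.3, 0.0])
print(conditioning.shape)  # (64,)
```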
arXiv Detail & Related papers (2022-08-11T15:45:58Z)
- Seeking Subjectivity in Visual Emotion Distribution Learning [93.96205258496697]
Visual Emotion Analysis (VEA) aims to predict people's emotions towards different visual stimuli.
Existing methods often predict the visual emotion distribution with a unified network, neglecting the inherent subjectivity of the crowd-voting process that produces the labels.
We propose a novel Subjectivity Appraise-and-Match Network (SAMNet) to investigate the subjectivity in visual emotion distribution.
arXiv Detail & Related papers (2022-07-25T02:20:03Z)
- SOLVER: Scene-Object Interrelated Visual Emotion Reasoning Network [83.27291945217424]
We propose a novel Scene-Object interreLated Visual Emotion Reasoning network (SOLVER) to predict emotions from images.
To mine the emotional relationships between distinct objects, we first build up an Emotion Graph based on semantic concepts and visual features.
We also design a Scene-Object Fusion Module to integrate scenes and objects, which exploits scene features to guide the fusion process of object features with the proposed scene-based attention mechanism.
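A minimal sketch of what scene-guided attention over object features could look like follows; the dimensions, linear projections, and final concatenation are assumptions rather than SOLVER's exact Scene-Object Fusion Module.

```python
# Sketch: a scene feature produces the attention query over object features.
import torch
import torch.nn as nn

class SceneObjectFusion(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.query = nn.Linear(dim, dim)   # scene feature -> attention query
        self.key = nn.Linear(dim, dim)     # object features -> attention keys
        self.scale = dim ** -0.5

    def forward(self, scene_feat, object_feats):
        """scene_feat: (B, D); object_feats: (B, N, D) for N detected objects."""
        q = self.query(scene_feat).unsqueeze(1)          # (B, 1, D)
        k = self.key(object_feats)                       # (B, N, D)
        attn = torch.softmax((q @ k.transpose(1, 2)) * self.scale, dim=-1)
        fused_objects = (attn @ object_feats).squeeze(1) # (B, D) scene-weighted
        return torch.cat([scene_feat, fused_objects], dim=-1)

fusion = SceneObjectFusion()
out = fusion(torch.randn(2, 256), torch.randn(2, 5, 256))
print(out.shape)  # torch.Size([2, 512])
```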
arXiv Detail & Related papers (2021-10-24T02:41:41Z)
- Stimuli-Aware Visual Emotion Analysis [75.68305830514007]
We propose a stimuli-aware visual emotion analysis (VEA) method consisting of three stages, namely stimuli selection, feature extraction and emotion prediction.
To the best of our knowledge, this is the first work to introduce a stimuli selection process into VEA in an end-to-end network.
Experiments demonstrate that the proposed method consistently outperforms the state-of-the-art approaches on four public visual emotion datasets.
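The summary names the three stages but not their internals, so the skeleton below only illustrates the end-to-end flow (stimuli selection, then feature extraction, then emotion prediction); every module choice is a placeholder, not the paper's architecture.

```python
# Sketch: three-stage stimuli-aware pipeline in a single end-to-end network.
import torch
import torch.nn as nn

class StimuliAwareVEA(nn.Module):
    def __init__(self, dim=128, num_emotions=8):
        super().__init__()
        # Stage 1: score candidate stimuli regions (soft selection).
        self.selector = nn.Linear(dim, 1)
        # Stage 2: extract features from the selected stimuli.
        self.extractor = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        # Stage 3: predict an emotion distribution.
        self.classifier = nn.Linear(dim, num_emotions)

    def forward(self, region_feats):          # (B, R, dim) candidate regions
        scores = torch.softmax(self.selector(region_feats), dim=1)  # (B, R, 1)
        selected = (scores * region_feats).sum(dim=1)               # (B, dim)
        return torch.softmax(self.classifier(self.extractor(selected)), dim=-1)

model = StimuliAwareVEA()
print(model(torch.randn(4, 10, 128)).shape)  # torch.Size([4, 8])
```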
arXiv Detail & Related papers (2021-09-04T08:14:52Z)
- Emotion Recognition under Consideration of the Emotion Component Process Model [9.595357496779394]
We use the emotion component process model (CPM) by Scherer (2005) to explain emotion communication.
The CPM states that emotions are a coordinated process of various subcomponents in reaction to an event, namely the subjective feeling, the cognitive appraisal, the expression, a physiological bodily reaction, and a motivational action tendency.
We find that emotions on Twitter are predominantly expressed by event descriptions or subjective reports of the feeling, while in literature, authors prefer to describe what characters do, and leave the interpretation to the reader.
arXiv Detail & Related papers (2021-07-27T15:53:25Z)
- Enhancing Cognitive Models of Emotions with Representation Learning [58.2386408470585]
We present a novel deep learning-based framework to generate embedding representations of fine-grained emotions.
Our framework integrates a contextualized embedding encoder with a multi-head probing model.
Our model is evaluated on the Empathetic Dialogue dataset and achieves state-of-the-art results for classifying 32 emotions.
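As a rough sketch of pairing a contextualized encoder with a multi-head probing model, the snippet below uses a frozen stand-in encoder and two hypothetical probe heads; it illustrates the architecture pattern, not the paper's implementation.

```python
# Sketch: multi-head probing on top of a frozen contextualized encoder.
import torch
import torch.nn as nn

class MultiHeadProbe(nn.Module):
    def __init__(self, hidden=768, num_emotions=32):
        super().__init__()
        # Stand-in for a pretrained contextualized encoder (e.g. BERT-style);
        # a frozen linear layer keeps the sketch self-contained.
        self.encoder = nn.Linear(300, hidden)
        self.encoder.requires_grad_(False)
        # Two probing heads read the same representation: one classifies the
        # emotion, the other probes a hypothetical auxiliary property.
        self.emotion_head = nn.Linear(hidden, num_emotions)
        self.aux_head = nn.Linear(hidden, 2)

    def forward(self, token_feats):                 # (B, T, 300) token features
        h = self.encoder(token_feats).mean(dim=1)   # pooled utterance embedding
        return self.emotion_head(h), self.aux_head(h)

probe = MultiHeadProbe()
logits, aux = probe(torch.randn(2, 12, 300))
print(logits.shape)  # torch.Size([2, 32])
```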
arXiv Detail & Related papers (2021-04-20T16:55:15Z)