Seeking Subjectivity in Visual Emotion Distribution Learning
- URL: http://arxiv.org/abs/2207.11875v1
- Date: Mon, 25 Jul 2022 02:20:03 GMT
- Title: Seeking Subjectivity in Visual Emotion Distribution Learning
- Authors: Jingyuan Yang, Jie Li, Leida Li, Xiumei Wang, Yuxuan Ding, and Xinbo
Gao
- Abstract summary: Visual Emotion Analysis (VEA) aims to predict people's emotions towards different visual stimuli.
Existing methods often predict visual emotion distribution in a unified network, neglecting the inherent subjectivity in its crowd voting process.
We propose a novel Subjectivity Appraise-and-Match Network (SAMNet) to investigate the subjectivity in visual emotion distribution.
- Score: 93.96205258496697
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visual Emotion Analysis (VEA), which aims to predict people's emotions
towards different visual stimuli, has become an attractive research topic
recently. Rather than a single-label classification task, it is more rational
to regard VEA as a Label Distribution Learning (LDL) problem, with labels
obtained by voting from different individuals. Existing methods often predict visual emotion
distribution in a unified network, neglecting the inherent subjectivity in its
crowd voting process. In psychology, the \textit{Object-Appraisal-Emotion}
model has demonstrated that each individual's emotion is affected by his/her
subjective appraisal, which is further formed by the affective memory. Inspired
by this, we propose a novel \textit{Subjectivity Appraise-and-Match Network
(SAMNet)} to investigate the subjectivity in visual emotion distribution. To
depict the diversity in the crowd voting process, we first propose
\textit{Subjectivity Appraising} with multiple branches, where each branch
simulates the emotion evocation process of a specific individual. Specifically,
we construct the affective memory with an attention-based mechanism to preserve
each individual's unique emotional experience. A subjectivity loss is further
proposed to guarantee the divergence between different individuals. Moreover,
we propose the \textit{Subjectivity Matching} with a matching loss, aiming at
assigning unordered emotion labels to ordered individual predictions in a
one-to-one correspondence with the Hungarian algorithm. Extensive experiments
and comparisons are conducted on public visual emotion distribution datasets,
and the results demonstrate that the proposed SAMNet consistently outperforms
the state-of-the-art methods. An ablation study verifies the effectiveness of
our method, and visualizations demonstrate its interpretability.
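The matching step is concrete enough to sketch. Below is a minimal illustration of the one-to-one assignment of unordered voter labels to ordered branch predictions using scipy's linear_sum_assignment (an implementation of the Hungarian algorithm), plus a pairwise-divergence penalty standing in for the subjectivity loss; the cost function, loss forms, and tensor shapes are our assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F
from scipy.optimize import linear_sum_assignment

def subjectivity_matching_loss(branch_logits, voter_labels):
    """Match unordered voter labels to ordered branch predictions.

    branch_logits: (B, K, C) -- emotion logits from K individual branches.
    voter_labels:  (B, K)    -- K emotion labels (one per voter), unordered.
    Returns the mean cross-entropy under the optimal one-to-one assignment.
    """
    B, K, C = branch_logits.shape
    log_probs = F.log_softmax(branch_logits, dim=-1)        # (B, K, C)
    total = branch_logits.new_zeros(())
    for b in range(B):
        # cost[i, j]: negative log-likelihood of voter j's label under branch i.
        cost = -log_probs[b][:, voter_labels[b]]            # (K, K)
        rows, cols = linear_sum_assignment(cost.detach().cpu().numpy())
        total = total + cost[torch.as_tensor(rows), torch.as_tensor(cols)].mean()
    return total / B

def subjectivity_loss(branch_feats):
    """Encourage divergence between branches (illustrative stand-in:
    penalize pairwise cosine similarity between the K branch features)."""
    f = F.normalize(branch_feats, dim=-1)                   # (B, K, D)
    sim = torch.einsum('bkd,bld->bkl', f, f)                # pairwise cosines
    eye = torch.eye(f.shape[1], device=f.device)
    return (sim * (1 - eye)).abs().mean()
```

Because the Hungarian assignment is computed on a detached cost matrix, gradients flow only through the selected entries, which is the usual set-prediction recipe.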
Related papers
- Unifying the Discrete and Continuous Emotion labels for Speech Emotion Recognition [28.881092401807894]
In paralinguistic analysis for emotion detection from speech, emotions have been identified with discrete or dimensional (continuous-valued) labels.
We propose a model to jointly predict continuous and discrete emotional attributes.
arXiv Detail & Related papers (2022-10-29T16:12:31Z) - Affect-DML: Context-Aware One-Shot Recognition of Human Affect using
- Affect-DML: Context-Aware One-Shot Recognition of Human Affect using Deep Metric Learning [29.262204241732565]
Existing methods assume that all emotions-of-interest are given a priori as annotated training examples.
We conceptualize one-shot recognition of emotions in context -- a new problem aimed at recognizing human affect states at a finer level of granularity from a single support sample.
All variants of our model clearly outperform the random baseline, while leveraging the semantic scene context consistently improves the learnt representations.
arXiv Detail & Related papers (2021-11-30T10:35:20Z) - SOLVER: Scene-Object Interrelated Visual Emotion Reasoning Network [83.27291945217424]
- SOLVER: Scene-Object Interrelated Visual Emotion Reasoning Network [83.27291945217424]
We propose a novel Scene-Object interreLated Visual Emotion Reasoning network (SOLVER) to predict emotions from images.
To mine the emotional relationships between distinct objects, we first build up an Emotion Graph based on semantic concepts and visual features.
We also design a Scene-Object Fusion Module to integrate scenes and objects, which exploits scene features to guide the fusion process of object features with the proposed scene-based attention mechanism.
arXiv Detail & Related papers (2021-10-24T02:41:41Z) - Stimuli-Aware Visual Emotion Analysis [75.68305830514007]
- Stimuli-Aware Visual Emotion Analysis [75.68305830514007]
We propose a stimuli-aware visual emotion analysis (VEA) method consisting of three stages, namely stimuli selection, feature extraction, and emotion prediction.
To the best of our knowledge, this is the first work to introduce a stimuli selection process into VEA in an end-to-end network.
Experiments demonstrate that the proposed method consistently outperforms the state-of-the-art approaches on four public visual emotion datasets.
arXiv Detail & Related papers (2021-09-04T08:14:52Z) - Affective Image Content Analysis: Two Decades Review and New
Perspectives [132.889649256384]
We comprehensively review the development of affective image content analysis (AICA) over the past two decades.
We focus on the state-of-the-art methods with respect to three main challenges -- the affective gap, perception subjectivity, and label noise and absence.
We discuss some challenges and promising research directions in the future, such as image content and context understanding, group emotion clustering, and viewer-image interaction.
arXiv Detail & Related papers (2021-06-30T15:20:56Z) - A Circular-Structured Representation for Visual Emotion Distribution
Learning [82.89776298753661]
We propose a well-grounded circular-structured representation to utilize the prior knowledge for visual emotion distribution learning.
To be specific, we first construct an Emotion Circle to unify any emotional state within it.
On the proposed Emotion Circle, each emotion distribution is represented with an emotion vector, which is defined with three attributes.
arXiv Detail & Related papers (2021-06-23T14:53:27Z) - EmoGraph: Capturing Emotion Correlations using Graph Networks [71.53159402053392]
- EmoGraph: Capturing Emotion Correlations using Graph Networks [71.53159402053392]
We propose EmoGraph that captures the dependencies among different emotions through graph networks.
EmoGraph outperforms strong baselines, especially for macro-F1.
An experiment illustrates that the captured emotion correlations can also benefit a single-label classification task.
arXiv Detail & Related papers (2020-08-21T08:59:29Z)