StyleEDL: Style-Guided High-order Attention Network for Image Emotion
Distribution Learning
- URL: http://arxiv.org/abs/2308.03000v1
- Date: Sun, 6 Aug 2023 03:22:46 GMT
- Title: StyleEDL: Style-Guided High-order Attention Network for Image Emotion
Distribution Learning
- Authors: Peiguang Jing, Xianyi Liu, Ji Wang, Yinwei Wei, Liqiang Nie, Yuting Su
- Abstract summary: We propose a style-guided high-order attention network for image emotion distribution learning, termed StyleEDL.
StyleEDL interactively learns stylistic-aware representations of images by exploring the hierarchical stylistic information of visual contents.
In addition, we introduce a stylistic graph convolutional network to dynamically generate the content-dependent emotion representations.
- Score: 69.06749934902464
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Emotion distribution learning has gained increasing attention as
people increasingly express emotions through images. To handle the emotion
ambiguity arising from human subjectivity, most previous methods focus on
learning appropriate representations from the whole image or from its salient
regions. However, they rarely establish connections with stylistic
information, although such information can lead to a better understanding of images.
In this paper, we propose a style-guided high-order attention network for image
emotion distribution learning, termed StyleEDL, which interactively learns
stylistic-aware representations of images by exploring the hierarchical
stylistic information of visual contents. Specifically, we explore the intra-
and inter-layer correlations among Gram-based stylistic representations, and
meanwhile exploit an adversary-constrained high-order attention mechanism to
capture potential interactions between subtle visual parts. In addition, we
introduce a stylistic graph convolutional network that dynamically generates
content-dependent emotion representations to benefit the final emotion
distribution learning. Extensive experiments conducted on
several benchmark datasets demonstrate the effectiveness of our proposed
StyleEDL compared to state-of-the-art methods. The implementation is released
at: https://github.com/liuxianyi/StyleEDL.
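The abstract builds on Gram-based stylistic representations extracted from a hierarchy of CNN layers. As a point of reference, the sketch below shows the standard way such per-layer Gram matrices are computed from a frozen VGG-16 backbone. It is only an illustrative sketch under assumed backbone, layer indices, and names, not the released StyleEDL implementation, which additionally models intra-/inter-layer correlations, high-order attention, and a stylistic GCN.

```python
# Illustrative sketch only (not the authors' code): Gram-matrix style features
# from several layers of a frozen VGG-16, the standard "stylistic" representation
# the abstract builds on. Backbone and captured layer indices are assumptions.
import torch
import torch.nn as nn
from torchvision import models


def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    """Channel-correlation (Gram) matrix of a feature map: (B, C, H, W) -> (B, C, C)."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)


class StylisticFeatures(nn.Module):
    """Collects one Gram matrix per captured VGG stage (hierarchical stylistic info)."""

    def __init__(self, capture_indices=(3, 8, 15, 22)):  # relu1_2, relu2_2, relu3_3, relu4_3
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)  # style backbone is kept frozen
        self.vgg = vgg
        self.capture_indices = set(capture_indices)

    def forward(self, x: torch.Tensor):
        grams = []
        for i, layer in enumerate(self.vgg):
            x = layer(x)
            if i in self.capture_indices:
                grams.append(gram_matrix(x))
        return grams  # list of (B, C_l, C_l) Gram matrices, one per captured layer


if __name__ == "__main__":
    imgs = torch.randn(2, 3, 224, 224)  # dummy batch of images
    style_grams = StylisticFeatures()(imgs)
    print([tuple(g.shape) for g in style_grams])
    # [(2, 64, 64), (2, 128, 128), (2, 256, 256), (2, 512, 512)]
```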
Related papers
- Impressions: Understanding Visual Semiotics and Aesthetic Impact [66.40617566253404]
We present Impressions, a novel dataset through which to investigate the semiotics of images.
We show that existing multimodal image captioning and conditional generation models struggle to simulate plausible human responses to images.
This dataset significantly improves their ability to model impressions and aesthetic evaluations of images through fine-tuning and few-shot adaptation.
arXiv Detail & Related papers (2023-10-27T04:30:18Z)
- ALADIN-NST: Self-supervised disentangled representation learning of artistic
style through Neural Style Transfer [60.6863849241972]
We learn a representation of visual artistic style more strongly disentangled from the semantic content depicted in an image.
We show that strongly addressing the disentanglement of style and content leads to large gains in style-specific metrics.
arXiv Detail & Related papers (2023-04-12T10:33:18Z)
- SGEITL: Scene Graph Enhanced Image-Text Learning for Visual Commonsense
Reasoning [61.57887011165744]
Multimodal Transformers have made great progress in the task of Visual Commonsense Reasoning.
We propose a Scene Graph Enhanced Image-Text Learning framework to incorporate visual scene graphs in commonsense reasoning.
arXiv Detail & Related papers (2021-12-16T03:16:30Z)
- SOLVER: Scene-Object Interrelated Visual Emotion Reasoning Network [83.27291945217424]
We propose a novel Scene-Object interreLated Visual Emotion Reasoning network (SOLVER) to predict emotions from images.
To mine the emotional relationships between distinct objects, we first build up an Emotion Graph based on semantic concepts and visual features.
We also design a Scene-Object Fusion Module to integrate scenes and objects, which exploits scene features to guide the fusion process of object features with the proposed scene-based attention mechanism.
arXiv Detail & Related papers (2021-10-24T02:41:41Z)
- AffectGAN: Affect-Based Generative Art Driven by Semantics [2.323282558557423]
This paper introduces a novel method for generating artistic images that express particular affective states.
Our AffectGAN model is able to generate images based on specific or broad semantic prompts and intended affective outcomes.
A small dataset of 32 images generated by AffectGAN is annotated by 50 participants in terms of the particular emotion they elicit, as well as their quality and novelty.
arXiv Detail & Related papers (2021-09-30T04:53:25Z)
- Exploring Visual Engagement Signals for Representation Learning [56.962033268934015]
We present VisE, a weakly supervised learning approach, which maps social images to pseudo labels derived by clustered engagement signals.
We then study how models trained in this way benefit subjective downstream computer vision tasks such as emotion recognition or political bias detection.
arXiv Detail & Related papers (2021-04-15T20:50:40Z)
- ArtEmis: Affective Language for Visual Art [46.643106054408285]
We focus on the affective experience triggered by visual artworks.
We ask the annotators to indicate the dominant emotion they feel for a given image.
This leads to a rich set of signals for both the objective content and the affective impact of an image.
arXiv Detail & Related papers (2021-01-19T01:03:40Z)
- Interpretable Image Emotion Recognition: A Domain Adaptation Approach Using Facial Expressions [11.808447247077902]
This paper proposes a feature-based domain adaptation technique for identifying emotions in generic images.
It addresses the challenge of the limited availability of pre-trained models and well-annotated datasets for Image Emotion Recognition (IER).
The proposed IER system demonstrated emotion classification accuracies of 60.98% for the IAPSa dataset, 58.86% for the ArtPhoto dataset, 69.13% for the FI dataset, and 58.06% for the EMOTIC dataset.
arXiv Detail & Related papers (2020-11-17T02:55:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.