AffectGAN: Affect-Based Generative Art Driven by Semantics
- URL: http://arxiv.org/abs/2109.14845v1
- Date: Thu, 30 Sep 2021 04:53:25 GMT
- Title: AffectGAN: Affect-Based Generative Art Driven by Semantics
- Authors: Theodoros Galanos, Antonios Liapis, Georgios N. Yannakakis
- Abstract summary: This paper introduces a novel method for generating artistic images that express particular affective states.
Our AffectGAN model is able to generate images based on specific or broad semantic prompts and intended affective outcomes.
A small dataset of 32 images generated by AffectGAN is annotated by 50 participants in terms of the particular emotion they elicit, as well as their quality and novelty.
- Score: 2.323282558557423
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces a novel method for generating artistic images that
express particular affective states. Leveraging state-of-the-art deep learning
methods for visual generation (through generative adversarial networks),
semantic models from OpenAI, and the annotated dataset of the visual art
encyclopedia WikiArt, our AffectGAN model is able to generate images based on
specific or broad semantic prompts and intended affective outcomes. A small
dataset of 32 images generated by AffectGAN is annotated by 50 participants in
terms of the particular emotion they elicit, as well as their quality and
novelty. Results show that for most instances the intended emotion used as a
prompt for image generation matches the participants' responses. This
small-scale study brings forth a new vision towards blending affective
computing with computational creativity, enabling generative systems with
intentionality in terms of the emotions they wish their output to elicit.
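In broad strokes, the approach described in the abstract can be read as semantic guidance of a pretrained generator's latent code toward an affect-laden text prompt. The Python sketch below is a minimal illustration under that assumption only; the tiny generator, the text-image encoder, and the example prompt are hypothetical stand-ins (for a GAN trained on WikiArt and an OpenAI CLIP-style semantic model), not the actual AffectGAN implementation.

```python
# Minimal sketch of affect-conditioned, semantics-driven image generation.
# Assumptions (not the paper's exact pipeline): a pretrained generator G maps a
# latent vector to an image, and a CLIP-style joint text-image encoder scores
# how well the image matches a prompt such as "a stormy sea, evoking fear".
# The optimisation nudges the latent code so the generated image matches the prompt.

import torch
import torch.nn.functional as F


class TinyGenerator(torch.nn.Module):
    """Stand-in for a pretrained GAN generator (e.g. one trained on WikiArt)."""

    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.fc = torch.nn.Linear(latent_dim, 3 * 64 * 64)

    def forward(self, z):
        # Map the latent vector to a 3x64x64 image in [-1, 1].
        return torch.tanh(self.fc(z)).view(-1, 3, 64, 64)


class TinyTextImageEncoder(torch.nn.Module):
    """Stand-in for a CLIP-style semantic model with a shared embedding space."""

    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.img_proj = torch.nn.Linear(3 * 64 * 64, embed_dim)
        self.embed_dim = embed_dim
        self._text_cache = {}  # prompt -> fixed placeholder embedding

    def encode_image(self, img):
        return F.normalize(self.img_proj(img.flatten(1)), dim=-1)

    def encode_text(self, prompt: str):
        # Placeholder: a real semantic model would produce a meaningful embedding.
        if prompt not in self._text_cache:
            self._text_cache[prompt] = F.normalize(torch.randn(1, self.embed_dim), dim=-1)
        return self._text_cache[prompt]


def generate_for_affect(prompt: str, steps: int = 200, latent_dim: int = 128, lr: float = 0.05):
    """Optimise a latent code so the generated image matches an affect-laden prompt."""
    generator, encoder = TinyGenerator(latent_dim), TinyTextImageEncoder()
    z = torch.randn(1, latent_dim, requires_grad=True)
    optimiser = torch.optim.Adam([z], lr=lr)
    text_emb = encoder.encode_text(prompt)

    for _ in range(steps):
        image = generator(z)
        image_emb = encoder.encode_image(image)
        # Loss: push the image embedding toward the prompt embedding.
        loss = 1.0 - F.cosine_similarity(image_emb, text_emb).mean()
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()

    return generator(z).detach()


# Example: a broad semantic prompt combined with an intended emotion.
image = generate_for_affect("a painting of a stormy sea, evoking fear")
```

For simplicity, this sketch folds the intended affective outcome into the text prompt itself; in the paper, semantic prompts and intended emotions are described as separate inputs to the generation process.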
Related papers
- Emotional Images: Assessing Emotions in Images and Potential Biases in Generative Models [0.0]
This paper examines potential biases and inconsistencies in emotional evocation of images produced by generative artificial intelligence (AI) models.
We compare the emotions evoked by an AI-produced image to the emotions evoked by prompts used to create those images.
Findings indicate that AI-generated images frequently lean toward negative emotional content, regardless of the original prompt.
arXiv Detail & Related papers (2024-11-08T21:42:50Z) - Impressions: Understanding Visual Semiotics and Aesthetic Impact [66.40617566253404]
We present Impressions, a novel dataset through which to investigate the semiotics of images.
We show that existing multimodal image captioning and conditional generation models struggle to simulate plausible human responses to images.
This dataset significantly improves their ability to model impressions and aesthetic evaluations of images through fine-tuning and few-shot adaptation.
arXiv Detail & Related papers (2023-10-27T04:30:18Z) - StyleEDL: Style-Guided High-order Attention Network for Image Emotion
Distribution Learning [69.06749934902464]
We propose a style-guided high-order attention network for image emotion distribution learning termed StyleEDL.
StyleEDL interactively learns stylistic-aware representations of images by exploring the hierarchical stylistic information of visual contents.
In addition, we introduce a stylistic graph convolutional network to dynamically generate the content-dependent emotion representations.
arXiv Detail & Related papers (2023-08-06T03:22:46Z) - Language Does More Than Describe: On The Lack Of Figurative Speech in
Text-To-Image Models [63.545146807810305]
Text-to-image diffusion models can generate high-quality pictures from textual input prompts.
These models have been trained using text data collected from content-based labelling protocols.
We characterise the sentimentality, objectiveness and degree of abstraction of publicly available text data used to train current text-to-image diffusion models.
arXiv Detail & Related papers (2022-10-19T14:20:05Z) - ViNTER: Image Narrative Generation with Emotion-Arc-Aware Transformer [59.05857591535986]
We propose a model called ViNTER to generate image narratives that focus on time series representing varying emotions as "emotion arcs".
We present experimental results of both manual and automatic evaluations.
arXiv Detail & Related papers (2022-02-15T10:53:08Z) - SOLVER: Scene-Object Interrelated Visual Emotion Reasoning Network [83.27291945217424]
We propose a novel Scene-Object interreLated Visual Emotion Reasoning network (SOLVER) to predict emotions from images.
To mine the emotional relationships between distinct objects, we first build up an Emotion Graph based on semantic concepts and visual features.
We also design a Scene-Object Fusion Module to integrate scenes and objects, which exploits scene features to guide the fusion process of object features with the proposed scene-based attention mechanism.
arXiv Detail & Related papers (2021-10-24T02:41:41Z) - ArtEmis: Affective Language for Visual Art [46.643106054408285]
We focus on the affective experience triggered by visual artworks.
We ask the annotators to indicate the dominant emotion they feel for a given image.
This leads to a rich set of signals for both the objective content and the affective impact of an image.
arXiv Detail & Related papers (2021-01-19T01:03:40Z) - Interpretable Image Emotion Recognition: A Domain Adaptation Approach Using Facial Expressions [11.808447247077902]
This paper proposes a feature-based domain adaptation technique for identifying emotions in generic images.
It addresses the challenge of the limited availability of pre-trained models and well-annotated datasets for Image Emotion Recognition (IER).
The proposed IER system demonstrated emotion classification accuracies of 60.98% for the IAPSa dataset, 58.86% for the ArtPhoto dataset, 69.13% for the FI dataset, and 58.06% for the EMOTIC dataset.
arXiv Detail & Related papers (2020-11-17T02:55:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.