AffectGAN: Affect-Based Generative Art Driven by Semantics
- URL: http://arxiv.org/abs/2109.14845v1
- Date: Thu, 30 Sep 2021 04:53:25 GMT
- Title: AffectGAN: Affect-Based Generative Art Driven by Semantics
- Authors: Theodoros Galanos, Antonios Liapis, Georgios N. Yannakakis
- Abstract summary: This paper introduces a novel method for generating artistic images that express particular affective states.
Our AffectGAN model is able to generate images based on specific or broad semantic prompts and intended affective outcomes.
A small dataset of 32 images generated by AffectGAN is annotated by 50 participants in terms of the particular emotion they elicit, as well as their quality and novelty.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces a novel method for generating artistic images that
express particular affective states. Leveraging state-of-the-art deep learning
methods for visual generation (through generative adversarial networks),
semantic models from OpenAI, and the annotated dataset of the visual art
encyclopedia WikiArt, our AffectGAN model is able to generate images based on
specific or broad semantic prompts and intended affective outcomes. A small
dataset of 32 images generated by AffectGAN is annotated by 50 participants in
terms of the particular emotion they elicit, as well as their quality and
novelty. Results show that for most instances the intended emotion used as a
prompt for image generation matches the participants' responses. This
small-scale study brings forth a new vision towards blending affective
computing with computational creativity, enabling generative systems with
intentionality in terms of the emotions they wish their output to elicit.
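The abstract describes steering generation with both a semantic prompt and an intended emotion. As a rough illustration of that guidance idea only, the sketch below blends a "semantic" embedding with an "affect" embedding and searches for the latent whose image embedding best matches the blend. The encoders, the linear generator, the blend weight `alpha`, and the prompts are all stand-ins invented for this sketch; the actual AffectGAN pipeline uses a GAN, OpenAI semantic models, and WikiArt annotations.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 32

# Hypothetical stand-in for a CLIP-style shared text/image embedding space:
# a fixed random projection, NOT a real semantic model.
W = rng.normal(size=(DIM, DIM))

def embed_text(text):
    """Toy bag-of-words vector pushed through the shared projection."""
    v = np.zeros(DIM)
    for token in text.split():
        v[sum(token.encode()) % DIM] += 1.0  # deterministic token bucket
    v = W @ v
    return v / np.linalg.norm(v)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Blend a semantic prompt with an intended affect (weight is an assumption).
semantic = embed_text("a painting of a stormy sea")
affect = embed_text("fear dread unease")
alpha = 0.6
target = alpha * semantic + (1 - alpha) * affect
target /= np.linalg.norm(target)

# Toy "generator": maps a latent to an image embedding via a fixed linear map.
G = rng.normal(size=(DIM, DIM))

def generate_best(n_candidates=512):
    """Sample latents and keep the one whose image embedding best matches
    the blended semantic+affect target (search-based guidance)."""
    best_z, best_score = None, -1.0
    for _ in range(n_candidates):
        z = rng.normal(size=DIM)
        score = cosine(G @ z, target)
        if score > best_score:
            best_z, best_score = z, score
    return best_z, best_score

z_star, score = generate_best()
print(f"best-of-512 guidance score: {score:.3f}")
```

A real system would typically optimise the latent by gradient ascent through the generator (or condition the generator directly) rather than best-of-n sampling; the sampling loop is used here only because it keeps the sketch self-contained.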
Related papers
- Impressions: Understanding Visual Semiotics and Aesthetic Impact [66.40617566253404]
We present Impressions, a novel dataset through which to investigate the semiotics of images.
We show that existing multimodal image captioning and conditional generation models struggle to simulate plausible human responses to images.
This dataset significantly improves their ability to model impressions and aesthetic evaluations of images through fine-tuning and few-shot adaptation.
arXiv Detail & Related papers (2023-10-27T04:30:18Z)
- Diffusion Based Augmentation for Captioning and Retrieval in Cultural Heritage [28.301944852273746]
This paper introduces a novel approach to address the challenges of limited annotated data and domain shifts in the cultural heritage domain.
By leveraging generative vision-language models, we augment art datasets by generating diverse variations of artworks conditioned on their captions.
arXiv Detail & Related papers (2023-08-14T13:59:04Z)
- StyleEDL: Style-Guided High-order Attention Network for Image Emotion Distribution Learning [69.06749934902464]
We propose a style-guided high-order attention network for image emotion distribution learning termed StyleEDL.
StyleEDL interactively learns stylistic-aware representations of images by exploring the hierarchical stylistic information of visual contents.
In addition, we introduce a stylistic graph convolutional network to dynamically generate the content-dependent emotion representations.
arXiv Detail & Related papers (2023-08-06T03:22:46Z)
- Affect-Conditioned Image Generation [0.9668407688201357]
We introduce a method for generating images conditioned on desired affect, quantified using a psychometrically validated three-component approach.
We first train a neural network for estimating the affect content of text and images from semantic embeddings, and then demonstrate how this can be used to exert control over a variety of generative models.
arXiv Detail & Related papers (2023-02-20T03:44:04Z)
- Language Does More Than Describe: On The Lack Of Figurative Speech in Text-To-Image Models [63.545146807810305]
Text-to-image diffusion models can generate high-quality pictures from textual input prompts.
These models have been trained using text data collected from content-based labelling protocols.
We characterise the sentimentality, objectiveness and degree of abstraction of publicly available text data used to train current text-to-image diffusion models.
arXiv Detail & Related papers (2022-10-19T14:20:05Z)
- ViNTER: Image Narrative Generation with Emotion-Arc-Aware Transformer [59.05857591535986]
We propose a model called ViNTER to generate image narratives that focus on time series representing varying emotions as "emotion arcs".
We present experimental results of both manual and automatic evaluations.
arXiv Detail & Related papers (2022-02-15T10:53:08Z)
- SOLVER: Scene-Object Interrelated Visual Emotion Reasoning Network [83.27291945217424]
We propose a novel Scene-Object interreLated Visual Emotion Reasoning network (SOLVER) to predict emotions from images.
To mine the emotional relationships between distinct objects, we first build up an Emotion Graph based on semantic concepts and visual features.
We also design a Scene-Object Fusion Module to integrate scenes and objects, which exploits scene features to guide the fusion process of object features with the proposed scene-based attention mechanism.
arXiv Detail & Related papers (2021-10-24T02:41:41Z)
- ArtEmis: Affective Language for Visual Art [46.643106054408285]
We focus on the affective experience triggered by visual artworks.
We ask the annotators to indicate the dominant emotion they feel for a given image.
This leads to a rich set of signals for both the objective content and the affective impact of an image.
arXiv Detail & Related papers (2021-01-19T01:03:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.