Empathic AI Painter: A Computational Creativity System with Embodied
Conversational Interaction
- URL: http://arxiv.org/abs/2005.14223v1
- Date: Thu, 28 May 2020 18:35:42 GMT
- Title: Empathic AI Painter: A Computational Creativity System with Embodied
Conversational Interaction
- Authors: Ozge Nilay Yalcin, Nouf Abukhodair and Steve DiPaola
- Abstract summary: This paper documents our attempt to computationally model the creative process of a portrait painter.
Our system includes an empathic conversational interaction component to capture the dominant personality category of the user,
and a generative AI Portraiture system that uses this categorization to create a personalized stylization of the user's portrait.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There is a growing recognition that artists use valuable ways to understand
and work with cognitive and perceptual mechanisms to convey desired experiences
and narrative in their created artworks (DiPaola et al., 2010; Zeki, 2001).
This paper documents our attempt to computationally model the creative process
of a portrait painter, who relies on understanding human traits (i.e.,
personality and emotions) to inform their art. Our system includes an empathic
conversational interaction component to capture the dominant personality
category of the user and a generative AI Portraiture system that uses this
categorization to create a personalized stylization of the user's portrait.
This paper includes the description of our systems and the real-time
interaction results obtained during the demonstration session of the NeurIPS
2019 Conference.
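The abstract describes a two-stage pipeline: a conversational component infers the user's dominant personality category, which then selects a personalized portrait stylization. The sketch below illustrates that flow only; all names (`classify_personality`, `stylize_portrait`, the category-to-style mapping) are hypothetical stand-ins, not the paper's actual implementation, which uses an embodied conversational agent and a generative AI Portraiture system.

```python
# Hypothetical sketch of the two-stage pipeline: dialogue -> personality
# category -> portrait style. Not the authors' implementation.

# Assumed mapping from a dominant personality category to a painterly style.
STYLE_BY_PERSONALITY = {
    "extraversion": "vivid-expressionist",
    "agreeableness": "soft-impressionist",
    "conscientiousness": "precise-realist",
    "neuroticism": "moody-tonalist",
    "openness": "abstract-colorist",
}

def classify_personality(dialogue_turns):
    """Toy stand-in for the empathic conversational component: picks the
    personality category whose name appears in the most dialogue turns."""
    counts = {cat: 0 for cat in STYLE_BY_PERSONALITY}
    for turn in dialogue_turns:
        for cat in counts:
            if cat in turn.lower():
                counts[cat] += 1
    return max(counts, key=counts.get)

def stylize_portrait(portrait_path, category):
    """Stand-in for the generative portraiture stage: returns a
    description of the personalized stylization to apply."""
    return f"{portrait_path} -> {STYLE_BY_PERSONALITY[category]}"

turns = ["I enjoy meeting new people", "Extraversion suits me"]
category = classify_personality(turns)
print(stylize_portrait("user_portrait.png", category))
```

The point of the sketch is the data flow, not the classifier: in the real system the category would come from a trained personality model over the conversation, and the stylization from a generative painting model.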
Related papers
- How Do You Perceive My Face? Recognizing Facial Expressions in Multi-Modal Context by Modeling Mental Representations [5.895694050664867]
We introduce a novel approach for facial expression classification that goes beyond simple classification tasks.
Our model accurately classifies a perceived face and synthesizes the corresponding mental representation perceived by a human when observing a face in context.
We evaluate synthesized expressions in a human study, showing that our model effectively produces approximations of human mental representations.
arXiv Detail & Related papers (2024-09-04T09:32:40Z)
- MetaDesigner: Advancing Artistic Typography through AI-Driven, User-Centric, and Multilingual WordArt Synthesis [65.78359025027457]
MetaDesigner revolutionizes artistic typography by leveraging the strengths of Large Language Models (LLMs) to drive a design paradigm centered around user engagement.
A comprehensive feedback mechanism harnesses insights from multimodal models and user evaluations to refine and enhance the design process iteratively.
Empirical validations highlight MetaDesigner's capability to effectively serve diverse WordArt applications, consistently producing aesthetically appealing and context-sensitive results.
arXiv Detail & Related papers (2024-06-28T11:58:26Z)
- Equivalence: An analysis of artists' roles with Image Generative AI from Conceptual Art perspective through an interactive installation design practice [16.063735487844628]
This study explores how artists interact with advanced text-to-image Generative AI models.
To exemplify this framework, we present a case study titled "Equivalence" that converts users' speech input into continuously evolving paintings.
This work aims to broaden our understanding of artists' roles and foster a deeper appreciation for the creative aspects inherent in artwork created with Image Generative AI.
arXiv Detail & Related papers (2024-04-29T02:45:23Z)
- Impressions: Understanding Visual Semiotics and Aesthetic Impact [66.40617566253404]
We present Impressions, a novel dataset through which to investigate the semiotics of images.
We show that existing multimodal image captioning and conditional generation models struggle to simulate plausible human responses to images.
This dataset significantly improves their ability to model impressions and aesthetic evaluations of images through fine-tuning and few-shot adaptation.
arXiv Detail & Related papers (2023-10-27T04:30:18Z)
- Pathway to Future Symbiotic Creativity [76.20798455931603]
We propose a classification of the creative system with a hierarchy of 5 classes, showing the pathway of creativity evolving from a mimic-human artist to a Machine artist in its own right.
In art creation, machines must understand humans' mental states, including desires, appreciation, and emotions; humans, in turn, need to understand machines' creative capabilities and limitations.
We propose a novel framework for building future Machine artists, which comes with the philosophy that a human-compatible AI system should be based on the "human-in-the-loop" principle.
arXiv Detail & Related papers (2022-08-18T15:12:02Z)
- AI-based artistic representation of emotions from EEG signals: a discussion on fairness, inclusion, and aesthetics [2.6928226868848864]
We present an AI-based Brain-Computer Interface (BCI) in which humans and machines interact to express feelings artistically.
We seek to understand the dynamics of this interaction to reach better co-existence in fairness, inclusion, and aesthetics.
arXiv Detail & Related papers (2022-02-07T14:51:02Z)
- SOLVER: Scene-Object Interrelated Visual Emotion Reasoning Network [83.27291945217424]
We propose a novel Scene-Object interreLated Visual Emotion Reasoning network (SOLVER) to predict emotions from images.
To mine the emotional relationships between distinct objects, we first build up an Emotion Graph based on semantic concepts and visual features.
We also design a Scene-Object Fusion Module to integrate scenes and objects, which exploits scene features to guide the fusion process of object features with the proposed scene-based attention mechanism.
arXiv Detail & Related papers (2021-10-24T02:41:41Z)
- Generating Music and Generative Art from Brain activity [0.0]
This research work introduces a computational system for creating generative art using a Brain-Computer Interface (BCI).
The generated artwork uses brain signals and concepts of geometry, color and spatial location to give complexity to the autonomous construction.
arXiv Detail & Related papers (2021-08-09T19:33:45Z)
- Enhancing Cognitive Models of Emotions with Representation Learning [58.2386408470585]
We present a novel deep learning-based framework to generate embedding representations of fine-grained emotions.
Our framework integrates a contextualized embedding encoder with a multi-head probing model.
Our model is evaluated on the Empathetic Dialogue dataset and shows the state-of-the-art result for classifying 32 emotions.
arXiv Detail & Related papers (2021-04-20T16:55:15Z)
- ArtEmis: Affective Language for Visual Art [46.643106054408285]
We focus on the affective experience triggered by visual artworks.
We ask the annotators to indicate the dominant emotion they feel for a given image.
This leads to a rich set of signals for both the objective content and the affective impact of an image.
arXiv Detail & Related papers (2021-01-19T01:03:40Z)
- The BIRAFFE2 Experiment. Study in Bio-Reactions and Faces for Emotion-based Personalization for AI Systems [0.0]
We present a unified paradigm for capturing the emotional responses of different persons.
We provide a framework that can be easily used and extended for machine learning methods.
arXiv Detail & Related papers (2020-07-29T18:35:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.