Metamorpheus: Interactive, Affective, and Creative Dream Narration
Through Metaphorical Visual Storytelling
- URL: http://arxiv.org/abs/2403.00632v1
- Date: Fri, 1 Mar 2024 16:09:32 GMT
- Title: Metamorpheus: Interactive, Affective, and Creative Dream Narration
Through Metaphorical Visual Storytelling
- Authors: Qian Wan, Xin Feng, Yining Bei, Zhiqi Gao, Zhicong Lu
- Abstract summary: We present Metamorpheus, an affective interface that engages users in a creative visual storytelling of emotional experiences during dreams.
The system provides metaphor suggestions, and generates visual metaphors and text depictions using generative AI models.
Our experience-centred evaluation demonstrates that, by interacting with Metamorpheus, users can recall their dreams in vivid detail.
- Score: 18.612468743375015
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human emotions are essentially molded by lived experiences, from which we
construct personalised meaning. Engagement in such a meaning-making process
has been practiced as an intervention in various psychotherapies to promote
wellness. Nevertheless, supporting the recollection and recounting of lived
experiences in everyday life remains underexplored in HCI. It also remains
unknown how technologies such as generative AI models can facilitate the
meaning-making process and ultimately support affective mindfulness. In this
paper we present Metamorpheus, an affective interface that engages users in a
creative visual storytelling of emotional experiences during dreams.
Metamorpheus arranges the storyline based on a dream's emotional arc, and
provokes self-reflection through the creation of metaphorical images and text
depictions. The system provides metaphor suggestions, and generates visual
metaphors and text depictions using generative AI models, while users can apply
the generations to recolour and rearrange the interface to be visually affective.
Our experience-centred evaluation demonstrates that, by interacting with
Metamorpheus, users can recall their dreams in vivid detail, through which they
relive and reflect upon their experiences in a meaningful way.
Related papers
- Emotion Rendering for Conversational Speech Synthesis with Heterogeneous
Graph-Based Context Modeling [50.99252242917458]
Conversational Speech Synthesis (CSS) aims to accurately express an utterance with the appropriate prosody and emotional inflection within a conversational setting.
To address the issue of data scarcity, we meticulously create emotional labels in terms of category and intensity.
Our model outperforms the baseline models in understanding and rendering emotions.
arXiv Detail & Related papers (2023-12-19T08:47:50Z)
- ViNTER: Image Narrative Generation with Emotion-Arc-Aware Transformer [59.05857591535986]
We propose a model called ViNTER to generate image narratives that focus on time series representing varying emotions as "emotion arcs".
We present experimental results of both manual and automatic evaluations.
arXiv Detail & Related papers (2022-02-15T10:53:08Z)
- SOLVER: Scene-Object Interrelated Visual Emotion Reasoning Network [83.27291945217424]
We propose a novel Scene-Object interreLated Visual Emotion Reasoning network (SOLVER) to predict emotions from images.
To mine the emotional relationships between distinct objects, we first build up an Emotion Graph based on semantic concepts and visual features.
We also design a Scene-Object Fusion Module to integrate scenes and objects, which exploits scene features to guide the fusion process of object features with the proposed scene-based attention mechanism.
arXiv Detail & Related papers (2021-10-24T02:41:41Z)
- Stimuli-Aware Visual Emotion Analysis [75.68305830514007]
We propose a stimuli-aware visual emotion analysis (VEA) method consisting of three stages, namely stimuli selection, feature extraction and emotion prediction.
To the best of our knowledge, this is the first work to introduce a stimuli selection process into VEA in an end-to-end network.
Experiments demonstrate that the proposed method consistently outperforms the state-of-the-art approaches on four public visual emotion datasets.
arXiv Detail & Related papers (2021-09-04T08:14:52Z)
- Enhancing Cognitive Models of Emotions with Representation Learning [58.2386408470585]
We present a novel deep learning-based framework to generate embedding representations of fine-grained emotions.
Our framework integrates a contextualized embedding encoder with a multi-head probing model.
Our model is evaluated on the Empathetic Dialogue dataset and achieves state-of-the-art results for classifying 32 emotions.
arXiv Detail & Related papers (2021-04-20T16:55:15Z)
- ArtEmis: Affective Language for Visual Art [46.643106054408285]
We focus on the affective experience triggered by visual artworks.
We ask the annotators to indicate the dominant emotion they feel for a given image.
This leads to a rich set of signals for both the objective content and the affective impact of an image.
arXiv Detail & Related papers (2021-01-19T01:03:40Z)
- Mirror Ritual: An Affective Interface for Emotional Self-Reflection [8.883733362171034]
This paper introduces a new form of real-time affective interface that engages the user in a process of conceptualisation of their emotional state.
Inspired by Barrett's Theory of Constructed Emotion, 'Mirror Ritual' aims to expand the user's accessible emotion concepts.
arXiv Detail & Related papers (2020-04-21T00:19:59Z)
- Mirror Ritual: Human-Machine Co-Construction of Emotion [8.883733362171034]
Mirror Ritual is an interactive installation that challenges the existing paradigms in our understanding of human emotion and machine perception.
The audience is encouraged to make sense of the mirror's poetry by framing it with respect to their recent life experiences.
This process of affect labelling and contextualisation works to not only regulate emotion, but helps to construct the rich personal narratives that constitute human identity.
arXiv Detail & Related papers (2020-04-15T05:09:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.