Comparing Photorealistic and Animated Embodied Conversational Agents in
Serious Games: An Empirical Study on User Experience
- URL: http://arxiv.org/abs/2310.17300v1
- Date: Thu, 26 Oct 2023 10:45:26 GMT
- Title: Comparing Photorealistic and Animated Embodied Conversational Agents in
Serious Games: An Empirical Study on User Experience
- Authors: Danai Korre
- Abstract summary: Embodied conversational agents (ECAs) are paradigms of conversational user interfaces in the form of embodied characters.
This paper focuses on a study conducted to explore two distinct levels of presentation realism.
The photorealistic agents were perceived as more realistic and human-like, while the animated characters made the task feel more like a game.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Embodied conversational agents (ECAs) are paradigms of conversational user
interfaces in the form of embodied characters. While ECAs offer various
manipulable features, this paper focuses on a study conducted to explore two
distinct levels of presentation realism. The two agent versions are
photorealistic and animated. The study aims to provide insights and design
suggestions for speech-enabled ECAs within serious game environments. A
within-subjects, two-by-two factorial design was employed for this research
with a cohort of 36 participants balanced for gender. The results showed that
both the photorealistic and the animated versions were perceived as highly
usable, with overall mean scores of 5.76 and 5.71, respectively. However, 69.4
per cent of the participants stated they preferred the photorealistic version,
25 per cent stated they preferred the animated version and 5.6 per cent had no
stated preference. The photorealistic agents were perceived as more realistic
and human-like, while the animated characters made the task feel more like a
game. Even though the agents' realism had no significant effect on usability,
it positively influenced participants' perceptions of the agent. This research
aims to lay the groundwork for future studies on ECA realism's impact in
serious games across diverse contexts.
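
To make the reported design and numbers concrete, the following is a minimal, hypothetical analysis sketch in Python. The usability scores are randomly generated placeholders with the stated means (they are not the study's data), and the preference counts are derived from the reported percentages of the 36 participants; the paired test simply illustrates how a within-subjects comparison of the two agent versions could be run, assuming NumPy and SciPy are available.

```python
# Sketch only: placeholder data shaped like the study's reported summary statistics.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
photorealistic = rng.normal(5.76, 0.8, size=36)  # placeholder scores, mean taken from the abstract
animated = rng.normal(5.71, 0.8, size=36)        # placeholder scores, mean taken from the abstract

# Within-subjects design: every participant rates both versions, so a paired test applies.
t_stat, p_value = stats.ttest_rel(photorealistic, animated)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")

# Preference split reported in the abstract (69.4% / 25% / 5.6% of 36 participants).
preferences = {"photorealistic": 25, "animated": 9, "no preference": 2}
for label, count in preferences.items():
    print(f"{label}: {count}/36 = {count / 36:.1%}")
```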
Related papers
- Evaluating Multiview Object Consistency in Humans and Image Models [68.36073530804296]
We leverage an experimental design from the cognitive sciences which requires zero-shot visual inferences about object shape.
We collect 35K trials of behavioral data from over 500 participants.
We then evaluate the performance of common vision models.
arXiv Detail & Related papers (2024-09-09T17:59:13Z)
- The Curious Case of Representational Alignment: Unravelling Visio-Linguistic Tasks in Emergent Communication [1.3499500088995464]
We assess the representational alignment between agent image representations, and between agent representations and input images.
We identify a strong relationship between inter-agent alignment and topographic similarity, a common metric for compositionality.
Our findings emphasise the key role representational alignment plays in simulations of language emergence.
arXiv Detail & Related papers (2024-07-25T11:29:27Z)
- AMONGAGENTS: Evaluating Large Language Models in the Interactive Text-Based Social Deduction Game [12.384945632524424]
This paper focuses on creating proxies of human behavior in simulated environments, with Among Us utilized as a tool for studying simulated human behavior.
Our work demonstrates that state-of-the-art large language models (LLMs) can effectively grasp the game rules and make decisions based on the current context.
arXiv Detail & Related papers (2024-07-23T14:34:38Z)
- Can we truly transfer an actor's genuine happiness to avatars? An investigation into virtual, real, posed and spontaneous faces [0.7182245711235297]
This study aims to evaluate Ekman's action units in datasets of real human faces, posed and spontaneous, and virtual human faces.
We also conducted a case study with specific movie characters, such as SheHulk and Genius.
This investigation can help several areas of knowledge, whether using real or virtual human beings, in education, health, entertainment, games, security, and even legal matters.
arXiv Detail & Related papers (2023-12-04T18:53:42Z)
- Spoken Humanoid Embodied Conversational Agents in Mobile Serious Games: A Usability Assessment [0.0]
The aim of the research is to assess the impact of multiple agents and illusion of humanness on the quality of the interaction.
The experiment investigates two styles of agent presentation: an agent of high human-likeness (HECA) and an agent of low human-likeness (text).
arXiv Detail & Related papers (2023-09-14T15:02:05Z)
- Learning Action-Effect Dynamics from Pairs of Scene-graphs [50.72283841720014]
We propose a novel method that leverages scene-graph representation of images to reason about the effects of actions described in natural language.
Our proposed approach is effective in terms of performance, data efficiency, and generalization capability compared to existing models.
arXiv Detail & Related papers (2022-12-07T03:36:37Z)
- CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptati On (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO presents an improvement in facial expression recognition performance over six different datasets with very unique affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z)
- Dressing Avatars: Deep Photorealistic Appearance for Physically Simulated Clothing [49.96406805006839]
We introduce pose-driven avatars with explicit modeling of clothing that exhibit both realistic clothing dynamics and photorealistic appearance learned from real-world data.
Our key contribution is a physically-inspired appearance network, capable of generating photorealistic appearance with view-dependent and dynamic shadowing effects even for unseen body-clothing configurations.
arXiv Detail & Related papers (2022-06-30T17:58:20Z)
- ViNTER: Image Narrative Generation with Emotion-Arc-Aware Transformer [59.05857591535986]
We propose a model called ViNTER to generate image narratives that focus on time series representing varying emotions as "emotion arcs".
We present experimental results of both manual and automatic evaluations.
arXiv Detail & Related papers (2022-02-15T10:53:08Z)
- Navigating the Landscape of Multiplayer Games [20.483315340460127]
We show how network measures applied to response graphs of large-scale games enable the creation of a landscape of games.
We illustrate our findings in domains ranging from canonical games to complex empirical games capturing the performance of trained agents pitted against one another.
arXiv Detail & Related papers (2020-05-04T16:58:17Z)
- Learning to Augment Expressions for Few-shot Fine-grained Facial Expression Recognition [98.83578105374535]
We present a novel Fine-grained Facial Expression Database - F2ED.
It includes more than 200k images with 54 facial expressions from 119 persons.
Since uneven data distribution and a lack of samples are common in real-world scenarios, we evaluate several few-shot expression learning tasks.
We propose a unified task-driven framework - Compositional Generative Adversarial Network (Comp-GAN) learning to synthesize facial images.
arXiv Detail & Related papers (2020-01-17T03:26:32Z)