Let's be friends! A rapport-building 3D embodied conversational agent
for the Human Support Robot
- URL: http://arxiv.org/abs/2103.04498v1
- Date: Mon, 8 Mar 2021 01:02:41 GMT
- Title: Let's be friends! A rapport-building 3D embodied conversational agent
for the Human Support Robot
- Authors: Katarzyna Pasternak, Zishi Wu, Ubbo Visser, and Christine Lisetti
- Abstract summary: Partial subtle mirroring of nonverbal behaviors during conversations (also known as mimicking or parallel empathy) is essential for rapport building.
Our research question is whether integrating an ECA able to mirror its interlocutor's facial expressions and head movements with a human-service robot will improve the user's experience.
- Our contribution is the complex integration of an expressive ECA that tracks its interlocutor's face and mirrors their facial expressions and head movements in real time with a human support robot.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Partial subtle mirroring of nonverbal behaviors during conversations (also known as mimicking or parallel empathy) is essential for rapport building,
which in turn is essential for optimal human-human communication outcomes.
Mirroring has been studied in interactions between robots and humans, and in
interactions between Embodied Conversational Agents (ECAs) and humans. However,
very few studies examine interactions between humans and ECAs that are
integrated with robots, and none of them examine the effect of mirroring
nonverbal behaviors in such interactions. Our research question is whether
integrating an ECA able to mirror its interlocutor's facial expressions and
head movements (continuously or intermittently) with a human service robot will
improve the user's experience with a support robot that can perform
useful mobile manipulation tasks (e.g., at home). Our contribution is the
complex integration of an expressive ECA, able to track its interlocutor's
face, and to mirror his/her facial expressions and head movements in real time,
integrated with a human support robot such that the robot and the agent are
fully aware of each other's, and of the user's, nonverbal cues. We also
describe a pilot study we conducted towards answering our research question,
which shows promising results for our forthcoming larger user study.
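As a rough illustration of the technique the abstract describes, the minimal Python sketch below attenuates and delays an interlocutor's head pose and expression intensity before replaying them on the agent. It is a hypothetical stand-in, not the authors' implementation: the FaceReading input and the gain, delay, and duty-cycle values are all assumptions; a real system would fill FaceReading from a face tracker and route the output to the ECA's animation rig.

```python
import random
import time
from collections import deque
from dataclasses import dataclass


@dataclass
class FaceReading:
    """One frame of (hypothetical) face-tracker output."""
    yaw: float    # head rotation, degrees
    pitch: float  # head tilt, degrees
    smile: float  # expression intensity in [0, 1]


class PartialMirror:
    """Partial, subtle mirroring: attenuate the interlocutor's nonverbals
    and replay them after a short lag, either continuously or only during
    intermittent mirroring windows."""

    def __init__(self, gain=0.4, delay_frames=15, duty_cycle=0.5):
        self.gain = gain                      # < 1.0 keeps the copy subtle
        self.buffer = deque(maxlen=delay_frames)
        self.duty_cycle = duty_cycle          # 1.0 = continuous mirroring
        self.frame = 0

    def step(self, reading: FaceReading) -> FaceReading:
        self.buffer.append(reading)
        self.frame += 1
        delayed = self.buffer[0]              # oldest buffered frame
        # Intermittent condition: hold a neutral pose outside the window.
        in_window = (self.frame % 100) < self.duty_cycle * 100
        g = self.gain if in_window else 0.0
        return FaceReading(g * delayed.yaw, g * delayed.pitch, g * delayed.smile)


if __name__ == "__main__":
    mirror = PartialMirror()
    for _ in range(5):                        # stand-in for a ~30 fps loop
        user = FaceReading(random.uniform(-30, 30),
                           random.uniform(-15, 15),
                           random.random())
        agent = mirror.step(user)             # pose to send to the ECA rig
        print(f"user yaw={user.yaw:+6.1f}  ->  agent yaw={agent.yaw:+6.1f}")
        time.sleep(1 / 30)
```

Keeping the gain below 1.0 and replaying with a short delay is what makes the mirroring partial and subtle rather than an exact, immediate copy; varying the duty cycle toggles between the continuous and intermittent conditions the abstract mentions.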
Related papers
- Imitation of human motion achieves natural head movements for humanoid robots in an active-speaker detection task [2.8220015774219567]
Head movements are crucial for social human-human interaction.
In this work, we employed a generative AI pipeline to produce human-like head movements for a Nao humanoid robot.
The results show that the Nao robot successfully imitates human head movements in a natural manner while actively tracking the speakers during the conversation.
arXiv Detail & Related papers (2024-07-16T17:08:40Z)
- Real-time Addressee Estimation: Deployment of a Deep-Learning Model on the iCub Robot [52.277579221741746]
Addressee Estimation is a skill essential for social robots to interact smoothly with humans.
Inspired by human perceptual skills, a deep-learning model for Addressee Estimation is designed, trained, and deployed on an iCub robot.
The study presents the procedure of such implementation and the performance of the model deployed in real-time human-robot interaction.
arXiv Detail & Related papers (2023-11-09T13:01:21Z)
- A Human-Robot Mutual Learning System with Affect-Grounded Language Acquisition and Differential Outcomes Training [0.1812164955222814]
The paper presents a novel human-robot interaction setup for identifying robot homeostatic needs.
We adopted a differential outcomes training (DOT) protocol whereby the robot provides feedback specific to its internal needs.
We found evidence that DOT can enhance the human's learning efficiency, which in turn enables more efficient robot language acquisition.
arXiv Detail & Related papers (2023-10-20T09:41:31Z)
- Robots with Different Embodiments Can Express and Influence Carefulness in Object Manipulation [104.5440430194206]
This work investigates the perception of object manipulations performed with a communicative intent by two robots.
We designed the robots' movements to communicate either carefulness or its absence during the transportation of objects.
arXiv Detail & Related papers (2022-08-03T13:26:52Z)
- Co-Located Human-Human Interaction Analysis using Nonverbal Cues: A Survey [71.43956423427397]
We aim to identify the nonverbal cues and computational methodologies resulting in effective performance.
This survey differs from its counterparts by involving the widest spectrum of social phenomena and interaction settings.
Some major observations: the most often used nonverbal cue, computational method, interaction environment, and sensing approach are, respectively, speaking activity, support vector machines, meetings of 3-4 persons, and microphones with cameras.
arXiv Detail & Related papers (2022-07-20T13:37:57Z)
- Data-driven emotional body language generation for social robotics [58.88028813371423]
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration.
We implement a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions.
The evaluation study found that the anthropomorphism and animacy of the generated expressions are not perceived differently from the hand-designed ones.
arXiv Detail & Related papers (2022-05-02T09:21:39Z)
- Towards a Real-time Measure of the Perception of Anthropomorphism in Human-robot Interaction [5.112850258732114]
We conducted an online human-robot interaction experiment in an educational use-case scenario.
43 English-speaking participants took part in the study.
We found that both the subjective and the objective degree of perceived anthropomorphism positively correlate with acoustic-prosodic entrainment (a correlation sketch follows this list).
arXiv Detail & Related papers (2022-01-24T11:10:37Z)
- A MultiModal Social Robot Toward Personalized Emotion Interaction [1.2183405753834562]
This study demonstrates a multimodal human-robot interaction (HRI) framework with reinforcement learning to enhance the robotic interaction policy.
The goal is to apply this framework in social scenarios so that robots can generate more natural and engaging interactions.
arXiv Detail & Related papers (2021-10-08T00:35:44Z)
- A proxemics game between festival visitors and an industrial robot [1.2599533416395767]
Nonverbal behaviours of collaboration partners in human-robot teams influence the experience of the human interaction partners.
During the Ars Electronica 2020 Festival for Art, Technology and Society (Linz, Austria), we invited visitors to interact with an industrial robot.
We investigated general nonverbal behaviours of the humans interacting with the robot, as well as nonverbal behaviours of people in the audience.
arXiv Detail & Related papers (2021-05-28T13:26:00Z)
- Disambiguating Affective Stimulus Associations for Robot Perception and Dialogue [67.89143112645556]
We provide a NICO robot with the ability to learn the associations between a perceived auditory stimulus and an emotional expression.
NICO is able to do this for both individual subjects and specific stimuli, with the aid of an emotion-driven dialogue system.
The robot is then able to use this information to determine a subject's enjoyment of perceived auditory stimuli in a real HRI scenario.
arXiv Detail & Related papers (2021-03-05T20:55:48Z)
- Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communications.
Results show that the explanations generated by our approach significantly improve collaboration performance and user perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z)
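One entry above, Towards a Real-time Measure of the Perception of Anthropomorphism in Human-robot Interaction, reports that perceived anthropomorphism correlates with acoustic-prosodic entrainment. As a hypothetical illustration of one common way such entrainment is quantified (not that paper's own pipeline), the sketch below computes a Pearson correlation between per-turn prosodic features of the human and the robot; the feature choice (mean F0) and the values are synthetic assumptions.

```python
import math


def pearson(xs, ys):
    """Pearson correlation between two equal-length feature series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


# Synthetic per-turn mean F0 (Hz) for adjacent human/robot turns; real
# values would come from a pitch tracker run over each speaking turn.
human_f0 = [210.0, 198.5, 220.3, 205.1, 215.8, 200.2]
robot_f0 = [205.5, 200.1, 216.7, 207.9, 212.4, 203.0]

print(f"entrainment proxy (Pearson r): {pearson(human_f0, robot_f0):.3f}")
```

A higher r under this proxy would indicate that the two interlocutors' prosody tracks each other across turns; published entrainment measures also include proximity and convergence variants.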
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.