Transferring Studies Across Embodiments: A Case Study in Confusion Detection
- URL: http://arxiv.org/abs/2206.01493v1
- Date: Fri, 3 Jun 2022 10:41:16 GMT
- Title: Transferring Studies Across Embodiments: A Case Study in Confusion Detection
- Authors: Na Li and Robert Ross
- Abstract summary: Human-robot studies are expensive to conduct and difficult to control.
As such, researchers sometimes turn to human-avatar interaction in the hope of faster and cheaper data collection.
This work shows that despite attitudinal differences and technical control limitations, there were a number of similarities detected in user behaviour.
This work suggests that, while avatar interaction is no true substitute for robot interaction studies, sufficient care in study design may allow well-executed human-avatar studies to supplement more challenging human-robot studies.
- Score: 6.997674465889922
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human-robot studies are expensive to conduct and difficult to control, and as such, researchers sometimes turn to human-avatar interaction in the hope of faster and cheaper data collection that can be transferred to the robot domain. In our own work, we are particularly interested in the challenge of detecting and modelling user confusion in interaction, and as part of this research programme, we conducted situated dialogue studies to investigate users' reactions to confusing scenarios presented in both physical and virtual environments. In this paper, we present a combined review of these studies and the results that we observed across these two embodiments. For the physical embodiment, we used a Pepper robot, while for the virtual modality, we used a 3D avatar. Our study shows that despite attitudinal differences and technical control limitations, there were a number of similarities detected in user behaviour and self-reporting results across embodiment options. This work suggests that, while avatar interaction is no true substitute for robot interaction studies, sufficient care in study design may allow well-executed human-avatar studies to supplement more challenging human-robot studies.
Related papers
- Experimental Evaluation of ROS-Causal in Real-World Human-Robot Spatial Interaction Scenarios [3.8625803348911774]
We present an experimental evaluation of ROS-Causal, a ROS-based framework for causal discovery in human-robot spatial interactions.
We show how causal models can be extracted onboard the robot directly during data collection.
The online causal models generated from the simulation are consistent with those from lab experiments.
arXiv Detail & Related papers (2024-06-07T14:20:30Z)
- Real-time Addressee Estimation: Deployment of a Deep-Learning Model on the iCub Robot [52.277579221741746]
Addressee Estimation is a skill essential for social robots to interact smoothly with humans.
Inspired by human perceptual skills, a deep-learning model for Addressee Estimation is designed, trained, and deployed on an iCub robot.
The study presents the implementation procedure and the performance of the model deployed in real-time human-robot interaction.
arXiv Detail & Related papers (2023-11-09T13:01:21Z)
- Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots [119.55240471433302]
Habitat 3.0 is a simulation platform for studying collaborative human-robot tasks in home environments.
It addresses challenges in modeling complex deformable bodies and diversity in appearance and motion.
Human-in-the-loop infrastructure enables real human interaction with simulated robots via mouse/keyboard or a VR interface.
arXiv Detail & Related papers (2023-10-19T17:29:17Z)
- Causal Discovery of Dynamic Models for Predicting Human Spatial Interactions [5.742409080817885]
We propose an application of causal discovery methods to model human-robot spatial interactions.
New methods and practical solutions are discussed to exploit, for the first time, a state-of-the-art causal discovery algorithm.
arXiv Detail & Related papers (2022-10-29T08:56:48Z)
- Co-Located Human-Human Interaction Analysis using Nonverbal Cues: A Survey [71.43956423427397]
We aim to identify the nonverbal cues and computational methodologies resulting in effective performance.
This survey differs from its counterparts by involving the widest spectrum of social phenomena and interaction settings.
Some major observations are: the most often used nonverbal cue is speaking activity, the most common computational method is the support vector machine, the typical interaction environment is a meeting of 3-4 persons, and the dominant sensing approach is microphones and cameras.
arXiv Detail & Related papers (2022-07-20T13:37:57Z)
- Data-driven emotional body language generation for social robotics [58.88028813371423]
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration.
We implement a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions.
The evaluation study found that the anthropomorphism and animacy of the generated expressions are not perceived differently from the hand-designed ones.
arXiv Detail & Related papers (2022-05-02T09:21:39Z)
- Towards a Real-time Measure of the Perception of Anthropomorphism in Human-robot Interaction [5.112850258732114]
We conducted an online human-robot interaction experiment in an educational use-case scenario.
43 English-speaking participants took part in the study.
We found that the degree of subjective and objective perception of anthropomorphism positively correlates with acoustic-prosodic entrainment.
arXiv Detail & Related papers (2022-01-24T11:10:37Z)
- Human-Robot Collaboration and Machine Learning: A Systematic Review of Recent Research [69.48907856390834]
Human-robot collaboration (HRC) is the field that explores the interaction between humans and robots.
This paper proposes a thorough literature review of the use of machine learning techniques in the context of HRC.
arXiv Detail & Related papers (2021-10-14T15:14:33Z)
- Let's be friends! A rapport-building 3D embodied conversational agent for the Human Support Robot [0.0]
Partial, subtle mirroring of nonverbal behaviors during conversations (also known as mimicking or parallel empathy) is essential for rapport building.
Our research question is whether integrating an ECA able to mirror its interlocutor's facial expressions and head movements with a human-service robot will improve the user's experience.
Our contribution is the integration of an expressive ECA, able to track its interlocutor's face and to mirror his/her facial expressions and head movements in real time, with a human support robot.
arXiv Detail & Related papers (2021-03-08T01:02:41Z)
- Controlling the Sense of Agency in Dyadic Robot Interaction: An Active Inference Approach [6.421670116083633]
We examine dyadic imitative interactions of robots using a variational recurrent neural network model.
We examine how regulating the complexity term to minimize free energy during training determines the dynamic characteristics of the networks.
arXiv Detail & Related papers (2021-03-03T02:38:09Z)
- Learning Predictive Models From Observation and Interaction [137.77887825854768]
Learning predictive models from interaction with the world allows an agent, such as a robot, to learn about how the world works.
However, learning a model that captures the dynamics of complex skills represents a major challenge.
We propose a method to augment the training set with observational data of other agents, such as humans.
arXiv Detail & Related papers (2019-12-30T01:10:41Z)