Continuous ErrP detections during multimodal human-robot interaction
- URL: http://arxiv.org/abs/2207.12267v1
- Date: Mon, 25 Jul 2022 15:39:32 GMT
- Title: Continuous ErrP detections during multimodal human-robot interaction
- Authors: Su Kyoung Kim, Michael Maurus, Mathias Trampler, Marc Tabie, Elsa
Andrea Kirchner
- Abstract summary: We implement a multimodal human-robot interaction (HRI) scenario, in which a simulated robot communicates with its human partner through speech and gestures.
The human partner, in turn, evaluates whether the robot's verbal announcement (intention) matches the action (pointing gesture) chosen by the robot.
Intrinsic evaluations of robot actions by humans, evident in the EEG, were recorded in real time, continuously segmented online, and classified asynchronously.
- Score: 2.5199066832791535
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Human-in-the-loop approaches are of great importance for robot applications.
In the presented study, we implemented a multimodal human-robot interaction
(HRI) scenario, in which a simulated robot communicates with its human partner
through speech and gestures. The robot announces its intention verbally and
selects the appropriate action using pointing gestures. The human partner, in
turn, evaluates whether the robot's verbal announcement (intention) matches the
action (pointing gesture) chosen by the robot. For cases where the verbal
announcement of the robot does not match the corresponding action choice of the
robot, we expect error-related potentials (ErrPs) in the human
electroencephalogram (EEG). These intrinsic evaluations of robot actions by
humans, evident in the EEG, were recorded in real time, continuously segmented
online and classified asynchronously. For feature selection, we propose an
approach that allows the combinations of forward and backward sliding windows
to train a classifier. We achieved an average classification performance of 91%
across 9 subjects. As expected, we also observed a relatively high variability
between the subjects. In the future, the proposed feature selection approach
will be extended to allow for customization of feature selection. To this end,
the best combinations of forward and backward sliding windows will be
automatically selected to account for inter-subject variability in
classification performance. In addition, we plan to use the intrinsic human
error evaluation evident in the error case by the ErrP in interactive
reinforcement learning to improve multimodal human-robot interaction.
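The abstract's feature selection combines forward and backward sliding windows over each EEG segment to build a feature vector for the classifier. A minimal NumPy sketch of that windowing idea follows; the window length, step size, and mean-amplitude feature are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np

def sliding_window_features(epoch, win=50, step=25):
    """Extract mean-amplitude features from forward and backward
    sliding windows over one EEG epoch (channels x samples).

    Forward windows slide from the epoch onset toward the end;
    backward windows slide from the epoch end toward the onset.
    Window length and step are in samples and purely illustrative.
    """
    n_samples = epoch.shape[1]
    starts = range(0, n_samples - win + 1, step)

    forward = [epoch[:, s:s + win].mean(axis=1) for s in starts]
    backward = [epoch[:, n_samples - s - win:n_samples - s].mean(axis=1)
                for s in starts]

    # Concatenate both window sets into a single feature vector
    return np.concatenate(forward + backward, axis=None)

# Example: one 8-channel epoch of 200 samples
rng = np.random.default_rng(0)
epoch = rng.standard_normal((8, 200))
features = sliding_window_features(epoch)
print(features.shape)  # 2 window sets x 7 windows x 8 channels -> (112,)
```

A classifier (e.g. LDA, common for ErrP detection) would then be trained on such vectors; selecting the best combination of forward and backward windows per subject is what the paper proposes to automate.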
Related papers
- Human-Robot Mutual Learning through Affective-Linguistic Interaction and Differential Outcomes Training [Pre-Print] [0.3811184252495269]
We test how affective-linguistic communication, in combination with differential outcomes training, affects mutual learning in a human-robot context.
Taking inspiration from child-caregiver dynamics, our human-robot interaction setup consists of a (simulated) robot attempting to learn how best to communicate internal, homeostatically-controlled needs.
arXiv Detail & Related papers (2024-07-01T13:35:08Z) - A Multi-Modal Explainability Approach for Human-Aware Robots in Multi-Party Conversation [39.87346821309096]
We present an addressee estimation model with improved performance in comparison with the previous SOTA.
We also propose several ways to incorporate explainability and transparency in the aforementioned architecture.
arXiv Detail & Related papers (2024-05-20T13:09:32Z) - Learning Multimodal Latent Dynamics for Human-Robot Interaction [19.803547418450236]
This article presents a method for learning well-coordinated Human-Robot Interaction (HRI) from Human-Human Interactions (HHI).
We devise a hybrid approach using Hidden Markov Models (HMMs) as the latent space priors for a Variational Autoencoder to model a joint distribution over the interacting agents.
We find that users perceive our method as more human-like, timely, and accurate, and rank it with a higher degree of preference over other baselines.
arXiv Detail & Related papers (2023-11-27T23:56:59Z) - Real-time Addressee Estimation: Deployment of a Deep-Learning Model on
the iCub Robot [52.277579221741746]
Addressee Estimation is a skill essential for social robots to interact smoothly with humans.
Inspired by human perceptual skills, a deep-learning model for Addressee Estimation is designed, trained, and deployed on an iCub robot.
The study presents the procedure of such implementation and the performance of the model deployed in real-time human-robot interaction.
arXiv Detail & Related papers (2023-11-09T13:01:21Z) - Data-driven emotional body language generation for social robotics [58.88028813371423]
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration.
We implement a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions.
The evaluation study found that the anthropomorphism and animacy of the generated expressions are not perceived differently from the hand-designed ones.
arXiv Detail & Related papers (2022-05-02T09:21:39Z) - Model Predictive Control for Fluid Human-to-Robot Handovers [50.72520769938633]
Planning motions that take human comfort into account is typically not part of the human-robot handover process.
We propose to generate smooth motions via an efficient model-predictive control framework.
We conduct human-to-robot handover experiments on a diverse set of objects with several users.
arXiv Detail & Related papers (2022-03-31T23:08:20Z) - REvolveR: Continuous Evolutionary Models for Robot-to-robot Policy
Transfer [57.045140028275036]
We consider the problem of transferring a policy across two different robots with significantly different parameters such as kinematics and morphology.
Existing approaches that train a new policy by matching the action or state transition distribution, including imitation learning methods, fail due to optimal action and/or state distribution being mismatched in different robots.
We propose a novel method named REvolveR of using continuous evolutionary models for robotic policy transfer implemented in a physics simulator.
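The core REvolveR idea, continuously evolving one robot into another while the policy adapts, can be sketched as linear interpolation between the two robots' physical parameters with fine-tuning at each intermediate robot. The parameter names and the injected `fine_tune` callback below are illustrative stand-ins; the actual method operates on full robot models inside a physics simulator:

```python
import numpy as np

def interpolate_robot(source_params, target_params, alpha):
    """Blend two robots' physical parameters; alpha=0 gives the source
    robot, alpha=1 the target. Values stand in for e.g. link lengths
    or masses (hypothetical keys for illustration)."""
    return {k: (1 - alpha) * source_params[k] + alpha * target_params[k]
            for k in source_params}

def evolve_policy(policy, source_params, target_params, fine_tune, n_steps=10):
    """Gradually transfer a policy by fine-tuning it on a sequence of
    intermediate robots rather than jumping directly to the target."""
    for alpha in np.linspace(0.0, 1.0, n_steps):
        robot = interpolate_robot(source_params, target_params, alpha)
        policy = fine_tune(policy, robot)  # caller-supplied RL update
    return policy
```

For example, with `source = {"link_len": 1.0, "mass": 2.0}` and `target = {"link_len": 3.0, "mass": 4.0}`, `interpolate_robot(source, target, 0.5)` yields the halfway robot `{"link_len": 2.0, "mass": 3.0}`.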
arXiv Detail & Related papers (2022-02-10T18:50:25Z) - A Neurorobotics Approach to Behaviour Selection based on Human Activity
Recognition [0.0]
Behaviour selection has been an active research topic for robotics, in particular in the field of human-robot interaction.
Most approaches to date consist of deterministic associations between the recognised activities and the robot behaviours.
This paper presents a neurorobotics approach based on computational models that resemble neurophysiological aspects of living beings.
arXiv Detail & Related papers (2021-07-27T01:25:58Z) - Show Me What You Can Do: Capability Calibration on Reachable Workspace
for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground-truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z) - Affect-Driven Modelling of Robot Personality for Collaborative
Human-Robot Interactions [16.40684407420441]
Collaborative interactions require social robots to adapt to the dynamics of human affective behaviour.
We propose a novel framework for personality-driven behaviour generation in social robots.
arXiv Detail & Related papers (2020-10-14T16:34:14Z) - Human Grasp Classification for Reactive Human-to-Robot Handovers [50.91803283297065]
We propose an approach for human-to-robot handovers in which the robot meets the human halfway.
We collect a human grasp dataset which covers typical ways of holding objects with various hand shapes and poses.
We present a planning and execution approach that takes the object from the human hand according to the detected grasp and hand position.
arXiv Detail & Related papers (2020-03-12T19:58:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.