Mining multi-modal communication patterns in interaction with
explainable and non-explainable robots
- URL: http://arxiv.org/abs/2312.14634v1
- Date: Fri, 22 Dec 2023 12:12:55 GMT
- Title: Mining multi-modal communication patterns in interaction with
explainable and non-explainable robots
- Authors: Suna Bensch and Amanda Eriksson
- Abstract summary: We investigate interaction patterns for humans interacting with explainable and non-explainable robots.
We video recorded and analyzed human behavior during a board game, where 20 humans verbally instructed either an explainable or non-explainable Pepper robot to move objects on the board.
- Score: 0.138120109831448
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We investigate interaction patterns for humans interacting with explainable
and non-explainable robots. Non-explainable robots are here robots that neither
explain their actions or inactions nor give any other feedback during
interaction, in contrast to explainable robots. We video recorded and
analyzed human behavior during a board game in which 20 humans verbally
instructed either an explainable or a non-explainable Pepper robot to move
objects on the board. The transcriptions and annotations of the videos were
transformed into transactions for association rule mining. Association rule
mining uncovered communication patterns in the interaction between the robots
and the humans, and the most interesting rules were additionally tested with
standard chi-square tests. Among the statistically significant results are a
strong correlation between men and non-explainable robots and between women and
explainable robots, and the finding that humans mirror some of the robots'
modalities. Our results also show that it is important to contextualize human
interaction patterns, and that this can be done easily with association rules as
an investigative tool. The presented results are important when designing robots
that should adapt their behavior to become understandable to the interacting humans.
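
The abstract's pipeline (annotations, transactions, association rules, chi-square follow-up) can be sketched compactly. Below is a minimal, self-contained Python sketch under assumed inputs: the annotation labels, the toy transactions, and the support/confidence thresholds are all hypothetical illustrations, not data or settings from the paper.

```python
from itertools import combinations
from scipy.stats import chi2_contingency

# Toy transactions: each set holds the annotation labels of one
# interaction segment. All labels and values here are hypothetical
# illustrations, not data from the paper.
transactions = [
    {"man", "non-explainable", "speech"},
    {"man", "non-explainable", "speech", "gaze"},
    {"woman", "explainable", "speech", "gesture"},
    {"woman", "explainable", "gesture"},
]
n = len(transactions)

def support(itemset):
    """Fraction of transactions containing every label in `itemset`."""
    return sum(itemset <= t for t in transactions) / n

# Mine single-antecedent rules A -> B; the 0.5 support and 0.8 confidence
# thresholds are illustrative guesses, not settings from the paper.
labels = sorted(set().union(*transactions))
for a, b in combinations(labels, 2):
    for ante, cons in ((a, b), (b, a)):
        supp = support({ante, cons})
        conf = supp / support({ante})
        lift = conf / support({cons})
        if supp >= 0.5 and conf >= 0.8:
            print(f"{ante} -> {cons}: support={supp:.2f}, "
                  f"confidence={conf:.2f}, lift={lift:.2f}")

# Follow up an interesting rule (gender vs. robot condition) with a
# chi-square test on the corresponding 2x2 contingency table.
def count(*required):
    return sum(set(required) <= t for t in transactions)

table = [
    [count("man", "explainable"), count("man", "non-explainable")],
    [count("woman", "explainable"), count("woman", "non-explainable")],
]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}, dof={dof}")
```

Libraries such as mlxtend provide full Apriori implementations; the point here is only the shape of the analysis: frequent co-occurrences surface candidate patterns, and a chi-square test on the corresponding contingency table checks whether a pattern (e.g. gender versus robot condition) is statistically significant.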
Related papers
- Singing the Body Electric: The Impact of Robot Embodiment on User
Expectations [7.408858358967414]
Users develop mental models of robots to conceptualize what kind of interactions they can have with those robots.
These conceptualizations are often formed before any interaction with the robot and are based only on observing the robot's physical design.
We propose to use multimodal features of robot embodiments to predict what kinds of expectations users will have about a given robot's social and physical capabilities.
arXiv Detail & Related papers (2024-01-13T04:42:48Z)
- Giving Robots a Hand: Learning Generalizable Manipulation with Eye-in-Hand Human Video Demonstrations [66.47064743686953]
Eye-in-hand cameras have shown promise in enabling greater sample efficiency and generalization in vision-based robotic manipulation.
Videos of humans performing tasks, on the other hand, are much cheaper to collect since they eliminate the need for expertise in robotic teleoperation.
In this work, we augment narrow robotic imitation datasets with broad unlabeled human video demonstrations to greatly enhance the generalization of eye-in-hand visuomotor policies.
arXiv Detail & Related papers (2023-07-12T07:04:53Z)
- Open-World Object Manipulation using Pre-trained Vision-Language Models [72.87306011500084]
For robots to follow instructions from people, they must be able to connect the rich semantic information in human vocabulary to their sensory observations and actions.
We develop a simple approach that leverages a pre-trained vision-language model to extract object-identifying information.
In a variety of experiments on a real mobile manipulator, we find that MOO generalizes zero-shot to a wide range of novel object categories and environments.
arXiv Detail & Related papers (2023-03-02T01:55:10Z)
- Robots with Different Embodiments Can Express and Influence Carefulness in Object Manipulation [104.5440430194206]
This work investigates the perception of object manipulations performed with a communicative intent by two robots.
We designed the robots' movements to communicate carefulness or not during the transportation of objects.
arXiv Detail & Related papers (2022-08-03T13:26:52Z)
- Explain yourself! Effects of Explanations in Human-Robot Interaction [10.389325878657697]
Explanations of robot decisions could affect user perceptions, justify their reliability, and increase trust.
The effects on human perceptions of robots that explain their decisions have not been studied thoroughly.
This study demonstrates the need for and potential of explainable human-robot interaction.
arXiv Detail & Related papers (2022-04-09T15:54:27Z)
- Synthesis and Execution of Communicative Robotic Movements with Generative Adversarial Networks [59.098560311521034]
We focus on how to transfer to two different robotic platforms the same kinematics modulation that humans adopt when manipulating delicate objects.
We choose to modulate the velocity profile adopted by the robots' end-effector, inspired by what humans do when transporting objects with different characteristics.
We exploit a novel Generative Adversarial Network architecture, trained with human kinematics examples, to generalize over them and generate new and meaningful velocity profiles.
arXiv Detail & Related papers (2022-03-29T15:03:05Z)
- Spatial Computing and Intuitive Interaction: Bringing Mixed Reality and Robotics Together [68.44697646919515]
This paper presents several human-robot systems that utilize spatial computing to enable novel robot use cases.
The combination of spatial computing and egocentric sensing on mixed reality devices enables them to capture and understand human actions and translate these to actions with spatial meaning.
arXiv Detail & Related papers (2022-02-03T10:04:26Z)
- A proxemics game between festival visitors and an industrial robot [1.2599533416395767]
Nonverbal behaviours of collaboration partners in human-robot teams influence the experience of the human interaction partners.
During the Ars Electronica 2020 Festival for Art, Technology and Society (Linz, Austria), we invited visitors to interact with an industrial robot.
We investigated general nonverbal behaviours of the humans interacting with the robot, as well as nonverbal behaviours of people in the audience.
arXiv Detail & Related papers (2021-05-28T13:26:00Z)
- Integrating Intrinsic and Extrinsic Explainability: The Relevance of Understanding Neural Networks for Human-Robot Interaction [19.844084722919764]
Explainable artificial intelligence (XAI) can help foster trust in and acceptance of intelligent and autonomous systems.
NICO, an open-source humanoid robot platform, is introduced, and it is shown how the interplay of intrinsic explanations given by the robot itself and extrinsic explanations provided by the environment enables efficient robotic behavior.
arXiv Detail & Related papers (2020-10-09T14:28:48Z)
- Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communication.
Results show that the explanations generated by our approach significantly improve collaboration performance and user perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z)