Human strategies for correcting `human-robot' errors during a laundry sorting task
- URL: http://arxiv.org/abs/2504.08395v1
- Date: Fri, 11 Apr 2025 09:53:36 GMT
- Title: Human strategies for correcting `human-robot' errors during a laundry sorting task
- Authors: Pepita Barnard, Maria J Galvez Trigo, Dominic Price, Sue Cobb, Gisela Reyes-Cruz, Gustavo Berumen, David Branson III, Mojtaba A. Khanesar, Mercedes Torres Torres, Michel Valstar,
- Abstract summary: Video analysis from 42 participants found speech patterns, including laughter, verbal expressions, and filler words such as "oh" and "ok". Common strategies deployed when errors occurred included correcting and teaching, taking responsibility, and displays of frustration. An anthropomorphic robot may not be ideally suited to this kind of task.
- Score: 3.9697512504288373
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Mental models and expectations underlying human-human interaction (HHI) inform human-robot interaction (HRI) with domestic robots. To ease collaborative home tasks by improving domestic robot speech and behaviours for human-robot communication, we designed a study to understand how people communicated when failure occurs. To identify patterns of natural communication, particularly in response to robotic failures, participants instructed Laundrobot to move laundry into baskets using natural language and gestures. Laundrobot either worked error-free or in one of two error modes. Participants were not advised that Laundrobot would be a human actor, nor were they given information about the error modes. Video analysis from 42 participants found speech patterns that included laughter, verbal expressions, and filler words such as "oh" and "ok", as well as sequences of body movements, including touching one's own face, increased pointing with a static finger, and expressions of surprise. Common strategies deployed when errors occurred included correcting and teaching, taking responsibility, and displays of frustration. The strength of reaction to errors diminished with exposure, possibly indicating acceptance or resignation. Some participants used strategies similar to those used to communicate with other technologies, such as smart assistants. An anthropomorphic robot may not be ideally suited to this kind of task. Laundrobot's appearance, morphology, voice, capabilities, and recovery strategies may have affected how it was perceived. Some participants indicated that Laundrobot's actual skills were not aligned with their expectations; this made it difficult to know what to expect and how much Laundrobot understood. Expertise, personality, and cultural differences may affect responses; however, these were not assessed.
Related papers
- Adapting Robot's Explanation for Failures Based on Observed Human Behavior in Human-Robot Collaboration [6.047608758920625]
We analyzed how human behavior changed in response to different types of failures and varying explanation levels.
We formulate a data-driven predictor to predict human confusion during robot failure explanations.
arXiv Detail & Related papers (2025-04-13T20:49:43Z)
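The summary above names a data-driven confusion predictor without specifying it. As a hedged illustration only, a minimal sketch of such a classifier over observed-behaviour features might look like the following; the feature names, synthetic data, and labels are assumptions, not the paper's actual pipeline.

```python
# Hypothetical sketch: predict human confusion from behaviour observed
# during a robot's failure explanation. Features and labels are synthetic
# placeholders, not the paper's data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row: [gaze_shifts_per_min, response_delay_s, repeated_queries, head_tilts]
X = rng.random((200, 4))
y = (X[:, 1] + 0.5 * X[:, 2] > 0.9).astype(int)  # 1 = "confused" (synthetic rule)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

In practice the features would be extracted from annotated interaction video rather than drawn at random.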
- Mining multi-modal communication patterns in interaction with explainable and non-explainable robots [0.138120109831448]
We investigate interaction patterns for humans interacting with explainable and non-explainable robots.
We video recorded and analyzed human behavior during a board game, where 20 humans verbally instructed either an explainable or non-explainable Pepper robot to move objects on the board.
arXiv Detail & Related papers (2023-12-22T12:12:55Z)
- Real-time Addressee Estimation: Deployment of a Deep-Learning Model on the iCub Robot [52.277579221741746]
Addressee Estimation is a skill essential for social robots to interact smoothly with humans.
Inspired by human perceptual skills, a deep-learning model for Addressee Estimation is designed, trained, and deployed on an iCub robot.
The study presents the procedure of such implementation and the performance of the model deployed in real-time human-robot interaction.
arXiv Detail & Related papers (2023-11-09T13:01:21Z)
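The deployed model is a trained deep network; the sketch below only illustrates what a real-time addressee-estimation loop could look like, with a hand-written scoring function standing in for that network. All feature names and thresholds are hypothetical.

```python
# Hypothetical sketch of a real-time addressee-estimation loop. A stand-in
# scoring function replaces the paper's deep model; only the control flow
# is meant to be illustrative.
import time
from dataclasses import dataclass

@dataclass
class SpeakerFeatures:
    gaze_angle_deg: float  # speaker's gaze direction relative to the robot
    body_angle_deg: float  # speaker's torso orientation relative to the robot

def addressee_score(f: SpeakerFeatures) -> float:
    """Probability-like score that the robot is being addressed."""
    # Closer gaze/body alignment with the robot -> higher score.
    return max(0.0, 1.0 - (abs(f.gaze_angle_deg) + abs(f.body_angle_deg)) / 180.0)

def run_loop(frames, threshold=0.6):
    for f in frames:
        if addressee_score(f) >= threshold:
            print("robot is addressed -> respond")
        else:
            print("robot is not addressed -> stay idle")
        time.sleep(0.01)  # stand-in for the per-frame time budget

run_loop([SpeakerFeatures(5, 10), SpeakerFeatures(80, 60)])
```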
- A Human-Robot Mutual Learning System with Affect-Grounded Language Acquisition and Differential Outcomes Training [0.1812164955222814]
The paper presents a novel human-robot interaction setup for identifying robot homeostatic needs.
We adopted a differential outcomes training protocol whereby the robot provides feedback specific to its internal needs.
We found evidence that DOT can enhance the human's learning efficiency, which in turn enables more efficient robot language acquisition.
arXiv Detail & Related papers (2023-10-20T09:41:31Z)
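As a hedged sketch of the differential outcomes idea: a correct response to each robot need yields a need-specific outcome rather than one generic reward. The needs and outcome signals below are invented for illustration.

```python
# Hypothetical sketch of differential outcomes training (DOT): each robot
# need has its own distinctive feedback signal; incorrect responses get none.
NEED_OUTCOMES = {
    "energy":  "green light + rising chime",
    "comfort": "blue light + soft purr",
}

def feedback(need: str, human_action: str) -> str:
    if human_action != f"satisfy_{need}":
        return "no outcome"        # incorrect responses receive no signal
    return NEED_OUTCOMES[need]     # correct responses receive a need-specific one

print(feedback("energy", "satisfy_energy"))   # -> green light + rising chime
print(feedback("comfort", "satisfy_energy"))  # -> no outcome
```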
- Giving Robots a Hand: Learning Generalizable Manipulation with Eye-in-Hand Human Video Demonstrations [66.47064743686953]
Eye-in-hand cameras have shown promise in enabling greater sample efficiency and generalization in vision-based robotic manipulation.
Videos of humans performing tasks, on the other hand, are much cheaper to collect since they eliminate the need for expertise in robotic teleoperation.
In this work, we augment narrow robotic imitation datasets with broad unlabeled human video demonstrations to greatly enhance the generalization of eye-in-hand visuomotor policies.
arXiv Detail & Related papers (2023-07-12T07:04:53Z)
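A minimal sketch of the general recipe, assuming the augmentation amounts to mixing a small action-labelled robot dataset with a much larger unlabelled human-video dataset at batch-sampling time; the mixing ratio and record structure are assumptions, not the paper's implementation.

```python
# Hypothetical sketch: sample training batches that mix labelled robot
# demonstrations with unlabelled human video clips.
import random

robot_demos = [{"obs": f"robot_frame_{i}", "action": i} for i in range(100)]
human_clips = [{"obs": f"human_frame_{i}", "action": None} for i in range(1000)]

def sample_batch(batch_size=8, human_fraction=0.5):
    """Draw a mixed batch; human clips carry no action labels."""
    n_human = int(batch_size * human_fraction)
    batch = random.sample(human_clips, n_human)
    batch += random.sample(robot_demos, batch_size - n_human)
    random.shuffle(batch)
    return batch

batch = sample_batch()
print(sum(item["action"] is None for item in batch), "human samples in the batch")
```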
- Aligning Robot and Human Representations [50.070982136315784]
We argue that current representation learning approaches in robotics should be studied from the perspective of how well they accomplish the objective of representation alignment.
We mathematically define the problem, identify its key desiderata, and situate current methods within this formalism.
arXiv Detail & Related papers (2023-02-03T18:59:55Z)
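The paper's exact formalism is not reproduced in this summary. As a hedged LaTeX sketch only, one common way to pose representation alignment is as minimizing an expected divergence between the human's and the robot's representation functions over task inputs:

```latex
% Hypothetical formulation, not the paper's exact definition: find a robot
% representation phi_R that stays close to the human's phi_H on task inputs.
\[
  \phi_R^{*} \;=\; \arg\min_{\phi_R \in \Phi}\;
  \mathbb{E}_{x \sim \mathcal{D}}\!\left[ d\big(\phi_H(x), \phi_R(x)\big) \right]
\]
% where $\mathcal{D}$ is the task input distribution and $d$ is a chosen
% dissimilarity measure (e.g., a feature-space distance).
```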
- Examining Audio Communication Mechanisms for Supervising Fleets of Agricultural Robots [2.76240219662896]
We develop a simulation platform where agbots are deployed across a field, randomly encounter failures, and call for help from the operator.
As the agbots report errors, various audio communication mechanisms are tested to convey which robot failed and what type of failure occurred.
A user study was conducted to test three audio communication methods: earcons, single-phrase commands, and full sentence communication.
arXiv Detail & Related papers (2022-08-22T17:19:20Z)
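A minimal sketch contrasting the three audio conditions named above: earcons, single-phrase commands, and full-sentence messages. The message templates, failure types, and sound-file names are illustrative assumptions, not the study's materials.

```python
# Hypothetical sketch of the three audio alert conditions for a fleet
# operator: an earcon, a short phrase, or a full sentence per failure.
from enum import Enum

class Mode(Enum):
    EARCON = "earcon"
    PHRASE = "phrase"
    SENTENCE = "sentence"

EARCONS = {"stuck": "beep-low.wav", "low_battery": "beep-rising.wav"}

def render_alert(robot_id: int, failure: str, mode: Mode) -> str:
    label = failure.replace("_", " ")
    if mode is Mode.EARCON:
        return f"play {EARCONS[failure]} (panned toward robot {robot_id})"
    if mode is Mode.PHRASE:
        return f'say "Robot {robot_id}: {label}"'
    return f'say "Robot {robot_id} has stopped because of a {label} and needs assistance."'

for mode in Mode:
    print(render_alert(3, "low_battery", mode))
```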
- Robots with Different Embodiments Can Express and Influence Carefulness in Object Manipulation [104.5440430194206]
This work investigates the perception of object manipulations performed with a communicative intent by two robots.
We designed the robots' movements to communicate carefulness or not during the transportation of objects.
arXiv Detail & Related papers (2022-08-03T13:26:52Z)
- Explain yourself! Effects of Explanations in Human-Robot Interaction [10.389325878657697]
Explanations of robot decisions could affect user perceptions, justify their reliability, and increase trust.
The effects on human perceptions of robots that explain their decisions have not been studied thoroughly.
This study demonstrates the need for and potential of explainable human-robot interaction.
arXiv Detail & Related papers (2022-04-09T15:54:27Z)
- Synthesis and Execution of Communicative Robotic Movements with Generative Adversarial Networks [59.098560311521034]
We focus on how to transfer on two different robotic platforms the same kinematics modulation that humans adopt when manipulating delicate objects.
We choose to modulate the velocity profile adopted by the robots' end-effector, inspired by what humans do when transporting objects with different characteristics.
We exploit a novel Generative Adversarial Network architecture, trained with human kinematics examples, to generalize over them and generate new and meaningful velocity profiles.
arXiv Detail & Related papers (2022-03-29T15:03:05Z)
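As a hedged illustration of the technique named above, the sketch below trains a toy GAN whose generator emits fixed-length end-effector velocity profiles. The architecture, sizes, and synthetic bell-shaped "human" profiles are assumptions, not the paper's network or data.

```python
# Hypothetical sketch: a tiny GAN whose generator outputs 1-D velocity
# profiles, trained against synthetic bell-shaped "human" profiles.
import torch
import torch.nn as nn

SEQ_LEN, LATENT, BATCH = 50, 16, 32

G = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(),
                  nn.Linear(64, SEQ_LEN), nn.Softplus())  # keep velocities >= 0
D = nn.Sequential(nn.Linear(SEQ_LEN, 64), nn.LeakyReLU(0.2),
                  nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# Synthetic "human" data: bell-shaped velocity curves over normalized time.
t = torch.linspace(0, 1, SEQ_LEN)
real = torch.exp(-((t - 0.5) ** 2) / 0.02).repeat(BATCH, 1)

for step in range(100):
    z = torch.randn(BATCH, LATENT)
    fake = G(z)
    # Discriminator: real profiles -> 1, generated profiles -> 0.
    opt_d.zero_grad()
    loss_d = (bce(D(real), torch.ones(BATCH, 1))
              + bce(D(fake.detach()), torch.zeros(BATCH, 1)))
    loss_d.backward()
    opt_d.step()
    # Generator: fool the discriminator.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(BATCH, 1))
    loss_g.backward()
    opt_g.step()

print("sample profile (first 5 steps):",
      G(torch.randn(1, LATENT)).detach().squeeze()[:5])
```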
- Human Grasp Classification for Reactive Human-to-Robot Handovers [50.91803283297065]
We propose an approach for human-to-robot handovers in which the robot meets the human halfway.
We collect a human grasp dataset which covers typical ways of holding objects with various hand shapes and poses.
We present a planning and execution approach that takes the object from the human hand according to the detected grasp and hand position.
arXiv Detail & Related papers (2020-03-12T19:58:03Z)
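A minimal sketch of the planning step described above, assuming a discrete set of detected grasp classes, each mapped to an approach offset that keeps the gripper away from the side of the object the human hand occupies; the classes and offsets are invented for illustration.

```python
# Hypothetical sketch: choose a robot approach offset from the detected
# human grasp class so the gripper takes the object's free side.
# Assumed frame: +y is the human's left, +z is up; offsets in meters.
GRASP_TO_APPROACH = {
    "pinch_top":   (0.0, 0.0, -0.10),  # hand on top -> approach from below
    "palm_bottom": (0.0, 0.0, +0.10),  # hand below  -> approach from above
    "side_left":   (0.0, -0.10, 0.0),  # hand on left -> approach from the right
    "side_right":  (0.0, +0.10, 0.0),  # hand on right -> approach from the left
}

def plan_take(hand_xyz, grasp_class):
    """Return a gripper target near the detected hand position."""
    dx, dy, dz = GRASP_TO_APPROACH[grasp_class]
    return (hand_xyz[0] + dx, hand_xyz[1] + dy, hand_xyz[2] + dz)

print(plan_take((0.40, 0.00, 0.90), "side_left"))  # -> (0.4, -0.1, 0.9)
```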
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.