Training Models to Detect Successive Robot Errors from Human Reactions
- URL: http://arxiv.org/abs/2510.09080v1
- Date: Fri, 10 Oct 2025 07:25:44 GMT
- Title: Training Models to Detect Successive Robot Errors from Human Reactions
- Authors: Shannon Liu, Maria Teresa Parreira, Wendy Ju
- Abstract summary: This research uses machine learning to recognize stages of robot failure from human reactions. In a study with 26 participants interacting with a robot that made repeated conversational errors, behavioral features were extracted from video data to train models. The best model achieved 93.5% accuracy for detecting errors and 84.1% for classifying successive failures.
- Score: 11.790205457987488
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As robots become more integrated into society, detecting robot errors is essential for effective human-robot interaction (HRI). When a robot fails repeatedly, how can it know when to change its behavior? Humans naturally respond to robot errors through verbal and nonverbal cues that intensify over successive failures, from confusion and subtle speech changes to visible frustration and impatience. While prior work shows that human reactions can indicate robot failures, few studies examine how these evolving responses reveal successive failures. This research uses machine learning to recognize stages of robot failure from human reactions. In a study with 26 participants interacting with a robot that made repeated conversational errors, behavioral features were extracted from video data to train models for individual users. The best model achieved 93.5% accuracy for detecting errors and 84.1% for classifying successive failures. Modeling the progression of human reactions enhances error detection and understanding of repeated interaction breakdowns in HRI.
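The abstract describes training per-participant models on behavioral features extracted from video, with one binary task (error detection) and one multiclass task (staging successive failures). A minimal sketch of that two-task setup is below; the feature names, the synthetic data, and the random-forest choice are illustrative assumptions, not the paper's actual pipeline.

```python
# Sketch of a per-participant two-task pipeline: binary error detection plus
# multiclass successive-failure staging. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for windowed behavioral features from video
# (e.g., gaze shifts, facial action units, speech disfluencies).
n_windows = 300
X = rng.normal(size=(n_windows, 6))
stage = rng.integers(0, 4, size=n_windows)  # 0 = no error, 1..3 = successive failures
X[:, 0] += stage                            # reactions intensify with each failure
is_error = (stage > 0).astype(int)

X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    X, is_error, stage, test_size=0.3, random_state=0
)

# One detector and one stage classifier, as would be fit per participant.
detector = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
stager = RandomForestClassifier(random_state=0).fit(X_tr, s_tr)

det_acc = accuracy_score(y_te, detector.predict(X_te))
stage_acc = accuracy_score(s_te, stager.predict(X_te))
print(f"error detection acc: {det_acc:.2f}, stage classification acc: {stage_acc:.2f}")
```

In this framing the 4-way stage label subsumes the binary label, which mirrors the paper's observation that staging successive failures (84.1%) is harder than plain error detection (93.5%).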
Related papers
- ERR@HRI 2.0 Challenge: Multimodal Detection of Errors and Failures in Human-Robot Conversations [18.151307410451796]
The ERR@HRI 2.0 Challenge provides a dataset of conversational robot failures during human-robot conversations. The dataset includes 16 hours of dyadic human-robot interactions, incorporating facial, speech, and head movement features. Participants are invited to form teams and develop machine learning models that detect these failures using multimodal data.
arXiv Detail & Related papers (2025-07-17T18:21:45Z) - Why Robots Are Bad at Detecting Their Mistakes: Limitations of Miscommunication Detection in Human-Robot Dialogue [0.6118899177909359]
This research evaluates the effectiveness of machine learning models in detecting miscommunications in robot dialogue. After each conversational turn, users provided feedback on whether they perceived an error, enabling an analysis of the models' ability to accurately detect robot mistakes.
arXiv Detail & Related papers (2025-06-25T09:25:04Z) - Human strategies for correcting `human-robot' errors during a laundry sorting task [3.9697512504288373]
Video analysis from 42 participants found speech patterns, including laughter, verbal expressions, and filler words such as "oh" and "ok". Common strategies deployed when errors occurred included correcting and teaching, taking responsibility, and displays of frustration. An anthropomorphic robot may not be ideally suited to this kind of task.
arXiv Detail & Related papers (2025-04-11T09:53:36Z) - Human-Robot Interaction and Perceived Irrationality: A Study of Trust Dynamics and Error Acknowledgment [0.0]
This study systematically examines trust dynamics and system design by analyzing human reactions to robot failures. We conducted a four-stage survey to explore how trust evolves throughout human-robot interactions. Results indicate that trust in robotic systems significantly increased when robots acknowledged their errors or limitations.
arXiv Detail & Related papers (2024-03-21T11:00:11Z) - Real-time Addressee Estimation: Deployment of a Deep-Learning Model on the iCub Robot [52.277579221741746]
Addressee Estimation is a skill essential for social robots to interact smoothly with humans.
Inspired by human perceptual skills, a deep-learning model for Addressee Estimation is designed, trained, and deployed on an iCub robot.
The study presents the procedure of such implementation and the performance of the model deployed in real-time human-robot interaction.
arXiv Detail & Related papers (2023-11-09T13:01:21Z) - Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z) - Learning Latent Representations to Co-Adapt to Humans [12.71953776723672]
Non-stationary humans are challenging for robot learners.
In this paper we introduce an algorithmic formalism that enables robots to co-adapt alongside dynamic humans.
arXiv Detail & Related papers (2022-12-19T16:19:24Z) - Revisiting the Adversarial Robustness-Accuracy Tradeoff in Robot Learning [121.9708998627352]
Recent work has shown that, in practical robot learning applications, the effects of adversarial training do not pose a fair trade-off.
This work revisits the robustness-accuracy trade-off in robot learning by analyzing if recent advances in robust training methods and theory can make adversarial training suitable for real-world robot applications.
arXiv Detail & Related papers (2022-04-15T08:12:15Z) - Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground-truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z) - Joint Inference of States, Robot Knowledge, and Human (False-)Beliefs [90.20235972293801]
Aiming to understand how human (false-)belief, a core socio-cognitive ability, would affect human interactions with robots, this paper proposes to adopt a graphical model to unify the representation of object states, robot knowledge, and human (false-)beliefs.
An inference algorithm is derived to fuse the individual parse graphs (pg) from all robots across multiple views into a joint pg, which affords more effective reasoning and inference capability to overcome errors originating from a single view.
arXiv Detail & Related papers (2020-04-25T23:02:04Z) - Learning Predictive Models From Observation and Interaction [137.77887825854768]
Learning predictive models from interaction with the world allows an agent, such as a robot, to learn about how the world works.
However, learning a model that captures the dynamics of complex skills represents a major challenge.
We propose a method to augment the training set with observational data of other agents, such as humans.
arXiv Detail & Related papers (2019-12-30T01:10:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.