Adapting Robot's Explanation for Failures Based on Observed Human Behavior in Human-Robot Collaboration
- URL: http://arxiv.org/abs/2504.09717v1
- Date: Sun, 13 Apr 2025 20:49:43 GMT
- Title: Adapting Robot's Explanation for Failures Based on Observed Human Behavior in Human-Robot Collaboration
- Authors: Andreas Naoum, Parag Khanna, Elmira Yadollahi, Mårten Björkman, Christian Smith
- Abstract summary: We analyzed how human behavior changed in response to different types of failures and varying explanation levels. We formulate a data-driven predictor to predict human confusion during robot failure explanations.
- Score: 6.047608758920625
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This work aims to interpret human behavior to anticipate potential user confusion when a robot provides explanations for failure, allowing the robot to adapt its explanations for more natural and efficient collaboration. Using a dataset that included facial emotion detection, eye gaze estimation, and gestures from 55 participants in a user study, we analyzed how human behavior changed in response to different types of failures and varying explanation levels. Our goal is to assess whether human collaborators are ready to accept less detailed explanations without inducing confusion. We formulate a data-driven predictor to predict human confusion during robot failure explanations. We also propose and evaluate a mechanism, based on the predictor, to adapt the explanation level according to observed human behavior. The promising results from this evaluation indicate the potential of this research in adapting a robot's explanations for failures to enhance the collaborative experience.
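As an illustrative sketch only (not the authors' implementation), the pipeline the abstract describes — a data-driven confusion predictor over observed behavioral cues, plus a mechanism that adapts the explanation level based on its output — could take the following shape. The feature names, weights, and explanation levels are hypothetical placeholders for a model fit on the 55-participant dataset of facial emotion, gaze, and gesture features:

```python
import math

# Hypothetical feature weights; a real predictor would be trained on
# the study's facial-emotion, eye-gaze, and gesture data.
WEIGHTS = {"confused_emotion": 2.0, "gaze_on_robot": 1.2, "head_scratch": 1.5}
BIAS = -2.5

def predict_confusion(features):
    """Return an estimated probability of confusion from behavior cues."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic link

# Hypothetical ordering of explanation levels, from least to most detailed.
LEVELS = ["non-verbal cue", "brief statement", "detailed explanation"]

def adapt_level(current, p_confusion, threshold=0.5):
    """Raise detail when confusion is likely; otherwise try less detail."""
    i = LEVELS.index(current)
    if p_confusion >= threshold:
        return LEVELS[min(i + 1, len(LEVELS) - 1)]
    return LEVELS[max(i - 1, 0)]

p = predict_confusion({"confused_emotion": 1.0, "gaze_on_robot": 1.0,
                       "head_scratch": 0.0})
print(adapt_level("brief statement", p))  # escalates to "detailed explanation"
```

The key design point this sketch mirrors is that the robot first tries less detailed explanations and escalates only when the predictor flags likely confusion, matching the paper's goal of avoiding unnecessarily verbose explanations.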
Related papers
- An Epistemic Human-Aware Task Planner which Anticipates Human Beliefs and Decisions [8.309981857034902]
The aim is to build a robot policy that accounts for uncontrollable human behaviors.
We propose a novel planning framework and build a solver based on AND-OR search.
Preliminary experiments in two domains, one novel and one adapted, demonstrate the effectiveness of the framework.
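To make the AND-OR search mentioned above concrete, here is a minimal, self-contained sketch of the general technique (the toy domain and state names are invented, not the paper's): robot choices are OR nodes (the planner picks one branch), while uncontrollable human reactions are AND nodes (the plan must succeed in every branch).

```python
def and_or_search(state, goal, robot_moves, human_moves, visited=frozenset()):
    """Return True if some robot plan reaches `goal` under every
    possible human reaction, False otherwise."""
    if state == goal:
        return True
    if state in visited:
        return False  # avoid cycles
    visited = visited | {state}
    for nxt in robot_moves.get(state, []):       # OR: robot picks one action
        outcomes = human_moves.get(nxt, [nxt])   # AND: all human reactions
        if all(and_or_search(o, goal, robot_moves, human_moves, visited)
               for o in outcomes):
            return True
    return False

# Toy domain: a handover may succeed outright or trigger a human-forced
# retry; a valid plan must cover both outcomes.
robot_moves = {"start": ["handover"], "retry": ["handover2"]}
human_moves = {"handover": ["done", "retry"], "handover2": ["done"]}
print(and_or_search("start", "done", robot_moves, human_moves))  # True
```

Treating the human's reactions as AND branches is what makes the resulting policy robust to uncontrollable human behavior: the solver only accepts a robot action if all of its possible human-driven outcomes are recoverable.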
arXiv Detail & Related papers (2024-09-27T08:27:36Z)
- Real-time Addressee Estimation: Deployment of a Deep-Learning Model on the iCub Robot [52.277579221741746]
Addressee Estimation is a skill essential for social robots to interact smoothly with humans.
Inspired by human perceptual skills, a deep-learning model for Addressee Estimation is designed, trained, and deployed on an iCub robot.
The study presents the procedure of such implementation and the performance of the model deployed in real-time human-robot interaction.
arXiv Detail & Related papers (2023-11-09T13:01:21Z)
- Robust Robot Planning for Human-Robot Collaboration [11.609195090422514]
In human-robot collaboration, the objectives of the human are often unknown to the robot.
We propose an approach to automatically generate an uncertain human behavior (a policy) for each given objective function.
We also propose a robot planning algorithm that is robust to the above-mentioned uncertainties.
arXiv Detail & Related papers (2023-02-27T16:02:48Z)
- Introspection-based Explainable Reinforcement Learning in Episodic and Non-episodic Scenarios [14.863872352905629]
An introspection-based approach can be used in conjunction with reinforcement learning agents to provide probabilities of success.
The approach can also generate explanations for actions taken in non-episodic robotics environments.
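As a hedged sketch of the general idea (the normalization below is an assumption, not necessarily this paper's exact transformation): an introspection-based agent can turn its learned Q-values into a probability-of-success estimate that it cites when explaining an action.

```python
def success_probability(q_value, q_min, q_max):
    """Map a Q-value into [0, 1] relative to the best/worst values seen."""
    if q_max == q_min:
        return 0.5  # no information to discriminate between actions
    return (q_value - q_min) / (q_max - q_min)

def explain(action, q_values):
    """Produce a natural-language explanation from the agent's own values."""
    p = success_probability(q_values[action],
                            min(q_values.values()), max(q_values.values()))
    return f"I chose '{action}' because I estimate a {p:.0%} chance of success."

# Hypothetical Q-values for three candidate actions.
q = {"grasp": 0.9, "push": 0.4, "wait": 0.1}
print(explain("grasp", q))
```

The appeal of this style of explanation is that it needs no extra learned model: the agent introspects on quantities it already maintains for decision-making.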
arXiv Detail & Related papers (2022-11-23T13:05:52Z)
- Causal Discovery of Dynamic Models for Predicting Human Spatial Interactions [5.742409080817885]
We propose an application of causal discovery methods to model human-robot spatial interactions.
New methods and practical solutions are discussed to exploit, for the first time, a state-of-the-art causal discovery algorithm.
arXiv Detail & Related papers (2022-10-29T08:56:48Z)
- Robots with Different Embodiments Can Express and Influence Carefulness in Object Manipulation [104.5440430194206]
This work investigates the perception of object manipulations performed with a communicative intent by two robots.
We designed the robots' movements to communicate carefulness or not during the transportation of objects.
arXiv Detail & Related papers (2022-08-03T13:26:52Z)
- Data-driven emotional body language generation for social robotics [58.88028813371423]
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration.
We implement a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions.
The evaluation study found that the anthropomorphism and animacy of the generated expressions are not perceived differently from the hand-designed ones.
arXiv Detail & Related papers (2022-05-02T09:21:39Z)
- Explain yourself! Effects of Explanations in Human-Robot Interaction [10.389325878657697]
Explanations of robot decisions could affect user perceptions, justify their reliability, and increase trust.
The effects on human perceptions of robots that explain their decisions have not been studied thoroughly.
This study demonstrates the need for and potential of explainable human-robot interaction.
arXiv Detail & Related papers (2022-04-09T15:54:27Z)
- Empirical Estimates on Hand Manipulation are Recoverable: A Step Towards Individualized and Explainable Robotic Support in Everyday Activities [80.37857025201036]
A key challenge for robotic systems is to infer the behavior of another agent.
Drawing correct inferences is especially challenging when (confounding) factors are not controlled experimentally.
We propose equipping robots with the necessary tools to conduct observational studies on people.
arXiv Detail & Related papers (2022-01-27T22:15:56Z)
- Joint Inference of States, Robot Knowledge, and Human (False-)Beliefs [90.20235972293801]
Aiming to understand how human (false-)belief, a core socio-cognitive ability, affects human interactions with robots, this paper proposes a graphical model to represent object states, robot knowledge, and human (false-)beliefs.
An inference algorithm is derived to fuse the individual parse graphs (pg) from all robots across multiple views into a joint parse graph, which affords more effective reasoning and overcomes errors originating from any single view.
arXiv Detail & Related papers (2020-04-25T23:02:04Z)
- Human Grasp Classification for Reactive Human-to-Robot Handovers [50.91803283297065]
We propose an approach for human-to-robot handovers in which the robot meets the human halfway.
We collect a human grasp dataset which covers typical ways of holding objects with various hand shapes and poses.
We present a planning and execution approach that takes the object from the human hand according to the detected grasp and hand position.
arXiv Detail & Related papers (2020-03-12T19:58:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.