Trust-Aware Planning: Modeling Trust Evolution in Longitudinal
Human-Robot Interaction
- URL: http://arxiv.org/abs/2105.01220v1
- Date: Mon, 3 May 2021 23:38:34 GMT
- Title: Trust-Aware Planning: Modeling Trust Evolution in Longitudinal
Human-Robot Interaction
- Authors: Zahra Zahedi, Mudit Verma, Sarath Sreedharan, Subbarao Kambhampati
- Abstract summary: We propose a computational model for capturing and modulating trust in longitudinal human-robot interaction.
In our model, the robot integrates the human's trust and their expectations of the robot into its planning process to build and maintain trust over the interaction horizon.
- Score: 21.884895329834112
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Trust between team members is an essential requirement for any successful
cooperation. Thus, engendering and maintaining the fellow team members' trust
becomes a central responsibility for any member trying to not only successfully
participate in the task but to ensure the team achieves its goals. The problem
of trust management is particularly challenging in mixed human-robot teams
where the human and the robot may have different models of the task at hand,
and thus different expectations regarding the current course of action,
forcing the robot to engage in costly explicable behavior. We propose a
computational model for capturing and modulating trust in such longitudinal
human-robot interaction, where the human adopts a supervisory role. In our
model, the robot integrates the human's trust and their expectations of the robot
into its planning process to build and maintain trust over the interaction
horizon. By establishing the required level of trust, the robot can focus on
maximizing the team goal by eschewing explicit explanatory or explicable
behavior without worrying about the human supervisor monitoring and intervening
to stop behaviors they may not necessarily understand. We model this reasoning
about trust levels as a meta-reasoning process over individual planning tasks.
We additionally validate our model through a human-subject experiment.
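The abstract frames trust-aware planning as a meta-reasoning step: once enough trust is established, the robot can drop costly explicable behavior and pursue the cheaper optimal plan without triggering supervisor intervention. The sketch below illustrates that idea with a hypothetical threshold heuristic; the function names, costs, threshold, and linear trust-update rule are all assumptions for illustration, not the paper's actual model.

```python
# Illustrative sketch of trust-aware plan selection as meta-reasoning.
# All names, costs, and the trust-update rule are hypothetical; the paper's
# computational model is richer than this threshold heuristic.

def update_trust(trust, plan_matched_expectation, rate=0.2):
    """Move trust toward 1 when the robot's behavior matches the
    supervisor's expectation, toward 0 when it does not (linear update)."""
    target = 1.0 if plan_matched_expectation else 0.0
    return trust + rate * (target - trust)

def choose_plan(trust, optimal_cost, explicable_cost, trust_threshold=0.7):
    """Meta-reasoning step: with enough trust, the robot can execute the
    cheaper optimal plan; otherwise it pays the explicability premium."""
    if trust >= trust_threshold:
        return "optimal", optimal_cost
    return "explicable", explicable_cost

if __name__ == "__main__":
    # Simulate a short interaction horizon under this toy model.
    trust, total_cost = 0.5, 0.0
    for step in range(5):
        plan, cost = choose_plan(trust, optimal_cost=3.0, explicable_cost=5.0)
        total_cost += cost
        # In this toy, only explicable plans match the supervisor's
        # expectations, so executing the optimal plan slowly erodes trust.
        trust = update_trust(trust, plan_matched_expectation=(plan == "explicable"))
        print(f"step={step} plan={plan} trust={trust:.2f}")
    print(f"total cost over horizon: {total_cost}")
```

In this toy dynamic the robot first invests in explicable (expensive) behavior to raise trust above the threshold, then switches to the optimal plan, mirroring the trade-off the paper describes between explicability cost and team-goal cost.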
Related papers
- An Epistemic Human-Aware Task Planner which Anticipates Human Beliefs and Decisions [8.309981857034902]
The aim is to build a robot policy that accounts for uncontrollable human behaviors.
We propose a novel planning framework and build a solver based on AND-OR search.
Preliminary experiments in two domains, one novel and one adapted, demonstrate the effectiveness of the framework.
arXiv Detail & Related papers (2024-09-27T08:27:36Z)
- Common (good) practices measuring trust in HRI [55.2480439325792]
Trust in robots is widely believed to be imperative for the adoption of robots into people's daily lives.
Researchers have been exploring how people trust robots in different ways.
Most roboticists agree that insufficient levels of trust lead to a risk of disengagement.
arXiv Detail & Related papers (2023-11-20T20:52:10Z)
- Real-time Addressee Estimation: Deployment of a Deep-Learning Model on the iCub Robot [52.277579221741746]
Addressee Estimation is a skill essential for social robots to interact smoothly with humans.
Inspired by human perceptual skills, a deep-learning model for Addressee Estimation is designed, trained, and deployed on an iCub robot.
The study presents the procedure of such implementation and the performance of the model deployed in real-time human-robot interaction.
arXiv Detail & Related papers (2023-11-09T13:01:21Z)
- ToP-ToM: Trust-aware Robot Policy with Theory of Mind [3.4850414292716327]
Theory of Mind (ToM) is a cognitive architecture that endows humans with the ability to attribute mental states to others.
This paper investigates trust-aware robot policy with the theory of mind in a multiagent setting.
arXiv Detail & Related papers (2023-11-07T23:55:56Z)
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z)
- HERD: Continuous Human-to-Robot Evolution for Learning from Human Demonstration [57.045140028275036]
We show that manipulation skills can be transferred from a human to a robot through the use of micro-evolutionary reinforcement learning.
We propose an algorithm for multi-dimensional evolution path searching that allows joint optimization of both the robot evolution path and the policy.
arXiv Detail & Related papers (2022-12-08T15:56:13Z)
- Robust Planning for Human-Robot Joint Tasks with Explicit Reasoning on Human Mental State [2.8246074016493457]
We consider the human-aware task planning problem where a human-robot team is given a shared task with a known objective to achieve.
Recent approaches tackle it by modeling it as a team of independent, rational agents, where the robot plans for both agents' (shared) tasks.
We describe a novel approach to solve such problems, which models and uses execution-time observability conventions.
arXiv Detail & Related papers (2022-10-17T09:21:00Z)
- Evaluation of Performance-Trust vs Moral-Trust Violation in 3D Environment [1.4502611532302039]
We aim to design an experiment to investigate the consequences of performance-trust violation and moral-trust violation in a search and rescue scenario.
We want to see whether two similar robot failures, one caused by a performance-trust violation and the other by a moral-trust violation, have distinct effects on human trust.
arXiv Detail & Related papers (2022-06-30T17:27:09Z)
- Trust as Extended Control: Active Inference and User Feedback During Human-Robot Collaboration [2.6381163133447836]
Despite its crucial role, it is largely unknown how trust emerges, develops, and supports human interactions with nonhuman artefacts.
We introduce a model of trust as an agent's best explanation for reliable sensory exchange with an extended motor plant or partner.
We examine the role of shared behavior in the genesis of trust, especially in the context of dyadic collaboration.
arXiv Detail & Related papers (2021-04-22T16:11:22Z)
- Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and its ground-truth reachable workspace.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z)
- Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communication.
Results show that the explanations generated by our approach significantly improve collaboration performance and user perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.