ToP-ToM: Trust-aware Robot Policy with Theory of Mind
- URL: http://arxiv.org/abs/2311.04397v1
- Date: Tue, 7 Nov 2023 23:55:56 GMT
- Title: ToP-ToM: Trust-aware Robot Policy with Theory of Mind
- Authors: Chuang Yu, Baris Serhan and Angelo Cangelosi
- Abstract summary: Theory of Mind (ToM) is a cognitive architecture that endows humans with the ability to attribute mental states to others.
This paper investigates a trust-aware robot policy with theory of mind in a multiagent setting.
- Score: 3.4850414292716327
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Theory of Mind (ToM) is a fundamental cognitive architecture that endows
humans with the ability to attribute mental states to others. Humans infer the
desires, beliefs, and intentions of others by observing their behavior and, in
turn, adjust their actions to facilitate better interpersonal communication and
team collaboration. In this paper, we investigate a trust-aware robot policy
with theory of mind in a multiagent setting where a human collaborates with a
robot against another human opponent. We show that a robot focused solely on
team performance may resort to reverse psychology, which poses a significant
threat to trust maintenance: the human's trust in the robot collapses once
they discover the robot's deceptive behavior. To mitigate this problem, we
adopt a robot theory-of-mind model to infer the human's trust beliefs,
including true belief and false belief (an essential element of ToM).
We design a dynamic trust-aware reward function based on these trust beliefs
to guide robot policy learning, aiming to balance team performance against the
risk of human trust collapse caused by robot reverse psychology. The
experimental results demonstrate the importance of a ToM-based robot policy
for human-robot trust and the effectiveness of our ToM-based policy in
multiagent interaction settings.
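The abstract describes a reward function whose trust term depends on the human's inferred belief state. The sketch below is a minimal illustration of that idea, not the authors' formulation: all weights, the penalty term, and the functional form are assumptions made for this example.

```python
def trust_aware_reward(team_reward: float,
                       trust_level: float,
                       human_has_true_belief: bool,
                       deception_used: bool,
                       w_team: float = 1.0,
                       w_trust: float = 0.5,
                       deception_penalty: float = 2.0) -> float:
    """Illustrative trust-aware reward (assumed form, not the paper's).

    The trust-related term is weighted differently depending on whether
    the human holds a true or false belief about the robot's advice, so
    the policy is discouraged from reverse psychology that would collapse
    trust once the deception is discovered.
    """
    reward = w_team * team_reward
    if human_has_true_belief and deception_used:
        # Deception under a true belief risks trust collapse: penalize
        # in proportion to how much trust is at stake.
        reward -= deception_penalty * trust_level
    else:
        # Otherwise, reward maintaining the human's trust.
        reward += w_trust * trust_level
    return reward
```

With these illustrative weights, deceiving a human who holds a true belief turns a positive team reward negative once trust is high, which is the balancing behavior the abstract describes.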
Related papers
- HumanoidBench: Simulated Humanoid Benchmark for Whole-Body Locomotion and Manipulation [50.616995671367704]
We present a high-dimensional, simulated robot learning benchmark, HumanoidBench, featuring a humanoid robot equipped with dexterous hands.
Our findings reveal that state-of-the-art reinforcement learning algorithms struggle with most tasks, whereas a hierarchical learning approach achieves superior performance when supported by robust low-level policies.
arXiv Detail & Related papers (2024-03-15T17:45:44Z) - "Do it my way!": Impact of Customizations on Trust perceptions in Human-Robot Collaboration [0.8287206589886881]
Personalization of assistive robots is positively correlated with robot adoption and user perceptions.
Our findings indicate that increased levels of customization were associated with higher trust and comfort perceptions.
arXiv Detail & Related papers (2023-10-28T19:31:40Z) - SACSoN: Scalable Autonomous Control for Social Navigation [62.59274275261392]
We develop methods for training policies for socially unobtrusive navigation.
By minimizing the counterfactual perturbation (the change a robot's presence induces in nearby humans' behavior), we can induce robots to behave in ways that do not alter the natural behavior of humans in the shared space.
We collect a large dataset where an indoor mobile robot interacts with human bystanders.
arXiv Detail & Related papers (2023-06-02T19:07:52Z) - The dynamic nature of trust: Trust in Human-Robot Interaction revisited [0.38233569758620045]
Socially assistive robots (SARs) assist humans in the real world.
Risk introduces an element of trust, so understanding human trust in the robot is imperative.
arXiv Detail & Related papers (2023-03-08T19:20:11Z) - HERD: Continuous Human-to-Robot Evolution for Learning from Human Demonstration [57.045140028275036]
We show that manipulation skills can be transferred from a human to a robot through the use of micro-evolutionary reinforcement learning.
We propose an algorithm for multi-dimensional evolution path searching that allows joint optimization of both the robot evolution path and the policy.
arXiv Detail & Related papers (2022-12-08T15:56:13Z) - Robots with Different Embodiments Can Express and Influence Carefulness in Object Manipulation [104.5440430194206]
This work investigates the perception of object manipulations performed with a communicative intent by two robots.
We designed the robots' movements to communicate carefulness or not during the transportation of objects.
arXiv Detail & Related papers (2022-08-03T13:26:52Z) - Evaluation of Performance-Trust vs Moral-Trust Violation in 3D Environment [1.4502611532302039]
We aim to design an experiment to investigate the consequences of performance-trust violation and moral-trust violation in a search and rescue scenario.
We want to see if two similar robot failures, one caused by a performance-trust violation and the other by a moral-trust violation, have distinct effects on human trust.
arXiv Detail & Related papers (2022-06-30T17:27:09Z) - Moral-Trust Violation vs Performance-Trust Violation by a Robot: Which Hurts More? [0.7373617024876725]
We study the effects of performance-trust violation and moral-trust violation separately in a search and rescue task.
We want to see whether two failures of a robot with equal magnitudes would affect human trust differently if one failure is due to a performance-trust violation and the other is a moral-trust violation.
arXiv Detail & Related papers (2021-10-09T00:32:18Z) - Trust-Aware Planning: Modeling Trust Evolution in Longitudinal Human-Robot Interaction [21.884895329834112]
We propose a computational model for capturing and modulating trust in longitudinal human-robot interaction.
In our model, the robot integrates the human's trust and their expectations of the robot into its planning process to build and maintain trust over the interaction horizon.
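A longitudinal trust model like the one this summary describes needs some update rule for trust over repeated interactions. The sketch below is a minimal assumed form for illustration only, not the paper's model: trust rises toward 1 when the robot meets the human's expectation and decays toward 0 when it does not.

```python
def update_trust(trust: float, expectation_met: bool,
                 gain: float = 0.2, loss: float = 0.4) -> float:
    """Illustrative trust-dynamics update (assumed, not the paper's model).

    The gain and loss rates are arbitrary choices for this sketch;
    loss > gain reflects the common finding that trust is easier to
    lose than to build.
    """
    if expectation_met:
        # Move trust a fraction of the way toward full trust (1.0).
        trust = trust + gain * (1.0 - trust)
    else:
        # Decay trust proportionally after a violated expectation.
        trust = trust - loss * trust
    # Keep trust in the [0, 1] interval.
    return min(max(trust, 0.0), 1.0)
```

A planner could then condition its action choices on the predicted trust trajectory, preferring actions whose expected trust outcome stays above some threshold over the interaction horizon.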
arXiv Detail & Related papers (2021-05-03T23:38:34Z) - Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communications.
Results show that the explanations generated by our approach significantly improve collaboration performance and user perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z) - Human Grasp Classification for Reactive Human-to-Robot Handovers [50.91803283297065]
We propose an approach for human-to-robot handovers in which the robot meets the human halfway.
We collect a human grasp dataset which covers typical ways of holding objects with various hand shapes and poses.
We present a planning and execution approach that takes the object from the human hand according to the detected grasp and hand position.
arXiv Detail & Related papers (2020-03-12T19:58:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.