Joint Mind Modeling for Explanation Generation in Complex Human-Robot
Collaborative Tasks
- URL: http://arxiv.org/abs/2007.12803v1
- Date: Fri, 24 Jul 2020 23:35:03 GMT
- Title: Joint Mind Modeling for Explanation Generation in Complex Human-Robot
Collaborative Tasks
- Authors: Xiaofeng Gao, Ran Gong, Yizhou Zhao, Shu Wang, Tianmin Shu, Song-Chun
Zhu
- Abstract summary: We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communication.
Results show that the explanations generated by our approach significantly improve collaboration performance and the user's perception of the robot.
- Score: 83.37025218216888
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human collaborators can effectively communicate with their partners to finish
a common task by inferring each other's mental states (e.g., goals, beliefs,
and desires). Such mind-aware communication minimizes the discrepancy among
collaborators' mental states and is crucial to success in human ad-hoc
teaming. We believe that robots collaborating with human users should
demonstrate similar pedagogic behavior. Thus, in this paper, we propose a novel
explainable AI (XAI) framework for achieving human-like communication in
human-robot collaborations, where the robot builds a hierarchical mind model of
the human user and generates explanations of its own mind as a form of
communication, based on its online Bayesian inference of the user's mental
state. To evaluate our framework, we conduct a user study on a real-time
human-robot cooking task. Experimental results show that the generated
explanations from our approach significantly improve collaboration
performance and user perception of the robot. Code and video demos are
available on our project website: https://xfgao.github.io/xCookingWeb/.
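The abstract describes the robot's online Bayesian inference of the user's mental state only at a high level. As a rough illustration of the kind of belief update involved, the sketch below maintains a posterior over a few hypothetical cooking sub-goals given observed user actions. The goal set, action names, and likelihood table are invented for this example and are not taken from the paper; the authors' actual implementation is linked from the project website above.

```python
# Minimal sketch of online Bayesian goal inference (illustrative only).
# The goals, actions, and likelihoods below are made-up assumptions, not
# the paper's hierarchical mind model.
import numpy as np

GOALS = ["chop_vegetables", "cook_soup", "plate_dish"]  # hypothetical sub-goals

def likelihood(action, goal):
    """P(action | goal): how consistent an observed user action is with a goal.
    A hand-crafted toy table stands in for the paper's task model."""
    table = {
        ("pick_knife", "chop_vegetables"): 0.80,
        ("pick_knife", "cook_soup"): 0.15,
        ("pick_knife", "plate_dish"): 0.05,
        ("pick_pot", "chop_vegetables"): 0.10,
        ("pick_pot", "cook_soup"): 0.80,
        ("pick_pot", "plate_dish"): 0.10,
    }
    return table.get((action, goal), 1.0 / len(GOALS))

def update_belief(belief, action):
    """One step of Bayes' rule: posterior = likelihood * prior, then normalize."""
    posterior = np.array([likelihood(action, g) * b for g, b in zip(GOALS, belief)])
    return posterior / posterior.sum()

belief = np.ones(len(GOALS)) / len(GOALS)  # uniform prior over the user's goal
for observed_action in ["pick_knife", "pick_knife"]:
    belief = update_belief(belief, observed_action)
    print(dict(zip(GOALS, belief.round(3))))
    # When the robot's own plan presumes a goal that diverges from the user's
    # most likely goal, that mismatch is a natural point to generate an
    # explanation of the robot's mind, as the abstract describes.
```

The abstract frames explanations as a way to reduce the discrepancy between collaborators' mental states; the comment at the end of the loop only gestures at where such a trigger could sit in this toy setup.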
Related papers
- HARMONIC: Cognitive and Control Collaboration in Human-Robotic Teams [0.0]
We demonstrate a cognitive strategy for robots in human-robot teams that incorporates metacognition, natural language communication, and explainability.
The system is embodied using the HARMONIC architecture that flexibly integrates cognitive and control capabilities.
arXiv Detail & Related papers (2024-09-26T16:48:21Z)
- Real-time Addressee Estimation: Deployment of a Deep-Learning Model on the iCub Robot [52.277579221741746]
Addressee Estimation is a skill essential for social robots to interact smoothly with humans.
Inspired by human perceptual skills, a deep-learning model for Addressee Estimation is designed, trained, and deployed on an iCub robot.
The study presents the procedure of such implementation and the performance of the model deployed in real-time human-robot interaction.
arXiv Detail & Related papers (2023-11-09T13:01:21Z)
- Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots [119.55240471433302]
Habitat 3.0 is a simulation platform for studying collaborative human-robot tasks in home environments.
It addresses challenges in modeling complex deformable bodies and diversity in appearance and motion.
Human-in-the-loop infrastructure enables real human interaction with simulated robots via mouse/keyboard or a VR interface.
arXiv Detail & Related papers (2023-10-19T17:29:17Z)
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z)
- CASPER: Cognitive Architecture for Social Perception and Engagement in Robots [0.5918643136095765]
We present CASPER: a symbolic cognitive architecture that uses qualitative spatial reasoning to anticipate the pursued goal of another agent and to calculate the best collaborative behavior.
We have tested this architecture in a simulated kitchen environment, and the collected results show that the robot is able both to recognize an ongoing goal and to collaborate properly towards its achievement.
arXiv Detail & Related papers (2022-09-01T10:15:03Z)
- Body Gesture Recognition to Control a Social Robot [5.557794184787908]
We propose a gesture-based language to allow humans to interact with robots using their body in a natural way.
We have created a new gesture detection model using neural networks and a custom dataset of humans performing a set of body gestures to train our network.
arXiv Detail & Related papers (2022-06-15T13:49:22Z)
- MindCraft: Theory of Mind Modeling for Situated Dialogue in Collaborative Tasks [2.5725755841426623]
Theory of mind plays an important role in maintaining common ground during human collaboration and communication.
We introduce a fine-grained dataset of collaborative tasks performed by pairs of human subjects in the 3D virtual blocks world of Minecraft.
It provides information that captures partners' beliefs of the world and of each other as an interaction unfolds.
arXiv Detail & Related papers (2021-09-13T19:26:19Z)
- Co-GAIL: Learning Diverse Strategies for Human-Robot Collaboration [51.268988527778276]
We present a method for learning a human-robot collaboration policy from human-human collaboration demonstrations.
Our method co-optimizes a human policy and a robot policy in an interactive learning process.
arXiv Detail & Related papers (2021-08-13T03:14:43Z)
- Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground truth of its reachable workspace (a toy reachability check illustrating the underlying idea appears after this list).
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaboration.
arXiv Detail & Related papers (2021-03-06T09:14:30Z)
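As noted in the entry above, the following is a toy planar reachability check that conveys the intuition behind calibrating a user's sense of a robot's reachable workspace. The two-link arm, its link lengths, and the test points are assumptions made for illustration; this is not a reproduction of REMP.

```python
# Toy reachable-workspace check for a planar 2-link arm (illustrative only;
# the link lengths and test points are made up).
import math

LINK1, LINK2 = 0.35, 0.30  # hypothetical upper-arm and forearm lengths (metres)

def reachable(x, y, base=(0.0, 0.0)):
    """A point is reachable by a 2-link planar arm iff its distance from the
    base lies between |LINK1 - LINK2| (folded) and LINK1 + LINK2 (extended)."""
    d = math.hypot(x - base[0], y - base[1])
    return abs(LINK1 - LINK2) <= d <= LINK1 + LINK2

print(reachable(0.5, 0.2))  # True: about 0.54 m from the base, inside the annulus
print(reachable(0.9, 0.4))  # False: about 0.98 m, beyond full extension
```

Comparing a non-expert user's guesses against ground-truth checks of this kind is the sort of gap that a short calibration procedure aims to close.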