Generating Active Explicable Plans in Human-Robot Teaming
- URL: http://arxiv.org/abs/2109.08834v1
- Date: Sat, 18 Sep 2021 05:05:50 GMT
- Title: Generating Active Explicable Plans in Human-Robot Teaming
- Authors: Akkamahadevi Hanni and Yu Zhang
- Abstract summary: It is important for robots to behave explicably by meeting the human's expectations.
Existing approaches to generating explicable plans often assume that the human's expectations are known and static.
We apply a Bayesian approach to model and predict dynamic human belief and expectations to make explicable planning more anticipatory.
- Score: 4.657875410615595
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Intelligent robots are redefining a multitude of critical domains but are
still far from being fully capable of assisting human peers in day-to-day
tasks. An important requirement of collaboration is for each teammate to
maintain and respect an understanding of the others' expectations of it. A
lack of such understanding may lead to serious issues such as loose
coordination between teammates, reduced situation awareness, and ultimately
teaming failures. Hence,
it is important for robots to behave explicably by meeting the human's
expectations. One of the challenges here is that the expectations of the human
are often hidden and can change dynamically as the human interacts with the
robot. However, existing approaches to generating explicable plans often assume
that the human's expectations are known and static. In this paper, we propose
the idea of active explicable planning to relax this assumption. We apply a
Bayesian approach to model and predict dynamic human belief and expectations to
make explicable planning more anticipatory. We hypothesize that active
explicable plans can be more efficient and explicable at the same time, when
compared to explicable plans generated by the existing methods. In our
experimental evaluation, we verify that our approach generates more efficient
explicable plans while successfully capturing the dynamic belief change of the
human teammate.
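The Bayesian treatment of the human's dynamic belief described in the abstract can be sketched as a recursive posterior update over candidate expectations the human may hold about the robot. The candidate hypotheses, prior, and likelihood model below are illustrative assumptions for the sketch, not the paper's actual formulation.

```python
def update_belief(prior, likelihoods):
    """One Bayesian update step: posterior over candidate expectations,
    given the likelihood of the latest observed robot action under each
    hypothesis. Both arguments map hypothesis -> probability/likelihood."""
    unnormalized = {h: prior[h] * likelihoods[h] for h in prior}
    z = sum(unnormalized.values())
    return {h: p / z for h, p in unnormalized.items()}

# Hypothetical example: the human holds one of two expectations about
# which route the robot will take.
prior = {"short_route": 0.5, "safe_route": 0.5}

# The robot is observed moving toward the safe corridor; that action is
# better explained by the safe-route hypothesis.
likelihoods = {"short_route": 0.2, "safe_route": 0.8}

posterior = update_belief(prior, likelihoods)
print(posterior)  # {'short_route': 0.2, 'safe_route': 0.8}
```

Repeating this update after each observed action lets a planner anticipate how the human's expectations drift during the interaction, which is the kind of dynamic belief tracking the abstract refers to.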
Related papers
- An Epistemic Human-Aware Task Planner which Anticipates Human Beliefs and Decisions [8.309981857034902]
The aim is to build a robot policy that accounts for uncontrollable human behaviors.
We propose a novel planning framework and build a solver based on AND-OR search.
Preliminary experiments in two domains, one novel and one adapted, demonstrate the effectiveness of the framework.
arXiv Detail & Related papers (2024-09-27T08:27:36Z) - Commonsense Reasoning for Legged Robot Adaptation with Vision-Language Models [81.55156507635286]
Legged robots are physically capable of navigating diverse environments and overcoming a wide range of obstructions.
Current learning methods often struggle with generalization to the long tail of unexpected situations without heavy human supervision.
We propose a system, VLM-Predictive Control (VLM-PC), combining two key components that we find to be crucial for eliciting on-the-fly, adaptive behavior selection.
arXiv Detail & Related papers (2024-07-02T21:00:30Z) - Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots [119.55240471433302]
Habitat 3.0 is a simulation platform for studying collaborative human-robot tasks in home environments.
It addresses challenges in modeling complex deformable bodies and diversity in appearance and motion.
Human-in-the-loop infrastructure enables real human interaction with simulated robots via mouse/keyboard or a VR interface.
arXiv Detail & Related papers (2023-10-19T17:29:17Z) - Learning Vision-based Pursuit-Evasion Robot Policies [54.52536214251999]
We develop a fully-observable robot policy that generates supervision for a partially-observable one.
We deploy our policy on a physical quadruped robot with an RGB-D camera on pursuit-evasion interactions in the wild.
arXiv Detail & Related papers (2023-08-30T17:59:05Z) - RoboPianist: Dexterous Piano Playing with Deep Reinforcement Learning [61.10744686260994]
We introduce RoboPianist, a system that enables simulated anthropomorphic hands to learn an extensive repertoire of 150 piano pieces.
We additionally introduce an open-sourced environment, benchmark of tasks, interpretable evaluation metrics, and open challenges for future study.
arXiv Detail & Related papers (2023-04-09T03:53:05Z) - Robust Robot Planning for Human-Robot Collaboration [11.609195090422514]
In human-robot collaboration, the objectives of the human are often unknown to the robot.
We propose an approach to automatically generate an uncertain human behavior (a policy) for each given objective function.
We also propose a robot planning algorithm that is robust to the above-mentioned uncertainties.
arXiv Detail & Related papers (2023-02-27T16:02:48Z) - Robust Planning for Human-Robot Joint Tasks with Explicit Reasoning on Human Mental State [2.8246074016493457]
We consider the human-aware task planning problem where a human-robot team is given a shared task with a known objective to achieve.
Recent approaches tackle it by modeling it as a team of independent, rational agents, where the robot plans for both agents' (shared) tasks.
We describe a novel approach to solve such problems, which models and uses execution-time observability conventions.
arXiv Detail & Related papers (2022-10-17T09:21:00Z) - AAAI SSS-22 Symposium on Closing the Assessment Loop: Communicating Proficiency and Intent in Human-Robot Teaming [4.787322716745613]
How should a robot convey predicted ability on a new task?
How should a robot adapt its proficiency criteria based on human intentions and values?
There are no agreed-upon standards for evaluating proficiency and intent-based interactions.
arXiv Detail & Related papers (2022-04-05T18:28:01Z) - Trust-Aware Planning: Modeling Trust Evolution in Longitudinal Human-Robot Interaction [21.884895329834112]
We propose a computational model for capturing and modulating trust in longitudinal human-robot interaction.
In our model, the robot integrates human's trust and their expectations from the robot into its planning process to build and maintain trust over the interaction horizon.
arXiv Detail & Related papers (2021-05-03T23:38:34Z) - Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communications.
Results show that the explanations generated by our approach significantly improve collaboration performance and user perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z) - Human Grasp Classification for Reactive Human-to-Robot Handovers [50.91803283297065]
We propose an approach for human-to-robot handovers in which the robot meets the human halfway.
We collect a human grasp dataset which covers typical ways of holding objects with various hand shapes and poses.
We present a planning and execution approach that takes the object from the human hand according to the detected grasp and hand position.
arXiv Detail & Related papers (2020-03-12T19:58:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.