On Model Reconciliation: How to Reconcile When Robot Does not Know Human's Model?
- URL: http://arxiv.org/abs/2208.03091v1
- Date: Fri, 5 Aug 2022 10:48:42 GMT
- Title: On Model Reconciliation: How to Reconcile When Robot Does not Know Human's Model?
- Authors: Ho Tuan Dung (Department of Computer Science, New Mexico State
University, Las Cruces, USA), Tran Cao Son (Department of Computer Science,
New Mexico State University, Las Cruces, USA)
- Abstract summary: The Model Reconciliation Problem (MRP) was introduced to address issues in explainable AI planning.
Most approaches to solving MRPs assume that the robot, which needs to provide explanations, knows the human model.
We propose a dialog-based approach for computing explanations of MRPs under the assumption that the robot does not know the human model.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Model Reconciliation Problem (MRP) was introduced to address issues in
explainable AI planning. A solution to an MRP is an explanation for the
differences between the models of the human and the planning agent (robot).
Most approaches to solving MRPs assume that the robot, who needs to provide
explanations, knows the human model. This assumption is unrealistic in several
situations (e.g., the human might decide to update her model and the robot is
unaware of the updates).
In this paper, we propose a dialog-based approach for computing explanations
of MRPs under the assumptions that (i) the robot does not know the human model;
(ii) the human and the robot share the set of predicates of the planning domain
and their exchanges are about action descriptions and fluents' values; (iii)
communication between the parties is perfect; and (iv) the parties are
truthful. A solution of an MRP is computed through a dialog, defined as a
sequence of rounds of exchanges, between the robot and the human. In each
round, the robot sends a potential explanation, called proposal, to the human
who replies with her evaluation of the proposal, called response. We develop
algorithms for computing proposals by the robot and responses by the human and
implement these algorithms in a system that combines imperative means with
answer set programming using the multi-shot feature of clingo.
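The round-based protocol described above can be sketched as a simple loop. This is an illustrative Python sketch only, with hypothetical names (`human_response`, `reconcile`); the paper's actual system combines imperative code with answer set programming via clingo's multi-shot solving, not plain Python.

```python
# Hypothetical sketch of the dialog protocol: in each round the robot sends
# a proposal (a candidate explanation, modeled here as a set of model-update
# statements) and the human replies with a response listing the statements
# she disagrees with. An empty response means the proposal is accepted.

def human_response(proposal, human_model):
    """Human's evaluation: the subset of the proposal that conflicts
    with her own model (empty set = acceptance)."""
    return {stmt for stmt in proposal if stmt not in human_model}

def reconcile(candidate_proposals, human_model):
    """Run rounds of proposal/response until the human accepts a proposal.

    Returns (round_number, accepted_proposal), or (None, None) if no
    candidate is accepted. A real implementation would also use each
    response to prune the remaining candidates."""
    for round_no, proposal in enumerate(candidate_proposals, start=1):
        response = human_response(proposal, human_model)
        if not response:  # empty response: the human accepts this proposal
            return round_no, proposal
    return None, None
```

For example, if the human's model contains statements `{"a", "b"}` and the robot tries proposals `[{"c"}, {"a"}]` in order, the first proposal is rejected (the human disputes `"c"`) and the second is accepted in round 2.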
Related papers
- $π_0$: A Vision-Language-Action Flow Model for General Robot Control [77.32743739202543]
We propose a novel flow matching architecture built on top of a pre-trained vision-language model (VLM) to inherit Internet-scale semantic knowledge.
We evaluate our model on its ability to perform tasks zero-shot after pre-training, to follow language instructions from people, and to acquire new skills via fine-tuning.
arXiv Detail & Related papers (2024-10-31T17:22:30Z) - Automated Process Planning Based on a Semantic Capability Model and SMT [50.76251195257306]
In research of manufacturing systems and autonomous robots, the term capability is used for a machine-interpretable specification of a system function.
We present an approach that combines these two topics: starting from a semantic capability model, an AI planning problem is automatically generated.
arXiv Detail & Related papers (2023-12-14T10:37:34Z) - InteRACT: Transformer Models for Human Intent Prediction Conditioned on Robot Actions [7.574421886354134]
InteRACT architecture pre-trains a conditional intent prediction model on large human-human datasets and fine-tunes on a small human-robot dataset.
We evaluate on a set of real-world collaborative human-robot manipulation tasks and show that our conditional model improves over various marginal baselines.
arXiv Detail & Related papers (2023-11-21T19:15:17Z) - SOCRATES: Text-based Human Search and Approach using a Robot Dog [6.168521568443759]
We propose a SOCratic model for Robots Approaching humans based on TExt System (SOCRATES).
We first present a Human Search Socratic Model that connects large pre-trained models in the language domain to solve the downstream task.
Then, we propose a hybrid learning-based framework for generating target-cordial robotic motion to approach a person.
arXiv Detail & Related papers (2023-02-10T15:35:24Z) - Continuous ErrP detections during multimodal human-robot interaction [2.5199066832791535]
We implement a multimodal human-robot interaction (HRI) scenario, in which a simulated robot communicates with its human partner through speech and gestures.
The human partner, in turn, evaluates whether the robot's verbal announcement (intention) matches the action (pointing gesture) chosen by the robot.
Intrinsic evaluations of robot actions by humans, evident in the EEG, were recorded in real time, continuously segmented online, and classified asynchronously.
arXiv Detail & Related papers (2022-07-25T15:39:32Z) - Model Predictive Control for Fluid Human-to-Robot Handovers [50.72520769938633]
Planning motions that take human comfort into account is typically not part of the human-robot handover process.
We propose to generate smooth motions via an efficient model-predictive control framework.
We conduct human-to-robot handover experiments on a diverse set of objects with several users.
arXiv Detail & Related papers (2022-03-31T23:08:20Z) - REvolveR: Continuous Evolutionary Models for Robot-to-robot Policy Transfer [57.045140028275036]
We consider the problem of transferring a policy across two different robots with significantly different parameters such as kinematics and morphology.
Existing approaches that train a new policy by matching the action or state transition distribution, including imitation learning methods, fail due to optimal action and/or state distribution being mismatched in different robots.
We propose a novel method named $REvolveR$ of using continuous evolutionary models for robotic policy transfer implemented in a physics simulator.
arXiv Detail & Related papers (2022-02-10T18:50:25Z) - Dynamically Switching Human Prediction Models for Efficient Planning [32.180808286226075]
We give the robot access to a suite of human models and enable it to assess the performance-computation trade-off online.
Our experiments in a driving simulator showcase how the robot can achieve performance comparable to always using the best human model.
arXiv Detail & Related papers (2021-03-13T23:48:09Z) - Model Elicitation through Direct Questioning [22.907680615911755]
We show how a robot can interact to localize the human model from a set of models.
We show how to generate questions to refine the robot's understanding of the teammate's model.
arXiv Detail & Related papers (2020-11-24T18:17:16Z) - Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communications.
Results show that the explanations generated by our approach significantly improve the collaboration performance and user perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z) - Joint Inference of States, Robot Knowledge, and Human (False-)Beliefs [90.20235972293801]
Aiming to understand how human (false-)belief, a core socio-cognitive ability, affects human interactions with robots, this paper proposes a graphical model to represent object states, robot knowledge, and human (false-)beliefs.
An inference algorithm is derived to fuse individual parse graphs (pg) from all robots across multiple views into a joint pg, which affords more effective reasoning and overcomes errors originating from a single view.
arXiv Detail & Related papers (2020-04-25T23:02:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.