Model Elicitation through Direct Questioning
- URL: http://arxiv.org/abs/2011.12262v1
- Date: Tue, 24 Nov 2020 18:17:16 GMT
- Title: Model Elicitation through Direct Questioning
- Authors: Sachin Grover, David Smith, Subbarao Kambhampati
- Abstract summary: We show how a robot can interact to localize the human model from a set of models.
We show how to generate questions to refine the robot's understanding of the teammate's model.
- Score: 22.907680615911755
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The future will be replete with scenarios where humans and robots will be
working together in complex environments. Teammates interact, and the robot's
interaction has to be about getting useful information about the human's
(teammate's) model. There are many challenges before a robot can interact, such
as incorporating the structural differences in the human's model, ensuring
simpler responses, etc. In this paper, we investigate how a robot can interact
to localize the human model from a set of models. We show how to generate
questions to refine the robot's understanding of the teammate's model. We
evaluate the method in various planning domains. The evaluation shows that
these questions can be generated offline, and can help refine the model through
simple answers.
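The question-answer loop described in the abstract can be illustrated with a small sketch: pose a question, use the teammate's simple answer to prune the candidate model set, and repeat. This is not the paper's formulation (which works over full planning models and plan-level questions); here each candidate model is reduced to a hypothetical set of boolean capability features, and each question is chosen greedily to split the remaining candidates as evenly as possible.

```python
# Minimal sketch (not the paper's algorithm): localize a teammate's model from a
# candidate set by asking yes/no questions. Each candidate "model" is a hypothetical
# dict of boolean capability features; questions are chosen greedily to bisect the
# remaining candidates so that simple answers prune the set quickly.

def choose_question(candidates):
    """Pick the feature whose yes/no answer best bisects the candidate set."""
    features = {f for m in candidates for f in m}
    best, best_balance = None, None
    for f in features:
        yes = sum(1 for m in candidates if m.get(f, False))
        balance = abs(2 * yes - len(candidates))  # 0 means a perfect split
        if 0 < yes < len(candidates) and (best_balance is None or balance < best_balance):
            best, best_balance = f, balance
    return best  # None: no remaining question can distinguish the candidates

def localize(candidates, answer_fn):
    """Ask questions until the candidate set cannot be refined any further."""
    candidates = list(candidates)
    while len(candidates) > 1:
        question = choose_question(candidates)
        if question is None:
            break
        answer = answer_fn(question)  # the teammate's simple yes/no answer
        candidates = [m for m in candidates if m.get(question, False) == answer]
    return candidates

if __name__ == "__main__":
    # Three hypothetical teammate models over illustrative capabilities.
    models = [
        {"lifts_heavy": True,  "uses_ladder": False, "reads_labels": True},
        {"lifts_heavy": False, "uses_ladder": True,  "reads_labels": True},
        {"lifts_heavy": False, "uses_ladder": False, "reads_labels": False},
    ]
    true_model = models[1]
    print(localize(models, lambda q: true_model[q]))
```

Because the questions depend only on the candidate set, they can be precomputed as a decision tree, which is consistent with the abstract's point that the questions can be generated offline.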
Related papers
- Singing the Body Electric: The Impact of Robot Embodiment on User Expectations [7.408858358967414]
Users develop mental models of robots to conceptualize what kind of interactions they can have with those robots.
These conceptualizations are often formed before interactions with the robot and are based only on observing the robot's physical design.
We propose to use multimodal features of robot embodiments to predict what kinds of expectations users will have about a given robot's social and physical capabilities.
arXiv Detail & Related papers (2024-01-13T04:42:48Z) - Towards Generalizable Zero-Shot Manipulation via Translating Human Interaction Plans [58.27029676638521]
We show how passive human videos can serve as a rich source of data for learning such generalist robots.
We learn a human plan predictor that, given a current image of a scene and a goal image, predicts the future hand and object configurations.
We show that our learned system can perform over 16 manipulation skills that generalize to 40 objects.
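A hypothetical interface sketch of how such a predictor might be used downstream: the plan predictor (learned from passive human videos in the paper) is replaced by a stub, and the names, shapes, and fixed hand-to-gripper offset below are illustrative assumptions, not details from the paper.

```python
# Hypothetical pipeline sketch: a learned human plan predictor (stubbed here) maps a
# current image and a goal image to future hand poses, which are then offset into
# end-effector waypoints a robot controller could track.
from typing import Callable, List, Tuple

Pose = Tuple[float, float, float]

def translate_human_plan(
    current_image,
    goal_image,
    plan_predictor: Callable[[object, object], List[Pose]],  # stand-in for the learned model
    hand_to_gripper_offset: Pose = (0.0, 0.0, 0.05),          # illustrative fixed offset
) -> List[Pose]:
    """Predict future hand poses, then shift them into robot gripper waypoints."""
    hand_poses = plan_predictor(current_image, goal_image)
    return [tuple(h + o for h, o in zip(pose, hand_to_gripper_offset)) for pose in hand_poses]

if __name__ == "__main__":
    stub_predictor = lambda cur, goal: [(0.2, 0.0, 0.30), (0.3, 0.1, 0.25), (0.4, 0.1, 0.20)]
    print(translate_human_plan(None, None, stub_predictor))
```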
arXiv Detail & Related papers (2023-12-01T18:54:12Z) - InteRACT: Transformer Models for Human Intent Prediction Conditioned on Robot Actions [7.574421886354134]
InteRACT architecture pre-trains a conditional intent prediction model on large human-human datasets and fine-tunes on a small human-robot dataset.
We evaluate on a set of real-world collaborative human-robot manipulation tasks and show that our conditional model improves over various marginal baselines.
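The two-phase recipe summarized above can be sketched schematically. The tiny linear predictor below stands in for the paper's transformer, the datasets are synthetic placeholders, and the hyperparameters are arbitrary; only the pre-train-then-fine-tune structure reflects the summary.

```python
# Schematic sketch: pre-train a conditional intent predictor on plentiful human-human
# data, then fine-tune the same parameters on a small human-robot dataset.
import numpy as np

rng = np.random.default_rng(0)

def fit(weights, states, partner_actions, intents, lr, steps):
    """Shared least-squares training loop used for both phases."""
    X = np.hstack([states, partner_actions])           # condition on the partner's action
    for _ in range(steps):
        grad = X.T @ (X @ weights - intents) / len(X)  # mean-squared-error gradient
        weights = weights - lr * grad
    return weights

dim_s, dim_a, dim_i = 8, 4, 3
w = rng.normal(scale=0.1, size=(dim_s + dim_a, dim_i))

# Phase 1: pre-train on a large human-human corpus (synthetic stand-in).
hh_s, hh_a, hh_i = rng.normal(size=(5000, dim_s)), rng.normal(size=(5000, dim_a)), rng.normal(size=(5000, dim_i))
w = fit(w, hh_s, hh_a, hh_i, lr=1e-2, steps=200)

# Phase 2: fine-tune on a small human-robot dataset (synthetic stand-in), reusing w.
hr_s, hr_a, hr_i = rng.normal(size=(200, dim_s)), rng.normal(size=(200, dim_a)), rng.normal(size=(200, dim_i))
w = fit(w, hr_s, hr_a, hr_i, lr=1e-3, steps=100)

print("fine-tuned predictor shape:", w.shape)
```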
arXiv Detail & Related papers (2023-11-21T19:15:17Z) - Affordances from Human Videos as a Versatile Representation for Robotics [31.248842798600606]
We train a visual affordance model that estimates where and how in the scene a human is likely to interact.
The structure of these behavioral affordances directly enables the robot to perform many complex tasks.
We show the efficacy of our approach, which we call VRB, across 4 real world environments, over 10 different tasks, and 2 robotic platforms operating in the wild.
arXiv Detail & Related papers (2023-04-17T17:59:34Z) - Open-World Object Manipulation using Pre-trained Vision-Language Models [72.87306011500084]
For robots to follow instructions from people, they must be able to connect the rich semantic information in human vocabulary to their sensory observations and actions.
We develop a simple approach, which leverages a pre-trained vision-language model to extract object-identifying information.
In a variety of experiments on a real mobile manipulator, we find that MOO generalizes zero-shot to a wide range of novel object categories and environments.
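The pipeline shape can be sketched with stand-ins: a vision-language model (a stub callable here, not the actual model) grounds the object phrase from the instruction to a coarse image location, and the manipulation policy receives only that location plus the skill to run rather than the raw text. All names and the naive instruction parsing below are illustrative assumptions.

```python
# Hypothetical sketch of an open-vocabulary pick pipeline: a VLM grounds the object
# phrase, and the policy consumes only the resulting location and the skill to execute.
from typing import Callable, Tuple

def follow_instruction(
    instruction: str,
    image,
    vlm_locate: Callable[[object, str], Tuple[float, float]],        # VLM stand-in
    policy_step: Callable[[object, Tuple[float, float], str], str],  # policy stand-in
) -> str:
    # Naive parse: assume the instruction ends with the object phrase ("pick up the X").
    object_phrase = instruction.rsplit("the ", 1)[-1].strip(". ")
    location = vlm_locate(image, object_phrase)   # object-identifying information
    return policy_step(image, location, "pick")   # the policy never sees the raw text

if __name__ == "__main__":
    fake_image = [[0] * 4 for _ in range(4)]
    stub_vlm = lambda img, phrase: (0.3, 0.7)     # pretend the VLM found the object here
    stub_policy = lambda img, loc, skill: f"{skill} at {loc}"
    print(follow_instruction("pick up the unseen plush toy", fake_image, stub_vlm, stub_policy))
```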
arXiv Detail & Related papers (2023-03-02T01:55:10Z) - On Model Reconciliation: How to Reconcile When Robot Does not Know Human's Model? [0.0]
The Model Reconciliation Problem (MRP) was introduced to address issues in explainable AI planning.
Most approaches to solving MRPs assume that the robot, who needs to provide explanations, knows the human model.
We propose a dialog-based approach for computing explanations of MRPs under the assumption that the robot does not know the human model.
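A minimal sketch of such a dialog loop, under the simplifying assumption that an explanation is a set of facts from the robot's own model, revealed one at a time until the human reports that the plan makes sense; the fact ordering and the human's responses are placeholder stubs.

```python
# Minimal sketch: build an explanation through dialog instead of from a known human model.
def reconcile_by_dialog(robot_model_facts, human_accepts):
    """Reveal robot-model facts one at a time until the human accepts the plan."""
    explanation = []
    for fact in robot_model_facts:           # e.g. ordered by estimated usefulness
        if human_accepts(explanation):       # ask: "does my plan make sense now?"
            break
        explanation.append(fact)             # otherwise reveal one more fact
    return explanation

if __name__ == "__main__":
    facts = ["door-A is locked", "robot cannot climb stairs", "room-3 is dark"]
    # Stub human: the plan only makes sense once the locked door has been explained.
    human = lambda revealed: "door-A is locked" in revealed
    print(reconcile_by_dialog(facts, human))  # -> ['door-A is locked']
```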
arXiv Detail & Related papers (2022-08-05T10:48:42Z) - Robots with Different Embodiments Can Express and Influence Carefulness in Object Manipulation [104.5440430194206]
This work investigates the perception of object manipulations performed with a communicative intent by two robots.
We designed the robots' movements to communicate carefulness or not during the transportation of objects.
arXiv Detail & Related papers (2022-08-03T13:26:52Z) - Model Predictive Control for Fluid Human-to-Robot Handovers [50.72520769938633]
Planning motions that take human comfort into account is typically not part of the human-to-robot handover process.
We propose to generate smooth motions via an efficient model-predictive control framework.
We conduct human-to-robot handover experiments on a diverse set of objects with several users.
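The receding-horizon idea can be illustrated with a toy random-shooting controller: sample short action sequences, score them with a cost that trades off reaching the human's hand against motion smoothness (a crude proxy for comfort), execute only the first action, and replan. The dynamics, cost weights, and fixed hand position below are simplified placeholders, not the paper's formulation.

```python
# Toy model-predictive control loop for a point-mass "gripper" approaching a hand position.
import numpy as np

rng = np.random.default_rng(1)

def mpc_step(pos, vel, hand_pos, horizon=8, samples=256, dt=0.1):
    """Sample candidate acceleration sequences and return the first action of the best one."""
    best_cost, best_action = np.inf, np.zeros(3)
    for _ in range(samples):
        acc = rng.normal(size=(horizon, 3))           # candidate action sequence
        p, v, cost = pos.copy(), vel.copy(), 0.0
        for a in acc:
            v = v + a * dt
            p = p + v * dt
            cost += np.sum((p - hand_pos) ** 2) + 0.1 * np.sum(a ** 2)  # reach + smoothness
        if cost < best_cost:
            best_cost, best_action = cost, acc[0]
    return best_action

pos, vel = np.zeros(3), np.zeros(3)
hand = np.array([0.4, 0.2, 0.9])                      # placeholder human hand position
for _ in range(20):                                   # closed loop: execute, then replan
    action = mpc_step(pos, vel, hand)
    vel = vel + action * 0.1
    pos = pos + vel * 0.1
print("final distance to hand:", float(np.linalg.norm(pos - hand)))
```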
arXiv Detail & Related papers (2022-03-31T23:08:20Z) - Analyzing Human Models that Adapt Online [42.90591111619058]
Predictive human models often need to adapt their parameters online from human data.
This raises previously ignored safety-related questions for robots relying on these models.
We model the robot's learning algorithm as a dynamical system where the state is the current model parameter estimate and the control is the human data the robot observes.
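That viewpoint can be made concrete with a toy example: treat the learning rule itself as a dynamical system whose state is the parameter estimate and whose control input is the next piece of human data, then ask what values different data sequences can drive the estimate to. The quadratic-loss gradient step below is an illustrative learner, not the paper's setup.

```python
# Toy dynamical-system view of online adaptation: theta_{t+1} = f(theta_t, u_t), where
# the "control" u_t is the human data observed at step t.
def learner_dynamics(theta, datum, lr=0.5):
    """One gradient step on 0.5 * (theta * x - y)^2 for an observed pair (x, y)."""
    x, y = datum
    return theta - lr * (theta * x - y) * x

theta_consistent = theta_erratic = 0.0
for t in range(10):
    theta_consistent = learner_dynamics(theta_consistent, (1.0, 2.0))                   # steady behavior
    theta_erratic = learner_dynamics(theta_erratic, (1.0, 2.0 if t % 2 == 0 else -2.0))  # switching behavior
print(round(theta_consistent, 3), round(theta_erratic, 3))
```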
arXiv Detail & Related papers (2021-03-09T22:38:46Z) - Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communication.
Results show that the explanations generated by our approach significantly improve collaboration performance and user perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z) - Learning Predictive Models From Observation and Interaction [137.77887825854768]
Learning predictive models from interaction with the world allows an agent, such as a robot, to learn about how the world works.
However, learning a model that captures the dynamics of complex skills represents a major challenge.
We propose a method to augment the training set with observational data of other agents, such as humans.
arXiv Detail & Related papers (2019-12-30T01:10:41Z)