Natural Language Interaction to Facilitate Mental Models of Remote
Robots
- URL: http://arxiv.org/abs/2003.05870v1
- Date: Thu, 12 Mar 2020 16:03:27 GMT
- Title: Natural Language Interaction to Facilitate Mental Models of Remote
Robots
- Authors: Francisco J. Chiyah Garcia, José Lopes, Helen Hastie
- Abstract summary: High-stakes scenarios require robot operators to have clear mental models of what the robots can and can't do.
We propose that interaction with a conversational assistant, who acts as a mediator, can help the user with understanding the functionality of remote robots.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Increasingly complex and autonomous robots are being deployed in real-world
environments with far-reaching consequences. High-stakes scenarios, such as
emergency response or offshore energy platform and nuclear inspections, require
robot operators to have clear mental models of what the robots can and can't
do. However, operators are often not the original designers of the robots and
thus, they do not necessarily have such clear mental models, especially if they
are novice users. This lack of mental model clarity can slow adoption and can
negatively impact human-machine teaming. We propose that interaction with a
conversational assistant, who acts as a mediator, can help the user with
understanding the functionality of remote robots and increase transparency
through natural language explanations, as well as facilitate the evaluation of
operators' mental models.
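As a rough illustration of the proposed mediator role (not the authors' implementation), the sketch below answers operator questions about a remote robot from a hand-written capability registry; the skill names, wording, and `explain` helper are all hypothetical.

```python
# Minimal sketch of a conversational mediator (hypothetical; not the
# authors' system). It maps operator questions onto a hand-written
# registry of robot capabilities and answers in natural language.

CAPABILITIES = {
    "inspect valve": "I can inspect valves with my camera, but only "
                     "within tether range of the base station.",
    "open valve": "I cannot actuate valves; I can only inspect and report.",
    "return to base": "I return to base autonomously when my battery "
                      "drops below 20%.",
}

def explain(query: str) -> str:
    """Match an operator query against known skills and explain it."""
    q = query.lower()
    for skill, answer in CAPABILITIES.items():
        if skill in q:
            return answer
    return ("I am not sure the robot supports that. Known skills: "
            + ", ".join(CAPABILITIES))

print(explain("Can you open valve 3?"))
# -> I cannot actuate valves; I can only inspect and report.
```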
Related papers
- $π_0$: A Vision-Language-Action Flow Model for General Robot Control [77.32743739202543]
We propose a novel flow matching architecture built on top of a pre-trained vision-language model (VLM) to inherit Internet-scale semantic knowledge.
We evaluate our model in terms of its ability to perform tasks in zero shot after pre-training, follow language instructions from people, and its ability to acquire new skills via fine-tuning.
arXiv Detail & Related papers (2024-10-31T17:22:30Z)
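The paper's full VLM-based architecture is beyond a short sketch, but the flow-matching objective it builds on is compact. Below is the generic conditional flow-matching training step (the standard formulation, not the paper's code); the network, dimensions, and batch data are placeholders.

```python
# Generic flow-matching training step (standard objective, not pi_0's
# VLM-based model). Network, dimensions, and data are placeholders.
import torch
import torch.nn as nn

action_dim, obs_dim = 7, 128
v_net = nn.Sequential(                  # v_theta(a_t, t, obs) -> velocity
    nn.Linear(action_dim + 1 + obs_dim, 256), nn.ReLU(),
    nn.Linear(256, action_dim),
)
opt = torch.optim.Adam(v_net.parameters(), lr=1e-4)

def flow_matching_step(actions, obs):
    """Regress the velocity field along a linear noise-to-data path."""
    noise = torch.randn_like(actions)             # a_0 ~ N(0, I)
    t = torch.rand(actions.shape[0], 1)           # random time in [0, 1]
    a_t = (1 - t) * noise + t * actions           # point on the path
    target = actions - noise                      # constant path velocity
    pred = v_net(torch.cat([a_t, t, obs], dim=-1))
    loss = ((pred - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

loss = flow_matching_step(torch.randn(32, action_dim), torch.randn(32, obs_dim))
```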
- Unifying 3D Representation and Control of Diverse Robots with a Single Camera [48.279199537720714]
We introduce Neural Jacobian Fields, an architecture that autonomously learns to model and control robots from vision alone.
Our approach achieves accurate closed-loop control and recovers the causal dynamic structure of each robot.
arXiv Detail & Related papers (2024-07-11T17:55:49Z)
- LLM Granularity for On-the-Fly Robot Control [3.5015824313818578]
In circumstances where visuals become unreliable or unavailable, can we rely solely on language to control robots?
This work takes the initial steps to answer this question by: 1) evaluating the responses of assistive robots to language prompts of varying granularities; and 2) exploring the necessity and feasibility of controlling the robot on-the-fly.
arXiv Detail & Related papers (2024-06-20T18:17:48Z)
- Singing the Body Electric: The Impact of Robot Embodiment on User Expectations [7.408858358967414]
Users develop mental models of robots to conceptualize what kind of interactions they can have with those robots.
These conceptualizations are often formed before any interaction with the robot and are based only on observing its physical design.
We propose to use multimodal features of robot embodiments to predict what kinds of expectations users will have about a given robot's social and physical capabilities.
arXiv Detail & Related papers (2024-01-13T04:42:48Z)
- Exploring Large Language Models to Facilitate Variable Autonomy for Human-Robot Teaming [4.779196219827508]
We introduce a novel framework for a GPT-powered multi-robot testbed environment, based on a Unity Virtual Reality (VR) setting.
This system allows users to interact with robot agents through natural language, each powered by individual GPT cores.
A user study with 12 participants explores the effectiveness of GPT-4 and, more importantly, the strategies users adopt when given the opportunity to converse in natural language within a multi-robot environment.
arXiv Detail & Related papers (2023-12-12T12:26:48Z)
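As a toy sketch of the "one LLM core per robot" pattern this entry describes (the actual testbed is a Unity VR system and is not reproduced here), the code below routes addressed utterances to per-robot agents; `llm_complete`, the robot names, and the addressing rule are invented for illustration.

```python
# Toy sketch of per-robot LLM cores with naive utterance routing
# (hypothetical names; not the Unity/VR testbed from the paper).
from dataclasses import dataclass, field

def llm_complete(prompt: str) -> str:
    """Placeholder for a call to an LLM backend such as GPT-4."""
    return f"[reply to: {prompt.splitlines()[-1]}]"

@dataclass
class RobotAgent:
    name: str
    role: str
    history: list = field(default_factory=list)

    def handle(self, utterance: str) -> str:
        self.history.append(f"User: {utterance}")
        prompt = (f"You control robot '{self.name}' ({self.role}).\n"
                  + "\n".join(self.history))
        reply = llm_complete(prompt)
        self.history.append(f"{self.name}: {reply}")
        return reply

robots = {r.name: r for r in (RobotAgent("scout", "aerial survey"),
                              RobotAgent("lifter", "manipulation"))}

def route(utterance: str) -> str:
    """Naive addressing: 'lifter, move the crate' goes to the lifter core."""
    head = utterance.split(",", 1)[0].strip().lower()
    return robots.get(head, robots["scout"]).handle(utterance)

print(route("lifter, move the crate to bay 2"))
```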
- What Matters to You? Towards Visual Representation Alignment for Robot Learning [81.30964736676103]
When operating in service of people, robots need to optimize rewards aligned with end-user preferences.
We propose Representation-Aligned Preference-based Learning (RAPL), a method for solving the visual representation alignment problem.
arXiv Detail & Related papers (2023-10-11T23:04:07Z)
- Open-World Object Manipulation using Pre-trained Vision-Language Models [72.87306011500084]
For robots to follow instructions from people, they must be able to connect the rich semantic information in human vocabulary to their sensory observations and actions.
We develop a simple approach, Manipulation of Open-World Objects (MOO), which leverages a pre-trained vision-language model to extract object-identifying information.
In a variety of experiments on a real mobile manipulator, we find that MOO generalizes zero-shot to a wide range of novel object categories and environments.
arXiv Detail & Related papers (2023-03-02T01:55:10Z)
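MOO's exact pipeline is not reproduced here, but the core idea, using a pre-trained vision-language model to extract object-identifying information, can be sketched with off-the-shelf CLIP via Hugging Face; the crops and instruction below are placeholders.

```python
# Sketch of the general idea (not MOO itself): score candidate object
# crops against a language instruction with a pre-trained VLM (CLIP).
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def pick_object(instruction: str, crops: list) -> int:
    """Return the index of the crop that best matches the instruction."""
    inputs = processor(text=[instruction], images=crops,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_text   # shape (1, num_crops)
    return int(logits.argmax(dim=-1))

# crops = [Image.open(p) for p in candidate_crop_paths]  # from any detector
# idx = pick_object("the pink stuffed whale", crops)
```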
- Robots with Different Embodiments Can Express and Influence Carefulness in Object Manipulation [104.5440430194206]
This work investigates the perception of object manipulations performed with a communicative intent by two robots.
We designed the robots' movements to communicate carefulness or not during the transportation of objects.
arXiv Detail & Related papers (2022-08-03T13:26:52Z)
- Synthesis and Execution of Communicative Robotic Movements with Generative Adversarial Networks [59.098560311521034]
We focus on how to transfer on two different robotic platforms the same kinematics modulation that humans adopt when manipulating delicate objects.
We choose to modulate the velocity profile adopted by the robots' end-effector, inspired by what humans do when transporting objects with different characteristics.
We exploit a novel Generative Adversarial Network architecture, trained with human kinematics examples, to generalize over them and generate new and meaningful velocity profiles.
arXiv Detail & Related papers (2022-03-29T15:03:05Z)
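The paper's profiles come from a GAN trained on human kinematics, which is not reproduced here; the sketch below only illustrates the kind of end-effector speed signal being modulated, using the classic minimum-jerk model with an assumed duration stretch for "careful" transport.

```python
# Illustration of the modulated quantity only (the paper uses a GAN
# trained on human kinematics, not this closed form). Parameters are
# assumptions for the sketch.
import numpy as np

def min_jerk_velocity(distance: float, duration: float, n: int = 100):
    """Minimum-jerk speed profile for a point-to-point move."""
    s = np.linspace(0.0, 1.0, n)                 # normalized time
    # time derivative of 10s^3 - 15s^4 + 6s^5, rescaled to the move
    return distance / duration * (30 * s**2 - 60 * s**3 + 30 * s**4)

nominal = min_jerk_velocity(distance=0.5, duration=1.0)
careful = min_jerk_velocity(distance=0.5, duration=2.5)  # slower transport
print(round(nominal.max(), 3), round(careful.max(), 3))  # lower peak speed
```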
- Technical Opinion: From Animal Behaviour to Autonomous Robots [1.0660480034605242]
This paper presents a review on robot autonomy from the perspective of animal behaviour.
It examines some state-of-the-art techniques as well as suggesting possible research directions.
arXiv Detail & Related papers (2020-12-11T16:57:28Z)
- Integrating Intrinsic and Extrinsic Explainability: The Relevance of Understanding Neural Networks for Human-Robot Interaction [19.844084722919764]
Explainable artificial intelligence (XAI) can help foster trust in and acceptance of intelligent and autonomous systems.
NICO, an open-source humanoid robot platform, is introduced, and it is shown how the interplay of intrinsic explanations given by the robot itself and extrinsic explanations provided by the environment enables efficient robotic behavior.
arXiv Detail & Related papers (2020-10-09T14:28:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.