Towards Inferring Users' Impressions of Robot Performance in Navigation Scenarios
- URL: http://arxiv.org/abs/2310.11590v1
- Date: Tue, 17 Oct 2023 21:12:32 GMT
- Title: Towards Inferring Users' Impressions of Robot Performance in Navigation Scenarios
- Authors: Qiping Zhang, Nathan Tsoi, Booyeon Choi, Jie Tan, Hao-Tien Lewis Chiang, Marynel Vázquez
- Abstract summary: We study the possibility of predicting people's impressions of robot behavior using non-verbal behavioral cues and machine learning techniques.
Results show that facial expressions alone provide useful information about human impressions of robot performance.
We provide guidelines for implementing these prediction models in real-world navigation scenarios.
- Score: 7.657890824144234
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Human impressions of robot performance are often measured through surveys. As
a more scalable and cost-effective alternative, we study the possibility of
predicting people's impressions of robot behavior using non-verbal behavioral
cues and machine learning techniques. To this end, we first contribute the SEAN
TOGETHER Dataset consisting of observations of an interaction between a person
and a mobile robot in a Virtual Reality simulation, together with impressions
of robot performance provided by users on a 5-point scale. Second, we
contribute analyses of how well humans and supervised learning techniques can
predict perceived robot performance based on different combinations of
observation types (e.g., facial, spatial, and map features). Our results show
that facial expressions alone provide useful information about human
impressions of robot performance; but in the navigation scenarios we tested,
spatial features are the most critical piece of information for this inference
task. Also, when evaluating results as binary classification (rather than
multiclass classification), the F1-Score of human predictions and machine
learning models more than doubles, showing that both are better at telling the
directionality of robot performance than predicting exact performance ratings.
Based on our findings, we provide guidelines for implementing these prediction
models in real-world navigation scenarios.
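As a rough illustration of the evaluation described in the abstract, the sketch below trains a classifier on concatenated facial and spatial features to predict 5-point performance ratings, then re-evaluates the same problem as binary classification around the scale midpoint. All data, feature dimensions, the midpoint threshold of 3, and the random-forest model are illustrative assumptions, not the paper's actual pipeline or the SEAN TOGETHER Dataset.

```python
# Hedged sketch: multiclass vs. binary evaluation of perceived robot
# performance. Features, labels, threshold, and model are stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in observations: hypothetical facial-expression features plus
# spatial features (e.g., robot-person distance, relative heading).
n = 500
facial = rng.normal(size=(n, 8))
spatial = rng.normal(size=(n, 4))
X = np.hstack([facial, spatial])
ratings = rng.integers(1, 6, size=n)  # 5-point performance ratings (1..5)

X_tr, X_te, y_tr, y_te = train_test_split(X, ratings, random_state=0)

# Multiclass: predict the exact 1-5 rating.
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
f1_multi = f1_score(y_te, clf.predict(X_te), average="macro")


def to_binary(y):
    # Directionality only: rating above vs. at-or-below the midpoint of 3.
    return (y > 3).astype(int)


clf_bin = RandomForestClassifier(random_state=0).fit(X_tr, to_binary(y_tr))
f1_bin = f1_score(to_binary(y_te), clf_bin.predict(X_te))

print(f"multiclass macro-F1: {f1_multi:.2f}, binary F1: {f1_bin:.2f}")
```

With real labels, comparing the macro-F1 of the 5-way task against the binary F1 would surface the kind of gap the abstract reports: the directionality of an impression is easier to recover than the exact rating.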
Related papers
- Learning Manipulation by Predicting Interaction [85.57297574510507]
We propose a general pre-training pipeline that learns Manipulation by Predicting Interaction (MPI).
The experimental results demonstrate that MPI improves on the previous state of the art by 10% to 64% on real-world robot platforms.
arXiv Detail & Related papers (2024-06-01T13:28:31Z)
- Real-time Addressee Estimation: Deployment of a Deep-Learning Model on the iCub Robot [52.277579221741746]
Addressee Estimation is a skill essential for social robots to interact smoothly with humans.
Inspired by human perceptual skills, a deep-learning model for Addressee Estimation is designed, trained, and deployed on an iCub robot.
The study presents the implementation procedure and the performance of the model deployed in real-time human-robot interaction.
arXiv Detail & Related papers (2023-11-09T13:01:21Z)
- What Matters to You? Towards Visual Representation Alignment for Robot Learning [81.30964736676103]
When operating in service of people, robots need to optimize rewards aligned with end-user preferences.
We propose Representation-Aligned Preference-based Learning (RAPL), a method for solving the visual representation alignment problem.
arXiv Detail & Related papers (2023-10-11T23:04:07Z)
- Exploring Visual Pre-training for Robot Manipulation: Datasets, Models and Methods [14.780597545674157]
We investigate the effects of visual pre-training strategies on robot manipulation tasks from three fundamental perspectives.
We propose a visual pre-training scheme for robot manipulation termed Vi-PRoM, which combines self-supervised learning and supervised learning.
arXiv Detail & Related papers (2023-08-07T14:24:52Z)
- Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z)
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning both to do and to undo it, while inferring the reward function from demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z)
- Few-Shot Visual Grounding for Natural Human-Robot Interaction [0.0]
We propose a software architecture that segments a target object, indicated verbally by a human user, from a crowded scene.
At the core of our system, we employ a multi-modal deep neural network for visual grounding.
We evaluate the performance of the proposed model on real RGB-D data collected from public scene datasets.
arXiv Detail & Related papers (2021-03-17T15:24:02Z)
- Learning Predictive Models From Observation and Interaction [137.77887825854768]
Learning predictive models from interaction with the world allows an agent, such as a robot, to learn about how the world works.
However, learning a model that captures the dynamics of complex skills represents a major challenge.
We propose a method to augment the training set with observational data of other agents, such as humans.
arXiv Detail & Related papers (2019-12-30T01:10:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.