Using High-Level Patterns to Estimate How Humans Predict a Robot will Behave
- URL: http://arxiv.org/abs/2409.13533v1
- Date: Fri, 20 Sep 2024 14:23:05 GMT
- Title: Using High-Level Patterns to Estimate How Humans Predict a Robot will Behave
- Authors: Sagar Parekh, Lauren Bramblett, Nicola Bezzo, Dylan P. Losey
- Abstract summary: A human interacting with a robot often forms predictions of what the robot will do next.
We develop a second-order theory of mind approach that enables robots to estimate how humans predict they will behave.
- Score: 13.794132035382269
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A human interacting with a robot often forms predictions of what the robot will do next. For instance, based on the recent behavior of an autonomous car, a nearby human driver might predict that the car is going to remain in the same lane. It is important for the robot to understand the human's prediction for safe and seamless interaction: e.g., if the autonomous car knows the human thinks it is not merging -- but the autonomous car actually intends to merge -- then the car can adjust its behavior to prevent an accident. Prior works typically assume that humans make precise predictions of robot behavior. However, recent research on human-human prediction suggests the opposite: humans tend to approximate other agents by predicting their high-level behaviors. We apply this finding to develop a second-order theory of mind approach that enables robots to estimate how humans predict they will behave. To extract these high-level predictions directly from data, we embed the recent human and robot trajectories into a discrete latent space. Each element of this latent space captures a different type of behavior (e.g., merging in front of the human, remaining in the same lane) and decodes into a vector field across the state space that is consistent with the underlying behavior type. We hypothesize that our resulting high-level and coarse predictions of robot behavior will correspond to actual human predictions. We provide initial evidence in support of this hypothesis through a proof-of-concept user study.
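For a concrete picture of the pipeline the abstract describes, below is a minimal, hypothetical sketch of its two pieces: an encoder that maps the recent human and robot trajectories to a discrete behavior code, and a decoder that turns that code into a vector field over the state space. This is a sketch under assumptions, not the paper's implementation; the class names, layer sizes, and the Gumbel-softmax relaxation are illustrative choices.

```python
# Minimal sketch of the discrete-latent behavior model described in the abstract.
# All names and dimensions are illustrative assumptions, not details from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BehaviorEncoder(nn.Module):
    """Embed recent human + robot trajectories into a discrete behavior code."""
    def __init__(self, traj_dim: int, n_behaviors: int = 8, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(traj_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_behaviors),   # logits over behavior types
        )

    def forward(self, traj: torch.Tensor) -> torch.Tensor:
        logits = self.net(traj)
        # Differentiable approximation of a one-hot behavior code.
        return F.gumbel_softmax(logits, tau=1.0, hard=True)

class VectorFieldDecoder(nn.Module):
    """Decode a behavior code into a vector field over the robot's state space."""
    def __init__(self, state_dim: int, n_behaviors: int = 8, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + n_behaviors, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),     # predicted velocity at each query state
        )

    def forward(self, code: torch.Tensor, states: torch.Tensor) -> torch.Tensor:
        # code: (B, n_behaviors), states: (B, N, state_dim)
        code = code.unsqueeze(1).expand(-1, states.shape[1], -1)
        return self.net(torch.cat([states, code], dim=-1))

# Usage: the decoded vector field stands in for the human's coarse prediction of
# where the robot will go next from any nearby state.
encoder = BehaviorEncoder(traj_dim=40)
decoder = VectorFieldDecoder(state_dim=4)
traj = torch.randn(1, 40)             # flattened recent human + robot trajectory
query_states = torch.randn(1, 16, 4)  # states at which to evaluate the field
behavior_code = encoder(traj)         # one-hot-like behavior type
predicted_field = decoder(behavior_code, query_states)
```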
Related papers
- Predicting Human Impressions of Robot Performance During Navigation Tasks [8.01980632893357]
We investigate the possibility of predicting people's impressions of robot behavior using non-verbal behavioral cues and machine learning techniques.
Results suggest that facial expressions alone provide useful information about human impressions of robot performance.
Supervised learning techniques showed promise because they outperformed humans' predictions of robot performance in most cases.
arXiv Detail & Related papers (2023-10-17T21:12:32Z)
- Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z)
- SACSoN: Scalable Autonomous Control for Social Navigation [62.59274275261392]
We develop methods for training policies for socially unobtrusive navigation.
By minimizing the counterfactual perturbation to human behavior, we can induce robots to behave in ways that do not alter the natural behavior of humans in the shared space.
We collect a large dataset where an indoor mobile robot interacts with human bystanders.
arXiv Detail & Related papers (2023-06-02T19:07:52Z)
- Learning Latent Representations to Co-Adapt to Humans [12.71953776723672]
Non-stationary humans are challenging for robot learners.
In this paper we introduce an algorithmic formalism that enables robots to co-adapt alongside dynamic humans.
arXiv Detail & Related papers (2022-12-19T16:19:24Z)
- Robots with Different Embodiments Can Express and Influence Carefulness in Object Manipulation [104.5440430194206]
This work investigates the perception of object manipulations performed with a communicative intent by two robots.
We designed the robots' movements to communicate carefulness or not during the transportation of objects.
arXiv Detail & Related papers (2022-08-03T13:26:52Z)
- Two ways to make your robot proactive: reasoning about human intentions, or reasoning about possible futures [69.03494351066846]
We investigate two ways to make robots proactive.
One way is to recognize humans' intentions and to act to fulfill them, like opening the door that you are about to walk through.
The other way is to reason about possible future threats or opportunities and to act to prevent or to foster them.
arXiv Detail & Related papers (2022-05-11T13:33:14Z)
- Probabilistic Human Motion Prediction via A Bayesian Neural Network [71.16277790708529]
We propose a probabilistic model for human motion prediction in this paper.
Our model can generate several future motions when given an observed motion sequence.
We extensively validate our approach on the large-scale benchmark dataset Human3.6M.
arXiv Detail & Related papers (2021-07-14T09:05:33Z)
- Drivers' Manoeuvre Modelling and Prediction for Safe HRI [0.0]
Theory of Mind has been broadly explored for robotics and recently for autonomous and semi-autonomous vehicles.
We explored how to predict human intentions before an action is performed by combining data from human motion, vehicle state, and human inputs.
arXiv Detail & Related papers (2021-06-03T10:07:55Z)
- Dynamically Switching Human Prediction Models for Efficient Planning [32.180808286226075]
We give the robot access to a suite of human models and enable it to assess the performance-computation trade-off online.
Our experiments in a driving simulator showcase how the robot can achieve performance comparable to always using the best human model.
arXiv Detail & Related papers (2021-03-13T23:48:09Z)
- Minimizing Robot Navigation-Graph For Position-Based Predictability By Humans [20.13307800821161]
In situations where humans and robots are moving in the same space whilst performing their own tasks, predictable paths are vital.
The cognitive effort for the human to predict the robot's path becomes untenable as the number of robots increases.
We propose to minimize the navigation-graph of the robot for position-based predictability.
arXiv Detail & Related papers (2020-10-28T22:09:10Z)
- Human Grasp Classification for Reactive Human-to-Robot Handovers [50.91803283297065]
We propose an approach for human-to-robot handovers in which the robot meets the human halfway.
We collect a human grasp dataset which covers typical ways of holding objects with various hand shapes and poses.
We present a planning and execution approach that takes the object from the human hand according to the detected grasp and hand position.
arXiv Detail & Related papers (2020-03-12T19:58:03Z)