It Takes Two: Learning to Plan for Human-Robot Cooperative Carrying
- URL: http://arxiv.org/abs/2209.12890v1
- Date: Mon, 26 Sep 2022 17:59:23 GMT
- Title: It Takes Two: Learning to Plan for Human-Robot Cooperative Carrying
- Authors: Eley Ng, Ziang Liu, Monroe Kennedy III
- Abstract summary: We present a method for predicting realistic motion plans for cooperative human-robot teams on a table-carrying task.
We use a Variational Recurrent Neural Network (VRNN) to model the variation in the trajectory of a human-robot team over time.
We show that the model generates more human-like motion compared to a baseline, centralized sampling-based planner.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Collaborative table-carrying is a complex task due to the continuous nature
of the action and state spaces, the multimodality of strategies, the existence of
obstacles in the environment, and the need for instantaneous adaptation to
other agents. In this work, we present a method for predicting realistic motion
plans for cooperative human-robot teams on a table-carrying task. Using a
Variational Recurrent Neural Network (VRNN) to model the variation in the
trajectory of a human-robot team over time, we are able to capture the
distribution over the team's future states while leveraging information from
interaction history. The key to our approach is in our model's ability to
leverage human demonstration data and generate trajectories that synergize well
with humans during test time. We show that the model generates more human-like
motion compared to a baseline, centralized sampling-based planner,
Rapidly-exploring Random Trees (RRT). Furthermore, we evaluate the VRNN planner
with a human partner and show that it both generates more human-like
paths and achieves a higher task success rate than RRT when planning with a
human. Finally, we demonstrate that a LoCoBot using the VRNN planner can
complete the task successfully with a human controlling another LoCoBot.
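The abstract describes a VRNN that samples future team states from a learned distribution while carrying forward interaction history through a recurrent hidden state. As a rough illustration of that generation loop (this is not the authors' implementation; the dimensions, the tanh MLPs standing in for learned networks, and the untrained random weights are all placeholder assumptions), one step samples a latent from a prior conditioned on the hidden state, decodes it into a predicted team state, and updates the recurrence:

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 4    # e.g. planar (x, y) positions of the two carriers (illustrative)
LATENT_DIM = 8
HIDDEN_DIM = 16

def mlp(in_dim, out_dim):
    """Random untrained tanh layer standing in for a learned network."""
    W = rng.normal(0, 0.1, (out_dim, in_dim))
    b = np.zeros(out_dim)
    return lambda v: np.tanh(W @ v + b)

prior_net = mlp(HIDDEN_DIM, 2 * LATENT_DIM)            # h_{t-1} -> (mu, logvar) of z_t
decoder_net = mlp(LATENT_DIM + HIDDEN_DIM, STATE_DIM)  # (z_t, h_{t-1}) -> x_t
recurrence = mlp(STATE_DIM + LATENT_DIM, HIDDEN_DIM)   # (x_t, z_t) -> h_t (GRU stand-in)

def rollout(h0, steps):
    """Sample a trajectory by alternating prior sampling, decoding, recurrence."""
    h, traj = h0, []
    for _ in range(steps):
        stats = prior_net(h)
        mu, logvar = stats[:LATENT_DIM], stats[LATENT_DIM:]
        z = mu + np.exp(0.5 * logvar) * rng.normal(size=LATENT_DIM)  # reparameterized sample
        x = decoder_net(np.concatenate([z, h]))   # predicted team state
        h = recurrence(np.concatenate([x, z]))    # carry interaction history forward
        traj.append(x)
    return np.stack(traj)

traj = rollout(np.zeros(HIDDEN_DIM), steps=10)
print(traj.shape)  # (10, 4)
```

Because the latent is sampled each step, repeated rollouts from the same initial hidden state yield different trajectories, which is what lets such a model capture the multimodality of team strategies.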
Related papers
- PARTNR: A Benchmark for Planning and Reasoning in Embodied Multi-agent Tasks [57.89516354418451]
We present a benchmark for Planning And Reasoning Tasks in humaN-Robot collaboration (PARTNR).
We employ a semi-automated task generation pipeline using Large Language Models (LLMs).
We analyze state-of-the-art LLMs on PARTNR tasks, across the axes of planning, perception and skill execution.
arXiv Detail & Related papers (2024-10-31T17:53:12Z)
- Grounding Language Models in Autonomous Loco-manipulation Tasks [3.8363685417355557]
We propose a novel framework that learns, selects, and plans behaviors based on tasks in different scenarios.
We leverage the planning and reasoning features of the large language model (LLM), constructing a hierarchical task graph.
Experiments in simulation and real-world using the CENTAURO robot show that the language model based planner can efficiently adapt to new loco-manipulation tasks.
arXiv Detail & Related papers (2024-09-02T15:27:48Z)
- Multi-Agent Dynamic Relational Reasoning for Social Robot Navigation [50.01551945190676]
Social robot navigation can be helpful in various contexts of daily life but requires safe human-robot interactions and efficient trajectory planning.
We propose a systematic relational reasoning approach with explicit inference of the underlying dynamically evolving relational structures.
We demonstrate its effectiveness for multi-agent trajectory prediction and social robot navigation.
arXiv Detail & Related papers (2024-01-22T18:58:22Z)
- HERD: Continuous Human-to-Robot Evolution for Learning from Human Demonstration [57.045140028275036]
We show that manipulation skills can be transferred from a human to a robot through the use of micro-evolutionary reinforcement learning.
We propose an algorithm for multi-dimensional evolution path searching that allows joint optimization of both the robot evolution path and the policy.
arXiv Detail & Related papers (2022-12-08T15:56:13Z)
- Achieving mouse-level strategic evasion performance using real-time computational planning [59.60094442546867]
Planning is an extraordinary ability in which the brain imagines and then enacts evaluated possible futures.
We develop a more efficient biologically-inspired planning algorithm, TLPPO, based on work on how the ecology of an animal governs the value of spatial planning.
We compare the performance of a real-time agent using TLPPO against the performance of live mice, all tasked with evading a robot predator.
arXiv Detail & Related papers (2022-11-04T18:34:36Z)
- MILD: Multimodal Interactive Latent Dynamics for Learning Human-Robot Interaction [34.978017200500005]
We propose Multimodal Interactive Latent Dynamics (MILD) to address the problem of two-party physical Human-Robot Interactions (HRIs).
We learn the interaction dynamics from demonstrations, using Hidden Semi-Markov Models (HSMMs) to model the joint distribution of the interacting agents in the latent space of a Variational Autoencoder (VAE).
MILD generates more accurate trajectories for the controlled agent (robot) when conditioned on the observed agent's (human) trajectory.
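Conditioning a robot's trajectory on an observed human trajectory within a jointly learned Gaussian model, as the MILD summary describes, reduces at each mixture component to standard Gaussian conditioning. A minimal sketch of that step (the joint mean and covariance below are made-up stand-ins for what one learned HSMM component might encode, not values from the paper):

```python
import numpy as np

# Joint Gaussian over concatenated (human, robot) latent features.
# Values are illustrative placeholders.
mu = np.array([0.0, 0.0, 1.0, -1.0])          # [human (2 dims), robot (2 dims)]
Sigma = np.array([[1.0, 0.2, 0.5, 0.1],
                  [0.2, 1.0, 0.1, 0.4],
                  [0.5, 0.1, 1.0, 0.2],
                  [0.1, 0.4, 0.2, 1.0]])

def condition(mu, Sigma, obs, d_obs):
    """Gaussian conditioning: robot latent given the observed human latent."""
    mu_h, mu_r = mu[:d_obs], mu[d_obs:]
    S_hh = Sigma[:d_obs, :d_obs]   # human-human block
    S_rh = Sigma[d_obs:, :d_obs]   # robot-human cross block
    S_rr = Sigma[d_obs:, d_obs:]   # robot-robot block
    K = S_rh @ np.linalg.inv(S_hh)
    mu_cond = mu_r + K @ (obs - mu_h)        # conditional mean
    S_cond = S_rr - K @ S_rh.T               # conditional covariance
    return mu_cond, S_cond

mu_r, S_r = condition(mu, Sigma, obs=np.array([0.5, -0.3]), d_obs=2)
```

The conditional mean shifts the robot's nominal latent toward whatever the human actually did, weighted by the learned cross-covariance; this is what makes the generated robot trajectory track its partner.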
arXiv Detail & Related papers (2022-10-22T11:25:11Z)
- Model Predictive Control for Fluid Human-to-Robot Handovers [50.72520769938633]
Planning motions that take human comfort into account is not a part of the human-robot handover process.
We propose to generate smooth motions via an efficient model-predictive control framework.
We conduct human-to-robot handover experiments on a diverse set of objects with several users.
arXiv Detail & Related papers (2022-03-31T23:08:20Z)
- Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground-truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z)
- Learning Interaction-Aware Trajectory Predictions for Decentralized Multi-Robot Motion Planning in Dynamic Environments [10.345048137438623]
We introduce a novel trajectory prediction model based on recurrent neural networks (RNN)
We then incorporate the trajectory prediction model into a decentralized model predictive control (MPC) framework for multi-robot collision avoidance.
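Feeding a learned trajectory prediction into a model predictive controller, as this summary describes, amounts to penalizing candidate robot motions that come too close to the predicted path of another agent. A toy one-step sketch (the predicted path, candidate action set, and cost weights are all illustrative assumptions, not the paper's MPC formulation):

```python
import numpy as np

# Toy one-step "MPC": pick the candidate robot velocity that makes the most
# progress toward a goal while keeping clearance from a predicted agent path.
GOAL = np.array([5.0, 0.0])
ROBOT = np.array([0.0, 0.0])
predicted_path = np.array([[1.0, 0.1], [1.5, 0.2], [2.0, 0.3]])  # stand-in for RNN output
SAFE_DIST = 0.8

# Unit-speed candidate headings spanning the forward half-plane.
candidates = [np.array([np.cos(a), np.sin(a)])
              for a in np.linspace(-np.pi / 2, np.pi / 2, 9)]

def cost(v):
    pos = ROBOT + v                                   # position after one step
    goal_cost = np.linalg.norm(GOAL - pos)            # progress term
    clearance = np.min(np.linalg.norm(predicted_path - pos, axis=1))
    penalty = 1e3 if clearance < SAFE_DIST else 0.0   # collision constraint as penalty
    return goal_cost + penalty

best = min(candidates, key=cost)
```

A real decentralized MPC would optimize over a horizon with dynamics constraints, but the structure is the same: the prediction enters the optimization only through the clearance term.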
arXiv Detail & Related papers (2021-02-10T11:11:08Z)
- Leveraging Neural Network Gradients within Trajectory Optimization for Proactive Human-Robot Interactions [32.57882479132015]
We present a framework that fuses together the interpretability and flexibility of trajectory optimization (TO) with the predictive power of state-of-the-art human trajectory prediction models.
We demonstrate the efficacy of our approach in a multi-agent scenario whereby a robot is required to safely and efficiently navigate through a crowd of up to ten pedestrians.
arXiv Detail & Related papers (2020-12-02T08:43:36Z)
- Modeling Human Temporal Uncertainty in Human-Agent Teams [0.0]
This paper builds a model of human timing uncertainty from a population of crowd-workers.
We conclude that heavy-tailed distributions are the best models of human temporal uncertainty.
We discuss how these results along with our collaborative online game will inform and facilitate future explorations into scheduling for improved human-robot fluency.
arXiv Detail & Related papers (2020-10-09T23:43:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.