Model Predictive Control for Fluid Human-to-Robot Handovers
- URL: http://arxiv.org/abs/2204.00134v1
- Date: Thu, 31 Mar 2022 23:08:20 GMT
- Title: Model Predictive Control for Fluid Human-to-Robot Handovers
- Authors: Wei Yang, Balakumar Sundaralingam, Chris Paxton, Iretiayo Akinola,
Yu-Wei Chao, Maya Cakmak, Dieter Fox
- Abstract summary: Planning motions that take human comfort into account has not been part of the human-robot handover process in most prior work.
We propose to generate smooth motions via an efficient model-predictive control framework.
We conduct human-to-robot handover experiments on a diverse set of objects with several users.
- Score: 50.72520769938633
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human-robot handover is a fundamental yet challenging task in human-robot
interaction and collaboration. Recently, remarkable progress has been made
in human-to-robot handovers of unknown objects by using learning-based grasp
generators. However, how to responsively generate smooth motions to take an
object from a human is still an open question. Specifically, planning motions
that take human comfort into account is not a part of the human-robot handover
process in most prior works. In this paper, we propose to generate smooth
motions via an efficient model-predictive control (MPC) framework that
integrates perception and complex domain-specific constraints into the
optimization problem. We introduce a learning-based grasp reachability model to
select candidate grasps which maximize the robot's manipulability, giving it
more freedom to satisfy these constraints. Finally, we integrate a neural net
force/torque classifier that detects contact events from noisy data. We
conducted human-to-robot handover experiments on a diverse set of objects with
several users (N=4) and performed a systematic evaluation of each module. The
study shows that the users preferred our MPC approach over the baseline system
by a large margin. More results and videos are available at
https://sites.google.com/nvidia.com/mpc-for-handover.
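To make the receding-horizon idea concrete, the sketch below shows a toy MPC loop in the spirit of the abstract: at each control step it samples candidate joint-velocity plans, rolls them out over a short horizon against a goal-tracking cost with smoothness and joint-limit terms, executes only the first action of the best plan, and re-plans from the new state. The 2-link planar arm, the random-shooting optimizer, and all weights are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def fk(q):
    """End-effector position of a toy 2-link planar arm (link lengths 1.0, 0.8)."""
    x = np.cos(q[0]) + 0.8 * np.cos(q[0] + q[1])
    y = np.sin(q[0]) + 0.8 * np.sin(q[0] + q[1])
    return np.array([x, y])

def mpc_step(q, goal, horizon=10, samples=256, dt=0.05, rng=np.random.default_rng(0)):
    """One receding-horizon step: sample joint-velocity plans, roll them out,
    score them, and return the first action of the best plan."""
    best_cost, best_u0 = np.inf, np.zeros(2)
    for _ in range(samples):
        u = rng.normal(0.0, 1.0, size=(horizon, 2))      # candidate velocity plan
        qi, cost = q.copy(), 0.0
        for t in range(horizon):
            qi = np.clip(qi + dt * u[t], -np.pi, np.pi)  # joint-limit constraint
            cost += np.linalg.norm(fk(qi) - goal)        # goal-tracking term
            cost += 0.01 * np.sum(u[t] ** 2)             # smoothness penalty
        if cost < best_cost:
            best_cost, best_u0 = cost, u[0]
    return best_u0

q, goal = np.array([0.1, 0.2]), np.array([1.2, 0.6])
for _ in range(50):                    # closed loop: execute first action, re-plan
    q = q + 0.05 * mpc_step(q, goal)
print("final end-effector position:", fk(q))
```

In the paper's setting, the goal pose would come from the selected grasp, and the cost would additionally encode the perception-driven, domain-specific constraints the abstract mentions.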
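The grasp-selection step maximizes the robot's manipulability. The paper's reachability model is learned, but the classical Yoshikawa measure w(q) = sqrt(det(J(q) J(q)^T)) is the standard way to quantify manipulability; the sketch below ranks hypothetical IK solutions for candidate grasps on the same toy arm by that score.

```python
import numpy as np

def jacobian(q, l1=1.0, l2=0.8):
    """Analytic Jacobian of the toy 2-link planar arm."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def manipulability(q):
    """Yoshikawa measure w(q) = sqrt(det(J J^T)); higher = more dexterous."""
    J = jacobian(q)
    return np.sqrt(max(np.linalg.det(J @ J.T), 0.0))

# Rank hypothetical IK solutions for candidate grasps by manipulability.
candidates = [np.array([0.3, 1.2]), np.array([0.1, 0.05]), np.array([0.8, 2.0])]
best = max(candidates, key=manipulability)
print("selected grasp config:", best, "score:", manipulability(best))
```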
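Finally, the contact-event detector: the abstract says only that a neural network classifies contact from noisy force/torque readings. Below is a minimal sketch assuming fixed-length windows of 6-axis F/T samples and a binary contact label; the MLP architecture, window size, and synthetic training data are all assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class ContactClassifier(nn.Module):
    """Small MLP that labels a window of 6-axis F/T samples as contact / no contact."""
    def __init__(self, window=32, axes=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(window * axes, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 2),                        # logits: [no-contact, contact]
        )
    def forward(self, x):
        return self.net(x)

# Synthetic stand-in data: contact windows get a force step plus sensor noise.
n, window = 512, 32
labels = torch.randint(0, 2, (n,))
x = 0.1 * torch.randn(n, window, 6)
x[labels == 1, window // 2:, :3] += 1.0              # force step on contact

model = ContactClassifier(window)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), labels)
    loss.backward()
    opt.step()
print("train accuracy:", (model(x).argmax(1) == labels).float().mean().item())
```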
Related papers
- Hand-Object Interaction Pretraining from Videos [77.92637809322231]
We learn general robot manipulation priors from 3D hand-object interaction trajectories.
We do so by mapping both the human hand and the manipulated object into a shared 3D space and retargeting human motions to robot actions.
We empirically demonstrate that finetuning this policy, with both reinforcement learning (RL) and behavior cloning (BC), enables sample-efficient adaptation to downstream tasks and simultaneously improves robustness and generalizability compared to prior approaches.
arXiv Detail & Related papers (2024-09-12T17:59:07Z)
- Human-Agent Joint Learning for Efficient Robot Manipulation Skill Acquisition [48.65867987106428]
We introduce a novel system for joint learning between human operators and robots.
It enables human operators to share control of a robot end-effector with a learned assistive agent.
It reduces the need for human adaptation while ensuring the collected data is of sufficient quality for downstream tasks.
arXiv Detail & Related papers (2024-06-29T03:37:29Z)
- HumanoidBench: Simulated Humanoid Benchmark for Whole-Body Locomotion and Manipulation [50.616995671367704]
We present a high-dimensional, simulated robot learning benchmark, HumanoidBench, featuring a humanoid robot equipped with dexterous hands.
Our findings reveal that state-of-the-art reinforcement learning algorithms struggle with most tasks, whereas a hierarchical learning approach achieves superior performance when supported by robust low-level policies.
arXiv Detail & Related papers (2024-03-15T17:45:44Z)
- Learning Multimodal Latent Dynamics for Human-Robot Interaction [19.803547418450236]
This article presents a method for learning well-coordinated Human-Robot Interaction (HRI) from Human-Human Interactions (HHI).
We devise a hybrid approach using Hidden Markov Models (HMMs) as the latent space priors for a Variational Autoencoder to model a joint distribution over the interacting agents (see the sketch below).
We find that users perceive our method as more human-like, timely, and accurate, and rank it above the other baselines.
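A heavily simplified, hypothetical sketch of the HMM-as-prior idea: it assumes the discrete HMM state of each frame is already known and learns only state-conditioned Gaussian prior parameters, whereas the actual method models the full joint distribution over the interacting agents.

```python
import torch
import torch.nn as nn

class HMMPriorVAE(nn.Module):
    """Toy VAE whose latent prior is a Gaussian indexed by a discrete HMM state."""
    def __init__(self, obs_dim=12, latent_dim=4, n_states=5):
        super().__init__()
        self.enc = nn.Linear(obs_dim, 2 * latent_dim)       # -> (mu, logvar)
        self.dec = nn.Linear(latent_dim, obs_dim)
        self.prior_mu = nn.Parameter(torch.randn(n_states, latent_dim))
        self.prior_logvar = nn.Parameter(torch.zeros(n_states, latent_dim))

    def forward(self, x, state):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterize
        recon = self.dec(z)
        # KL(q(z|x) || p(z|state)) between diagonal Gaussians
        pm, plv = self.prior_mu[state], self.prior_logvar[state]
        kl = 0.5 * (plv - logvar + (logvar.exp() + (mu - pm) ** 2) / plv.exp() - 1).sum(-1)
        return recon, kl

model = HMMPriorVAE()
x = torch.randn(8, 12)                     # joint human+robot feature frames
states = torch.randint(0, 5, (8,))         # HMM segment labels per frame
recon, kl = model(x, states)
loss = ((recon - x) ** 2).sum(-1).mean() + kl.mean()
print(loss.item())
```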
arXiv Detail & Related papers (2023-11-27T23:56:59Z)
- SynH2R: Synthesizing Hand-Object Motions for Learning Human-to-Robot Handovers [37.49601724575655]
Vision-based human-to-robot handover is an important and challenging task in human-robot interaction.
We introduce a framework that can generate plausible human grasping motions suitable for training the robot.
This allows us to generate synthetic training and testing data with 100x more objects than previous work.
arXiv Detail & Related papers (2023-11-09T18:57:02Z)
- ImitationNet: Unsupervised Human-to-Robot Motion Retargeting via Shared Latent Space [9.806227900768926]
This paper introduces a novel deep-learning approach for human-to-robot motion retargeting.
Our method does not require paired human-to-robot data, which facilitates its translation to new robots.
Our model outperforms existing works on human-to-robot motion similarity in both efficiency and precision.
arXiv Detail & Related papers (2023-09-11T08:55:04Z)
- Task-Oriented Human-Object Interactions Generation with Implicit Neural Representations [61.659439423703155]
TOHO: Task-Oriented Human-Object Interactions Generation with Implicit Neural Representations.
Our method generates continuous motions that are parameterized only by the temporal coordinate.
This work takes a step further toward general human-scene interaction simulation.
arXiv Detail & Related papers (2023-03-23T09:31:56Z)
- Motron: Multimodal Probabilistic Human Motion Forecasting [30.154996245556532]
Motron is a graph-structured model that captures the multimodality of human motion.
It outputs deterministic motions and corresponding confidence values for each mode.
We demonstrate the performance of our model on several challenging real-world motion forecasting datasets.
arXiv Detail & Related papers (2022-03-08T14:58:41Z)
- Human Grasp Classification for Reactive Human-to-Robot Handovers [50.91803283297065]
We propose an approach for human-to-robot handovers in which the robot meets the human halfway.
We collect a human grasp dataset which covers typical ways of holding objects with various hand shapes and poses.
We present a planning and execution approach that takes the object from the human hand according to the detected grasp and hand position.
arXiv Detail & Related papers (2020-03-12T19:58:03Z)
- Hyperparameters optimization for Deep Learning based emotion prediction for Human Robot Interaction [0.2549905572365809]
We propose an Inception-module-based Convolutional Neural Network architecture (a minimal block is sketched below).
The model is implemented in real time on the humanoid robot NAO, and its robustness is evaluated.
arXiv Detail & Related papers (2020-01-12T05:25:02Z)
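For reference, a minimal Inception-style block of the kind such an architecture stacks; the branch widths, the 48x48 grayscale input, and the 7-class emotion head are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    """Minimal Inception-style block: parallel 1x1, 3x3, 5x5 convolution and
    pooling branches whose outputs are concatenated along the channel axis."""
    def __init__(self, in_ch, branch_ch=16):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, branch_ch, 1),
                                nn.Conv2d(branch_ch, branch_ch, 3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, branch_ch, 1),
                                nn.Conv2d(branch_ch, branch_ch, 5, padding=2))
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, branch_ch, 1))

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

# A toy emotion head on 48x48 grayscale faces (7 emotion classes assumed).
model = nn.Sequential(InceptionBlock(1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                      nn.Flatten(), nn.Linear(64, 7))
print(model(torch.randn(2, 1, 48, 48)).shape)   # -> torch.Size([2, 7])
```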