Dynamic Handover: Throw and Catch with Bimanual Hands
- URL: http://arxiv.org/abs/2309.05655v1
- Date: Mon, 11 Sep 2023 17:49:25 GMT
- Title: Dynamic Handover: Throw and Catch with Bimanual Hands
- Authors: Binghao Huang, Yuanpei Chen, Tianyu Wang, Yuzhe Qin, Yaodong Yang,
Nikolay Atanasov, Xiaolong Wang
- Abstract summary: We design a system with two multi-finger hands attached to robot arms to solve this problem.
We train our system using Multi-Agent Reinforcement Learning in simulation and perform Sim2Real transfer to deploy on the real robots.
To overcome the Sim2Real gap, we provide multiple novel algorithm designs including learning a trajectory prediction model for the object.
- Score: 30.206469112964033
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans throw and catch objects all the time. However, such a seemingly common
skill introduces many challenges for robots: they need to perform such
dynamic actions at high speed, collaborate precisely, and interact
with diverse objects. In this paper, we design a system with two multi-finger
hands attached to robot arms to solve this problem. We train our system using
Multi-Agent Reinforcement Learning in simulation and perform Sim2Real transfer
to deploy on the real robots. To overcome the Sim2Real gap, we provide multiple
novel algorithm designs including learning a trajectory prediction model for
the object. Such a model gives the robot catcher a real-time estimate of
where the object is heading, so that it can react accordingly. We conduct our
experiments with multiple objects in the real-world system, and show
significant improvements over multiple baselines. Our project page is available
at https://binghao-huang.github.io/dynamic_handover/.
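The abstract's key Sim2Real ingredient is a learned trajectory prediction model that tells the catcher where the object is heading. This page gives no implementation details, so the following is only a minimal sketch, assuming the predictor regresses future 3D object positions from a short window of tracked positions; all names, dimensions, and the training setup are illustrative, not the authors' method.

```python
# Minimal sketch of an object-trajectory prediction model (PyTorch).
# Assumptions (not from the paper): the model sees the last `history`
# tracked 3D positions of the flying object and regresses its positions
# over the next `horizon` steps; the catcher conditions on this output.
import torch
import torch.nn as nn

class TrajectoryPredictor(nn.Module):
    def __init__(self, history: int = 10, horizon: int = 20, hidden: int = 256):
        super().__init__()
        self.horizon = horizon
        self.net = nn.Sequential(
            nn.Linear(history * 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, horizon * 3),
        )

    def forward(self, past_xyz: torch.Tensor) -> torch.Tensor:
        # past_xyz: (batch, history, 3) -> predicted (batch, horizon, 3)
        return self.net(past_xyz.flatten(start_dim=1)).view(-1, self.horizon, 3)

# Training-step sketch: regress predicted waypoints against ground-truth
# future positions, e.g. logged from throws in simulation.
model = TrajectoryPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
past = torch.randn(64, 10, 3)    # placeholder batch of observed windows
future = torch.randn(64, 20, 3)  # placeholder ground-truth futures
loss = nn.functional.mse_loss(model(past), future)
opt.zero_grad()
loss.backward()
opt.step()
```

At deployment time, such a predictor would be queried every control step with the latest tracked positions so the catching hand can move toward the predicted interception point.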
Related papers
- RoboCasa: Large-Scale Simulation of Everyday Tasks for Generalist Robots [25.650235551519952]
We present RoboCasa, a large-scale simulation framework for training generalist robots in everyday environments.
We provide thousands of 3D assets across over 150 object categories and dozens of interactable furniture and appliances.
Our experiments show a clear scaling trend in using synthetically generated robot data for large-scale imitation learning.
arXiv Detail & Related papers (2024-06-04T17:41:31Z)
- Track2Act: Predicting Point Tracks from Internet Videos enables Generalizable Robot Manipulation [65.46610405509338]
We seek to learn a generalizable goal-conditioned policy that enables zero-shot robot manipulation.
Our framework, Track2Act, predicts tracks of how points in an image should move in future time-steps based on a goal.
We show that this approach of combining scalably learned track prediction with a residual policy enables diverse generalizable robot manipulation.
arXiv Detail & Related papers (2024-05-02T17:56:55Z)
- RoboScript: Code Generation for Free-Form Manipulation Tasks across Real and Simulation [77.41969287400977]
This paper presents RobotScript, a platform for a deployable robot manipulation pipeline powered by code generation.
We also present a benchmark for code generation for robot manipulation tasks specified in free-form natural language.
We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms.
arXiv Detail & Related papers (2024-02-22T15:12:00Z)
- HomeRobot: Open-Vocabulary Mobile Manipulation [107.05702777141178]
Open-Vocabulary Mobile Manipulation (OVMM) is the problem of picking any object in any unseen environment, and placing it in a commanded location.
HomeRobot has two components: a simulation component, which uses a large and diverse curated object set in new, high-quality multi-room home environments; and a real-world component, providing a software stack for the low-cost Hello Robot Stretch.
arXiv Detail & Related papers (2023-06-20T14:30:32Z)
- Affordances from Human Videos as a Versatile Representation for Robotics [31.248842798600606]
We train a visual affordance model that estimates where and how in the scene a human is likely to interact.
The structure of these behavioral affordances directly enables the robot to perform many complex tasks.
We show the efficacy of our approach, which we call VRB, across 4 real world environments, over 10 different tasks, and 2 robotic platforms operating in the wild.
arXiv Detail & Related papers (2023-04-17T17:59:34Z)
- DexTransfer: Real World Multi-fingered Dexterous Grasping with Minimal Human Demonstrations [51.87067543670535]
We propose a robot-learning system that can take a small number of human demonstrations and learn to grasp unseen object poses.
We train a dexterous grasping policy that takes the point clouds of the object as input and predicts continuous actions to grasp objects from different initial robot states.
The policy learned from our dataset can generalize well on unseen object poses in both simulation and the real world.
arXiv Detail & Related papers (2022-09-28T17:51:49Z)
- Malleable Agents for Re-Configurable Robotic Manipulators [0.0]
We propose an RL agent with sequence neural networks embedded in its deep network so that it can adapt to robotic arms with a varying number of links.
With the additional tool of domain randomization, this agent adapts to different configurations with a varying number and length of links and with dynamics noise.
arXiv Detail & Related papers (2022-02-04T21:22:00Z)
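The Malleable Agents entry above hinges on a sequence network that lets one policy drive arms with different link counts. A minimal sketch of that idea follows; the architecture and feature sizes are assumed for illustration, not taken from the paper.

```python
# Sketch: a policy that handles a variable number of links by encoding
# per-link features with an LSTM (architecture assumed, not the paper's).
import torch
import torch.nn as nn

class MalleablePolicy(nn.Module):
    def __init__(self, link_feat: int = 4, hidden: int = 128):
        super().__init__()
        self.encoder = nn.LSTM(link_feat, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # one torque per link

    def forward(self, links: torch.Tensor) -> torch.Tensor:
        # links: (batch, n_links, link_feat); n_links may vary per arm
        out, _ = self.encoder(links)
        return self.head(out).squeeze(-1)  # (batch, n_links) torques

policy = MalleablePolicy()
print(policy(torch.randn(1, 3, 4)).shape)  # 3-link arm -> 3 torques
print(policy(torch.randn(1, 6, 4)).shape)  # 6-link arm -> 6 torques
```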
- V-MAO: Generative Modeling for Multi-Arm Manipulation of Articulated Objects [51.79035249464852]
We present a framework for learning multi-arm manipulation of articulated objects.
Our framework includes a variational generative model that learns contact point distribution over object rigid parts for each robot arm.
arXiv Detail & Related papers (2021-11-07T02:31:09Z)
- Learning Cross-Domain Correspondence for Control with Dynamics Cycle-Consistency [60.39133304370604]
We learn to align dynamic robot behavior across two domains using a cycle-consistency constraint.
Our framework is able to align uncalibrated monocular video of a real robot arm to dynamic state-action trajectories of a simulated arm without paired data.
arXiv Detail & Related papers (2020-12-17T18:22:25Z)
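The cross-domain correspondence entry above turns on a dynamics cycle-consistency constraint: a state mapped across domains and rolled forward should agree with the mapping of the true next state. A rough sketch of such a loss, with every network an illustrative placeholder rather than the paper's architecture:

```python
# Rough sketch of a dynamics cycle-consistency loss; the mapping network
# and dynamics model are illustrative placeholders, not the paper's.
import torch
import torch.nn as nn

state_dim, action_dim = 8, 2
phi = nn.Linear(state_dim, state_dim)                 # cross-domain state map
dyn_b = nn.Linear(state_dim + action_dim, state_dim)  # learned target-domain dynamics

def dynamics_cycle_loss(s, a, s_next):
    # Map the source state across domains, step the target dynamics,
    # and require consistency with the mapped true next state.
    pred_next = dyn_b(torch.cat([phi(s), a], dim=-1))
    return nn.functional.mse_loss(pred_next, phi(s_next))

loss = dynamics_cycle_loss(torch.randn(32, 8), torch.randn(32, 2), torch.randn(32, 8))
```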
- robo-gym -- An Open Source Toolkit for Distributed Deep Reinforcement Learning on Real and Simulated Robots [0.5161531917413708]
We propose robo-gym, an open-source toolkit to increase the use of Deep Reinforcement Learning with real robots.
We demonstrate a unified setup for simulation and real environments which enables a seamless transfer from training in simulation to application on the robot.
We showcase the capabilities and the effectiveness of the framework with two real world applications featuring industrial robots.
arXiv Detail & Related papers (2020-07-06T13:51:33Z)
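robo-gym exposes its environments through the standard Gym interface, which is what enables the seamless sim-to-real transfer the entry above describes. The loop below sketches that usage; the environment ID and the ip keyword pointing at a robot server are assumptions for illustration and should be checked against the robo-gym documentation.

```python
# Sketch of driving a robo-gym environment through the standard Gym API.
# The environment ID and the `ip` argument are hypothetical; consult the
# robo-gym docs for the real registered environments and parameters.
import gym
import robo_gym  # registers the robo-gym environments with Gym

env = gym.make("ExampleRoboGymSim-v0", ip="127.0.0.1")  # hypothetical ID
obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # stand-in for a trained policy
    obs, reward, done, info = env.step(action)
env.close()
```

Because simulated and real variants share this interface, switching from training in simulation to deployment on hardware amounts to pointing the same loop at a different environment.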
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.