Learning Multi-Arm Manipulation Through Collaborative Teleoperation
- URL: http://arxiv.org/abs/2012.06738v1
- Date: Sat, 12 Dec 2020 05:43:43 GMT
- Title: Learning Multi-Arm Manipulation Through Collaborative Teleoperation
- Authors: Albert Tung, Josiah Wong, Ajay Mandlekar, Roberto Martín-Martín, Yuke Zhu, Li Fei-Fei, Silvio Savarese
- Abstract summary: Imitation Learning (IL) is a powerful paradigm to teach robots to perform manipulation tasks.
Many real-world tasks require multiple arms, such as lifting a heavy object or assembling a desk.
We present Multi-Arm RoboTurk (MART), a multi-user data collection platform that allows multiple remote users to simultaneously teleoperate a set of robotic arms.
- Score: 63.35924708783826
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Imitation Learning (IL) is a powerful paradigm to teach robots to perform
manipulation tasks by allowing them to learn from human demonstrations
collected via teleoperation, but has mostly been limited to single-arm
manipulation. However, many real-world tasks require multiple arms, such as
lifting a heavy object or assembling a desk. Unfortunately, applying IL to
multi-arm manipulation tasks has been challenging -- asking a human to control
more than one robotic arm can impose significant cognitive burden and is often
only possible for a maximum of two robot arms. To address these challenges, we
present Multi-Arm RoboTurk (MART), a multi-user data collection platform that
allows multiple remote users to simultaneously teleoperate a set of robotic
arms and collect demonstrations for multi-arm tasks. Using MART, we collected
demonstrations for five novel two- and three-arm tasks from several
geographically separated users. From our data we arrived at a critical insight:
most multi-arm tasks do not require global coordination throughout their full
duration, but only during specific moments. We show that learning from such
data consequently presents challenges for centralized agents that directly
attempt to model all robot actions simultaneously, and perform a comprehensive
study of different policy architectures with varying levels of centralization
on our tasks. Finally, we propose and evaluate a base-residual policy framework
that allows trained policies to better adapt to the mixed coordination setting
common in multi-arm manipulation, and show that a centralized policy augmented
with a decentralized residual model outperforms all other models on our set of
benchmark tasks. Additional results and videos at
https://roboturk.stanford.edu/multiarm .
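The base-residual idea reduces to a simple composition: a centralized base policy maps the joint observation of all arms to a joint action, and a small per-arm residual network, seeing only that arm's local observation, adds a correction on top. Below is a minimal PyTorch sketch of that wiring; the module names, MLP structure, and dimensions are illustrative assumptions, not taken from the paper's released code:

```python
import torch
import torch.nn as nn

class CentralizedBase(nn.Module):
    """Maps the concatenated observations of all arms to all arm actions."""
    def __init__(self, obs_dim, act_dim, num_arms, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim * num_arms, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim * num_arms),
        )

    def forward(self, joint_obs):              # (B, num_arms * obs_dim)
        return self.net(joint_obs)             # (B, num_arms * act_dim)

class DecentralizedResidual(nn.Module):
    """Per-arm correction computed from that arm's local observation only."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, local_obs):              # (B, obs_dim)
        return self.net(local_obs)             # (B, act_dim)

class BaseResidualPolicy(nn.Module):
    """Centralized base action plus decentralized per-arm residuals."""
    def __init__(self, obs_dim, act_dim, num_arms):
        super().__init__()
        self.num_arms, self.act_dim = num_arms, act_dim
        self.base = CentralizedBase(obs_dim, act_dim, num_arms)
        self.residuals = nn.ModuleList(
            DecentralizedResidual(obs_dim, act_dim) for _ in range(num_arms)
        )

    def forward(self, per_arm_obs):            # (B, num_arms, obs_dim)
        base = self.base(per_arm_obs.flatten(1))
        base = base.view(-1, self.num_arms, self.act_dim)
        corr = torch.stack(
            [r(per_arm_obs[:, i]) for i, r in enumerate(self.residuals)], dim=1
        )
        return base + corr                     # residual refines the base action

policy = BaseResidualPolicy(obs_dim=32, act_dim=7, num_arms=3)
actions = policy(torch.randn(8, 3, 32))        # -> (8, 3, 7)
```

This split mirrors the abstract's insight: the centralized pathway can capture the brief coordinated moments, while the decentralized residuals let each arm specialize during the long stretches that need no global coordination.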
Related papers
- Scaling Cross-Embodied Learning: One Policy for Manipulation, Navigation, Locomotion and Aviation [49.03165169369552]
By training a single policy across many different kinds of robots, a robot learning method can leverage much broader and more diverse datasets.
We propose CrossFormer, a scalable and flexible transformer-based policy that can consume data from any embodiment.
We demonstrate that the same network weights can control vastly different robots, including single- and dual-arm manipulation systems, wheeled robots, quadcopters, and quadrupeds.
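One common way to realize such a cross-embodiment policy is to give each embodiment its own thin tokenizer and action head around a shared transformer trunk. A hedged sketch under assumed observation and action dimensions; the names and sizes below are illustrative, not CrossFormer's actual architecture:

```python
import torch
import torch.nn as nn

class CrossEmbodimentPolicy(nn.Module):
    """One shared transformer trunk; per-embodiment tokenizers and heads."""
    def __init__(self, embodiments, d_model=128):
        super().__init__()
        # embodiments: name -> (obs_dim, act_dim); the dims are assumptions.
        self.tokenizers = nn.ModuleDict(
            {n: nn.Linear(o, d_model) for n, (o, _) in embodiments.items()}
        )
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, num_layers=2)
        self.heads = nn.ModuleDict(
            {n: nn.Linear(d_model, a) for n, (_, a) in embodiments.items()}
        )

    def forward(self, name, obs_seq):          # (B, T, obs_dim) for one robot type
        feats = self.trunk(self.tokenizers[name](obs_seq))
        return self.heads[name](feats[:, -1])  # action from the latest token

# The same trunk weights serve both embodiments; only the thin adapters differ.
policy = CrossEmbodimentPolicy({"dual_arm": (64, 14), "quadruped": (48, 12)})
act = policy("quadruped", torch.randn(2, 10, 48))   # -> (2, 12)
```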
arXiv Detail & Related papers (2024-08-21T17:57:51Z)
- RoboAgent: Generalization and Efficiency in Robot Manipulation via Semantic Augmentations and Action Chunking [54.776890150458385]
We develop an efficient system for training universal agents capable of multi-task manipulation skills.
We are able to train a single agent capable of 12 unique skills, and demonstrate its generalization over 38 tasks.
On average, RoboAgent outperforms prior methods by over 40% in unseen situations.
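Action chunking, one of the two ingredients named in the title, is easy to sketch: the policy predicts the next H actions at once and executes them open-loop before replanning. In the sketch below, the MLP structure, horizon, and the `env_step` callback are hypothetical placeholders:

```python
import torch
import torch.nn as nn

class ChunkedPolicy(nn.Module):
    """Predicts a chunk of the next H actions from a single observation."""
    def __init__(self, obs_dim, act_dim, horizon=8, hidden=256):
        super().__init__()
        self.horizon, self.act_dim = horizon, act_dim
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, horizon * act_dim),
        )

    def forward(self, obs):                    # (B, obs_dim)
        return self.net(obs).view(-1, self.horizon, self.act_dim)

def rollout(policy, env_step, obs, steps=64):
    """Query the policy once per chunk, then execute the chunk open-loop.

    env_step is a hypothetical callback: action -> next observation.
    """
    t = 0
    while t < steps:
        chunk = policy(obs.unsqueeze(0))[0]    # (H, act_dim)
        for action in chunk:
            obs = env_step(action)             # replan only after the chunk ends
            t += 1
            if t >= steps:
                break
    return obs
```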
arXiv Detail & Related papers (2023-09-05T03:14:39Z)
- LEMMA: Learning Language-Conditioned Multi-Robot Manipulation [21.75163634731677]
LanguagE-Conditioned Multi-robot MAnipulation (LEMMA) features 8 types of procedurally generated tasks with varying degrees of complexity.
For each task, we provide 800 expert demonstrations and human instructions for training and evaluation.
arXiv Detail & Related papers (2023-08-02T04:37:07Z)
- Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
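A sketch of how image-defined substeps could drive autonomous RL: train one success classifier per substep from the user's example images and use its score as the reward, advancing once the classifier is confident. All names, the network, and the threshold below are assumptions, not the paper's exact system:

```python
import torch
import torch.nn as nn

class SubstepClassifier(nn.Module):
    """Scores how much the current camera image resembles a substep's goal."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, image):                  # (B, 3, H, W)
        return torch.sigmoid(self.net(image))  # success probability in [0, 1]

def substep_reward(classifiers, stage, image, threshold=0.9):
    """Reward from the active substep's classifier; advance when confident.

    classifiers: one SubstepClassifier per user-defined substep, each trained
    on that substep's goal images. The threshold is an illustrative value.
    """
    p = classifiers[stage](image.unsqueeze(0)).item()
    return p, p > threshold                    # (reward, move to next substep?)
```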
arXiv Detail & Related papers (2022-12-19T22:50:40Z)
- CLAS: Coordinating Multi-Robot Manipulation with Central Latent Action Spaces [9.578169216444813]
This paper proposes an approach to coordinating multi-robot manipulation through learned latent action spaces that are shared across different agents.
We validate our method in simulated multi-robot manipulation tasks and demonstrate improvement over previous baselines in terms of sample efficiency and learning performance.
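The central latent action space can be pictured as a learned decoder that expands one shared low-dimensional action into per-robot controls, so the high-level policy only ever acts in the small latent space. A minimal sketch under assumed dimensions, not the paper's actual code:

```python
import torch
import torch.nn as nn

class LatentActionDecoder(nn.Module):
    """Expands one shared low-dim latent action into per-robot controls."""
    def __init__(self, latent_dim, obs_dim, act_dims, hidden=128):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Sequential(
                nn.Linear(latent_dim + obs_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, act_dim),
            )
            for act_dim in act_dims
        )

    def forward(self, z, per_robot_obs):       # z: (B, latent_dim)
        return [
            head(torch.cat([z, per_robot_obs[:, i]], dim=-1))
            for i, head in enumerate(self.heads)
        ]

# The high-level policy acts only in the 4-D latent space rather than the
# 14-D joint action space, which is where the sample efficiency comes from.
decoder = LatentActionDecoder(latent_dim=4, obs_dim=16, act_dims=[7, 7])
acts = decoder(torch.randn(2, 4), torch.randn(2, 2, 16))   # two 7-DoF arms
```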
arXiv Detail & Related papers (2022-11-28T23:20:47Z)
- Bi-Manual Manipulation and Attachment via Sim-to-Real Reinforcement Learning [23.164743388342803]
We study how to solve bi-manual tasks using reinforcement learning trained in simulation.
We design a Connect Task, in which the aim is for two robot arms to pick up and attach two blocks with magnetic connection points.
We also discuss modifications to our simulated environment that lead to effective training of RL policies.
arXiv Detail & Related papers (2022-03-15T21:49:20Z)
- V-MAO: Generative Modeling for Multi-Arm Manipulation of Articulated Objects [51.79035249464852]
We present a framework for learning multi-arm manipulation of articulated objects.
Our framework includes a variational generative model that learns a contact point distribution over the object's rigid parts for each robot arm.
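Such a contact-point model is naturally a conditional VAE: encode an observed contact point together with a feature of the rigid part, and at test time sample candidate contact points from the prior. A minimal sketch, with all architecture details assumed rather than taken from V-MAO:

```python
import torch
import torch.nn as nn

class ContactPointCVAE(nn.Module):
    """Conditional VAE over a 3-D contact point, given a rigid-part feature."""
    def __init__(self, cond_dim, latent_dim=8, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(3 + cond_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, point, cond):            # training pass (reconstruction)
        h = self.enc(torch.cat([point, cond], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.dec(torch.cat([z, cond], dim=-1)), mu, logvar

    def sample(self, cond):                    # test-time candidate contact point
        z = torch.randn(cond.shape[0], self.mu.out_features)
        return self.dec(torch.cat([z, cond], dim=-1))
```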
arXiv Detail & Related papers (2021-11-07T02:31:09Z)
- Learning to Centralize Dual-Arm Assembly [0.6091702876917281]
This work focuses on assembly with humanoid robots by providing a framework for dual-arm peg-in-hole manipulation.
We reduce modeling effort to a minimum by using sparse rewards only.
We demonstrate the effectiveness of the framework on dual-arm peg-in-hole and analyze sample efficiency and success rates for different action spaces.
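With sparse rewards only, the reward function itself can be a one-liner: success or nothing. An illustrative version for peg-in-hole (the tolerance is a made-up value, not from the paper):

```python
import numpy as np

def sparse_reward(peg_pos, hole_pos, tol=0.005):
    """1.0 only once the peg tip is within tol metres of the hole, else 0.0."""
    return float(np.linalg.norm(peg_pos - hole_pos) < tol)
```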
arXiv Detail & Related papers (2021-10-08T09:59:12Z)
- Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
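One way to "retain experiences" across a task sequence, sketched here as an assumption rather than the paper's exact mechanism, is to keep every task's transitions and mix a fixed fraction of old-task data into each training batch for the current task:

```python
import random

class LifelongReplay:
    """Retains every task's transitions and mixes them into later training."""
    def __init__(self):
        self.buffers = {}                      # task_id -> list of transitions

    def add(self, task_id, transition):
        self.buffers.setdefault(task_id, []).append(transition)

    def sample(self, current_task, batch_size, retain_frac=0.5):
        old = [t for tid, buf in self.buffers.items()
               if tid != current_task for t in buf]
        cur = self.buffers.get(current_task, [])
        n_old = min(len(old), int(batch_size * retain_frac))
        batch = random.sample(old, n_old) if n_old else []
        batch += random.sample(cur, min(len(cur), batch_size - n_old))
        return batch
```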
arXiv Detail & Related papers (2021-09-19T18:00:51Z)