Bridge Data: Boosting Generalization of Robotic Skills with Cross-Domain
Datasets
- URL: http://arxiv.org/abs/2109.13396v1
- Date: Mon, 27 Sep 2021 23:42:12 GMT
- Title: Bridge Data: Boosting Generalization of Robotic Skills with Cross-Domain
Datasets
- Authors: Frederik Ebert, Yanlai Yang, Karl Schmeckpeper, Bernadette Bucher,
Georgios Georgakis, Kostas Daniilidis, Chelsea Finn, Sergey Levine
- Abstract summary: We study how multi-domain and multi-task datasets can improve the learning of new tasks in new environments.
We also find that data for only a few tasks in a new domain can bridge the domain gap and make it possible for a robot to perform a variety of prior tasks that were only seen in other domains.
- Score: 122.85598648289789
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Robot learning holds the promise of learning policies that generalize
broadly. However, such generalization requires sufficiently diverse datasets of
the task of interest, which can be prohibitively expensive to collect. In other
fields, such as computer vision, it is common to utilize shared, reusable
datasets, such as ImageNet, to overcome this challenge, but this has proven
difficult in robotics. In this paper, we ask: what would it take to enable
practical data reuse in robotics for end-to-end skill learning? We hypothesize
that the key is to use datasets with multiple tasks and multiple domains, such
that a new user who wants to train their robot to perform a new task in a new
domain can include this dataset in their training process and benefit from
cross-task and cross-domain generalization. To evaluate this hypothesis, we
collect a large multi-domain and multi-task dataset, with 7,200 demonstrations
constituting 71 tasks across 10 environments, and empirically study how this
data can improve the learning of new tasks in new environments. We find that
jointly training with the proposed dataset and 50 demonstrations of a
never-before-seen task in a new domain on average leads to a 2x improvement in
success rate compared to using target domain data alone. We also find that data
for only a few tasks in a new domain can bridge the domain gap and make it
possible for a robot to perform a variety of prior tasks that were only seen in
other domains. These results suggest that reusing diverse multi-task and
multi-domain datasets, including our open-source dataset, may pave the way for
broader robot generalization, eliminating the need to re-collect data for each
new robot learning project.
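The joint-training recipe described above (combining the shared bridge dataset with a small number of target-domain demonstrations) can be sketched as a simple dataset-mixing step for imitation learning. The paper does not specify its exact sampling scheme; the function name `make_joint_batches` and the `target_fraction` ratio below are illustrative assumptions, not the authors' implementation.

```python
import random

def make_joint_batches(bridge_data, target_demos, batch_size, target_fraction=0.3, seed=0):
    """Sample one minibatch mixing the large shared bridge dataset with a
    small set of target-domain demos (target_fraction is a hypothetical
    mixing ratio, not taken from the paper)."""
    rng = random.Random(seed)
    # reserve a fixed share of the batch for the scarce target-domain demos
    n_target = max(1, int(batch_size * target_fraction))
    n_bridge = batch_size - n_target
    # sample with replacement, since only ~50 target demos are available
    batch = rng.choices(target_demos, k=n_target) + rng.choices(bridge_data, k=n_bridge)
    rng.shuffle(batch)
    return batch

# toy usage mirroring the paper's data scales: 7,200 bridge demos, 50 target demos
bridge = [("bridge", i) for i in range(7200)]
target = [("target", i) for i in range(50)]
batch = make_joint_batches(bridge, target, batch_size=10)
```

Each batch then feeds a standard behavioral-cloning update; upweighting the scarce target-domain demos keeps them from being drowned out by the much larger bridge dataset.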
Related papers
- PoCo: Policy Composition from and for Heterogeneous Robot Learning [44.1315170137613]
Current methods usually collect and pool all data from one domain to train a single policy.
We present a flexible approach, dubbed Policy Composition, to combine information across diverse modalities and domains.
Our method can use task-level composition for multi-task manipulation and be composed with analytic cost functions to adapt policy behaviors at inference time.
arXiv Detail & Related papers (2024-02-04T14:51:49Z)
- BridgeData V2: A Dataset for Robot Learning at Scale [73.86688388408021]
BridgeData V2 is a large and diverse dataset of robotic manipulation behaviors.
It contains 60,096 trajectories collected across 24 environments on a publicly available low-cost robot.
arXiv Detail & Related papers (2023-08-24T17:41:20Z)
- RH20T: A Comprehensive Robotic Dataset for Learning Diverse Skills in One-Shot [56.130215236125224]
A key challenge in robotic manipulation in open domains is how to acquire diverse and generalizable skills for robots.
Recent research in one-shot imitation learning has shown promise in transferring trained policies to new tasks based on demonstrations.
This paper aims to unlock the potential for an agent to generalize to hundreds of real-world skills with multi-modal perception.
arXiv Detail & Related papers (2023-07-02T15:33:31Z)
- Understanding the World Through Action [91.3755431537592]
I will argue that a general, principled, and powerful framework for utilizing unlabeled data can be derived from reinforcement learning.
I will discuss how such a procedure is more closely aligned with potential downstream tasks.
arXiv Detail & Related papers (2021-10-24T22:33:52Z)
- Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
arXiv Detail & Related papers (2021-09-19T18:00:51Z)
- Efficient Self-Supervised Data Collection for Offline Robot Learning [17.461103383630853]
A practical approach to robot reinforcement learning is to first collect a large batch of real or simulated robot interaction data.
We develop a simple-yet-effective goal-conditioned reinforcement-learning method that actively focuses data collection on novel observations.
arXiv Detail & Related papers (2021-05-10T18:42:58Z)
- COG: Connecting New Skills to Past Experience with Offline Reinforcement Learning [78.13740204156858]
We show that we can reuse prior data to extend new skills simply through dynamic programming.
We demonstrate the effectiveness of our approach by chaining together several behaviors seen in prior datasets for solving a new task.
We train our policies in an end-to-end fashion, mapping high-dimensional image observations to low-level robot control commands.
arXiv Detail & Related papers (2020-10-27T17:57:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.