BridgeData V2: A Dataset for Robot Learning at Scale
- URL: http://arxiv.org/abs/2308.12952v3
- Date: Wed, 17 Jan 2024 22:41:29 GMT
- Title: BridgeData V2: A Dataset for Robot Learning at Scale
- Authors: Homer Walke, Kevin Black, Abraham Lee, Moo Jin Kim, Max Du, Chongyi
Zheng, Tony Zhao, Philippe Hansen-Estruch, Quan Vuong, Andre He, Vivek Myers,
Kuan Fang, Chelsea Finn, Sergey Levine
- Abstract summary: BridgeData V2 is a large and diverse dataset of robotic manipulation behaviors.
It contains 60,096 trajectories collected across 24 environments on a publicly available low-cost robot.
- Score: 73.86688388408021
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce BridgeData V2, a large and diverse dataset of robotic
manipulation behaviors designed to facilitate research on scalable robot
learning. BridgeData V2 contains 60,096 trajectories collected across 24
environments on a publicly available low-cost robot. BridgeData V2 provides
extensive task and environment variability, leading to skills that can
generalize across environments, domains, and institutions, making the dataset a
useful resource for a broad range of researchers. Additionally, the dataset is
compatible with a wide variety of open-vocabulary, multi-task learning methods
conditioned on goal images or natural language instructions. In our
experiments, we train 6 state-of-the-art imitation learning and offline
reinforcement learning methods on our dataset, and find that they succeed on a
suite of tasks requiring varying amounts of generalization. We also demonstrate
that the performance of these methods improves with more data and higher
capacity models, and that training on a greater variety of skills leads to
improved generalization. By publicly sharing BridgeData V2 and our pre-trained
models, we aim to accelerate research in scalable robot learning methods.
Project page at https://rail-berkeley.github.io/bridgedata
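As a minimal illustration of the two conditioning modes the abstract mentions, the sketch below treats a trajectory as per-step images, actions, and a language instruction, and derives goal-image and language-conditioned training tuples from it. The field names and shapes are illustrative assumptions, not BridgeData V2's actual schema.

```python
# Minimal sketch of a goal- or language-conditioned trajectory record.
# Field names/shapes are illustrative assumptions, not the dataset's schema.
from dataclasses import dataclass
import numpy as np

@dataclass
class Trajectory:
    images: np.ndarray   # (T, H, W, 3) RGB observations
    actions: np.ndarray  # (T, 7) e.g. end-effector deltas + gripper
    instruction: str     # natural-language task description

def to_goal_conditioned(traj: Trajectory):
    """Yield (observation, goal_image, action) tuples: the final frame
    serves as the goal, so the same data supports goal-image policies."""
    goal = traj.images[-1]
    for t in range(len(traj.actions)):
        yield traj.images[t], goal, traj.actions[t]

def to_language_conditioned(traj: Trajectory):
    """Yield (observation, instruction, action) tuples for
    language-conditioned imitation learning."""
    for t in range(len(traj.actions)):
        yield traj.images[t], traj.instruction, traj.actions[t]
```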
Related papers
- Scaling Cross-Embodied Learning: One Policy for Manipulation, Navigation, Locomotion and Aviation [49.03165169369552]
By training a single policy across many different kinds of robots, a robot learning method can leverage much broader and more diverse datasets.
We propose CrossFormer, a scalable and flexible transformer-based policy that can consume data from any embodiment.
We demonstrate that the same network weights can control vastly different robots, including single and dual arm manipulation systems, wheeled robots, quadcopters, and quadrupeds.
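A rough sketch of how one policy can consume data from many embodiments: project each embodiment's observations into a shared token space, run a single transformer trunk, and decode with a per-embodiment action head. This is not CrossFormer's actual architecture; the module names, sizes, and example embodiments are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class CrossEmbodimentPolicy(nn.Module):
    """Shared transformer trunk with per-embodiment projectors and heads."""
    def __init__(self, action_dims: dict, d_model: int = 256):
        super().__init__()
        # Project each embodiment's observation features into a shared token space.
        self.projectors = nn.ModuleDict(
            {name: nn.LazyLinear(d_model) for name in action_dims}
        )
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, num_layers=4)
        # Each embodiment keeps its own action space via a small readout head.
        self.heads = nn.ModuleDict(
            {name: nn.Linear(d_model, dim) for name, dim in action_dims.items()}
        )

    def forward(self, obs_tokens: torch.Tensor, embodiment: str) -> torch.Tensor:
        tokens = self.projectors[embodiment](obs_tokens)  # (B, T, d_model)
        features = self.trunk(tokens)
        return self.heads[embodiment](features[:, -1])    # (B, action_dim)

policy = CrossEmbodimentPolicy({"manipulator": 7, "quadruped": 12})
action = policy(torch.randn(1, 8, 64), "manipulator")     # -> shape (1, 7)
```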
arXiv Detail & Related papers (2024-08-21T17:57:51Z)
- Scaling Robot Learning with Semantically Imagined Experience [21.361979238427722]
Recent advances in robot learning have shown promise in enabling robots to perform manipulation tasks.
One of the key contributing factors to this progress is the scale of robot data used to train the models.
We propose an alternative route and leverage text-to-image foundation models widely used in computer vision and natural language processing.
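One way to read "leverage text-to-image foundation models" is as semantic augmentation of existing robot data: regenerate the scene around a demonstration while keeping its action labels fixed. The sketch below is a hedged illustration of that idea only; `inpaint` is a stand-in for any diffusion inpainting model, not a specific library API.

```python
import numpy as np

def inpaint(image: np.ndarray, mask: np.ndarray, prompt: str) -> np.ndarray:
    """Stand-in for a text-to-image inpainting model; returns the input
    unchanged so the sketch runs end to end. Swap in a real model here."""
    return image

def augment_trajectory(images, actions, background_mask, prompts):
    """Make semantically varied copies of one demonstration: the visuals
    change per prompt, the recorded actions stay fixed."""
    out = []
    for prompt in prompts:
        new_images = np.stack([inpaint(im, background_mask, prompt) for im in images])
        out.append((new_images, actions))
    return out

images = np.zeros((10, 64, 64, 3), dtype=np.uint8)  # toy trajectory frames
actions = np.zeros((10, 7))
mask = np.ones((64, 64), dtype=bool)                # region allowed to change
copies = augment_trajectory(images, actions, mask,
                            ["on a wooden table", "in a sink"])
```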
arXiv Detail & Related papers (2023-02-22T18:47:51Z)
- RT-1: Robotics Transformer for Real-World Control at Scale [98.09428483862165]
We present a model class, dubbed Robotics Transformer, that exhibits promising scalable model properties.
We verify our conclusions in a study of different model classes and their ability to generalize as a function of the data size, model size, and data diversity based on a large-scale data collection on real robots performing real-world tasks.
arXiv Detail & Related papers (2022-12-13T18:55:15Z)
- Development of a robust cascaded architecture for intelligent robot grasping using limited labelled data [0.0]
In the case of robots, we cannot afford to spend extensive time teaching them to grasp objects effectively.
We propose an efficient learning architecture based on a VQ-VAE so that robots can be taught with sufficient data corresponding to correct grasping.
We also investigate a semi-supervised learning based model that retains strong generalization capability even with a limited labelled dataset.
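The core VQ-VAE operation such an architecture builds on is vector quantization: snapping each encoder output to its nearest codebook entry so downstream components work with discrete latents. Below is a generic numpy sketch of that step, not the paper's cascaded grasping architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(128, 16))  # 128 discrete codes, 16-dim each

def quantize(z: np.ndarray) -> np.ndarray:
    """Map encoder outputs z of shape (N, 16) to nearest codebook entries."""
    # Squared distance between every latent and every code: (N, 128)
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    codes = dists.argmin(axis=1)  # index of nearest code per latent
    return codebook[codes]        # quantized latents, shape (N, 16)

z = rng.normal(size=(4, 16))      # pretend encoder outputs
z_q = quantize(z)                 # discrete-latent input to the decoder
```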
arXiv Detail & Related papers (2021-11-06T11:01:15Z)
- Bridge Data: Boosting Generalization of Robotic Skills with Cross-Domain Datasets [122.85598648289789]
We study how multi-domain and multi-task datasets can improve the learning of new tasks in new environments.
We also find that data for only a few tasks in a new domain can bridge the domain gap and make it possible for a robot to perform a variety of prior tasks that were only seen in other domains.
arXiv Detail & Related papers (2021-09-27T23:42:12Z)
- Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills [93.12417203541948]
We propose the objective of learning a functional understanding of the environment by learning to reach any goal state in a given dataset.
We find that our method can operate on high-dimensional camera images and learn a variety of skills on real robots that generalize to previously unseen scenes and objects.
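A goal-reaching objective like this is commonly trained with hindsight goal relabeling: any state visited later in a trajectory can serve as the goal for the current transition. The sketch below illustrates that relabeling under those assumptions, not the paper's exact algorithm.

```python
import random

def relabeled_transitions(trajectory, num_samples=8, seed=None):
    """trajectory: list of (state, action) pairs in time order.
    Returns (state, action, goal, reward, done) tuples with hindsight goals."""
    rng = random.Random(seed)
    samples = []
    T = len(trajectory)
    for _ in range(num_samples):
        t = rng.randrange(T - 1)
        g = rng.randrange(t + 1, T)  # any future state can be the goal
        state, action = trajectory[t]
        goal = trajectory[g][0]
        reached = (g == t + 1)       # this transition attains the goal
        samples.append((state, action, goal, 1.0 if reached else 0.0, reached))
    return samples

traj = [("s0", "a0"), ("s1", "a1"), ("s2", "a2"), ("s3", None)]
batch = relabeled_transitions(traj, num_samples=4, seed=0)
```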
arXiv Detail & Related papers (2020-10-27T17:57:29Z)
- COG: Connecting New Skills to Past Experience with Offline Reinforcement Learning [78.13740204156858]
We show that prior data can be reused to extend to new skills simply through dynamic programming.
We demonstrate the effectiveness of our approach by chaining together several behaviors seen in prior datasets for solving a new task.
We train our policies in an end-to-end fashion, mapping high-dimensional image observations to low-level robot control commands.
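The chaining idea can be seen in a tabular toy: pool unrewarded prior transitions with rewarded new-task transitions and run Bellman backups, letting value from the new task's reward flow backward into behaviors seen only in prior data. This is illustrative only; the method itself operates on high-dimensional images with deep offline RL.

```python
from collections import defaultdict

def q_iteration(transitions, gamma=0.95, iters=200):
    """transitions: list of (s, a, r, s') tuples pooled from all datasets."""
    Q = defaultdict(float)
    actions = {a for _, a, _, _ in transitions}
    for _ in range(iters):
        for s, a, r, s2 in transitions:
            best_next = max((Q[(s2, a2)] for a2 in actions), default=0.0)
            Q[(s, a)] = r + gamma * best_next  # Bellman backup
    return Q

# Prior data: opening a drawer (no task reward). New data: grasping from
# the open drawer (rewarded). Backups chain the two into one behavior.
prior = [("closed", "open_drawer", 0.0, "open")]
new = [("open", "grasp", 1.0, "done")]
Q = q_iteration(prior + new)
assert Q[("closed", "open_drawer")] > 0  # value propagated into prior data
```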
arXiv Detail & Related papers (2020-10-27T17:57:29Z)