BricksRL: A Platform for Democratizing Robotics and Reinforcement Learning Research and Education with LEGO
- URL: http://arxiv.org/abs/2406.17490v1
- Date: Tue, 25 Jun 2024 12:17:44 GMT
- Title: BricksRL: A Platform for Democratizing Robotics and Reinforcement Learning Research and Education with LEGO
- Authors: Sebastian Dittert, Vincent Moens, Gianni De Fabritiis
- Abstract summary: We present BricksRL, a platform designed to democratize access to robotics for reinforcement learning research and education.
BricksRL facilitates the creation, design, and training of custom LEGO robots in the real world by interfacing them with the TorchRL library for reinforcement learning agents.
- Score: 5.052293146674793
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present BricksRL, a platform designed to democratize access to robotics for reinforcement learning research and education. BricksRL facilitates the creation, design, and training of custom LEGO robots in the real world by interfacing them with the TorchRL library for reinforcement learning agents. The integration of TorchRL with the LEGO hubs, via Bluetooth bidirectional communication, enables state-of-the-art reinforcement learning training on GPUs for a wide variety of LEGO builds. This offers a flexible and cost-efficient approach for scaling and also provides a robust infrastructure for robot-environment-algorithm communication. We present various experiments across tasks and robot configurations, providing built plans and training results. Furthermore, we demonstrate that inexpensive LEGO robots can be trained end-to-end in the real world to achieve simple tasks, with training times typically under 120 minutes on a normal laptop. Moreover, we show how users can extend the capabilities, exemplified by the successful integration of non-LEGO sensors. By enhancing accessibility to both robotics and reinforcement learning, BricksRL establishes a strong foundation for democratized robotic learning in research and educational settings.
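The abstract's core technical claim is that a physical LEGO robot can be driven like any other TorchRL environment, with actions sent to the hub and sensor readings returned over Bluetooth. The sketch below illustrates that interface pattern under stated assumptions: `LegoHubEnv`, the `hub_client` transport, and all observation/action shapes are hypothetical placeholders for illustration, not BricksRL's actual classes; only the `EnvBase` subclassing pattern and spec names come from TorchRL's public API.

```python
# Illustrative sketch only: how a physical LEGO robot could be exposed to
# TorchRL as a custom environment. `hub_client` stands in for a hypothetical
# Bluetooth transport to the LEGO hub; it is not part of BricksRL or TorchRL.
import torch
from tensordict import TensorDict
from torchrl.data import BoundedTensorSpec, CompositeSpec, UnboundedContinuousTensorSpec
from torchrl.envs import EnvBase


class LegoHubEnv(EnvBase):
    """Minimal real-robot environment: actions go out over Bluetooth,
    sensor readings come back as observations."""

    def __init__(self, hub_client, device="cpu"):
        super().__init__(device=device, batch_size=[])
        self.hub = hub_client  # assumed to expose reset() and step(action)
        self.observation_spec = CompositeSpec(
            observation=UnboundedContinuousTensorSpec(shape=(4,))  # e.g. motor angles, distance
        )
        self.action_spec = BoundedTensorSpec(-1.0, 1.0, shape=(2,))  # e.g. two motor commands
        self.reward_spec = UnboundedContinuousTensorSpec(shape=(1,))

    def _reset(self, tensordict=None):
        obs = self.hub.reset()  # move the robot to a start pose, read sensors
        return TensorDict(
            {"observation": torch.as_tensor(obs, dtype=torch.float32)}, batch_size=[]
        )

    def _step(self, tensordict):
        # Send the action to the hub and read back (observation, reward, done).
        obs, reward, done = self.hub.step(tensordict["action"].cpu().numpy())
        return TensorDict(
            {
                "observation": torch.as_tensor(obs, dtype=torch.float32),
                "reward": torch.tensor([reward], dtype=torch.float32),
                "done": torch.tensor([done], dtype=torch.bool),
            },
            batch_size=[],
        )

    def _set_seed(self, seed):
        torch.manual_seed(seed)


# Usage sketch (hub_client is hypothetical):
# env = LegoHubEnv(hub_client)
# rollout = env.rollout(max_steps=50)  # random-policy rollout on the real robot
```

An environment written this way can be plugged into standard TorchRL collectors and losses, which is what makes GPU-side training of the agent independent of the robot hardware on the other end of the Bluetooth link.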
Related papers
- $π_0$: A Vision-Language-Action Flow Model for General Robot Control [77.32743739202543]
We propose a novel flow matching architecture built on top of a pre-trained vision-language model (VLM) to inherit Internet-scale semantic knowledge.
We evaluate our model in terms of its ability to perform tasks zero-shot after pre-training, to follow language instructions from people, and to acquire new skills via fine-tuning.
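As a point of reference for the flow-matching objective named above, here is a generic conditional flow-matching loss in PyTorch; it is a textbook-style illustration, not the paper's architecture, and `policy_net`, `context`, and the shapes are assumptions.

```python
# Generic conditional flow-matching loss for action generation (illustrative;
# not the paper's implementation). `policy_net(x_t, t, context)` is assumed to
# predict a velocity over an action vector given conditioning features `context`.
import torch


def flow_matching_loss(policy_net, context, actions):
    """actions: (batch, action_dim) target actions; context: conditioning features."""
    noise = torch.randn_like(actions)                            # x_0 ~ N(0, I)
    t = torch.rand(actions.shape[0], 1, device=actions.device)   # t ~ U(0, 1)
    x_t = (1 - t) * noise + t * actions                          # point on the straight path
    target_velocity = actions - noise                            # d/dt of that path
    pred_velocity = policy_net(x_t, t, context)
    return ((pred_velocity - target_velocity) ** 2).mean()
```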
arXiv Detail & Related papers (2024-10-31T17:22:30Z)
- Solving Multi-Goal Robotic Tasks with Decision Transformer [0.0]
We introduce a novel adaptation of the decision transformer architecture for offline multi-goal reinforcement learning in robotics.
Our approach integrates goal-specific information into the decision transformer, allowing it to handle complex tasks in an offline setting.
arXiv Detail & Related papers (2024-10-08T20:35:30Z)
- AutoRT: Embodied Foundation Models for Large Scale Orchestration of Robotic Agents [109.3804962220498]
AutoRT is a system to scale up the deployment of operational robots in completely unseen scenarios with minimal human supervision.
We demonstrate AutoRT proposing instructions to over 20 robots across multiple buildings and collecting 77k real robot episodes via both teleoperation and autonomous robot policies.
We experimentally show that such "in-the-wild" data collected by AutoRT is significantly more diverse, and that AutoRT's use of LLMs enables instruction-following data collection with robots that can be aligned to human preferences.
arXiv Detail & Related papers (2024-01-23T18:45:54Z)
- GRID: A Platform for General Robot Intelligence Development [22.031523876249484]
We present a new platform for General Robot Intelligence Development (GRID)
The platform enables robots to learn, compose and adapt skills to their physical capabilities, environmental constraints and goals.
GRID is designed from the ground up to accommodate new types of robots, vehicles, hardware platforms and software protocols.
arXiv Detail & Related papers (2023-10-02T04:09:27Z)
- A Lightweight and Transferable Design for Robust LEGO Manipulation [10.982854061044339]
This paper investigates safe and efficient robotic Lego manipulation.
An end-of-arm tool (EOAT) is designed, which reduces the problem dimension and allows large industrial robots to manipulate small Lego bricks.
Experiments demonstrate that the EOAT can reliably manipulate Lego bricks and the learning framework can effectively and safely improve the manipulation performance to a 100% success rate.
arXiv Detail & Related papers (2023-09-05T16:11:37Z)
- Stabilizing Contrastive RL: Techniques for Robotic Goal Reaching from Offline Data [101.43350024175157]
Self-supervised learning has the potential to decrease the amount of human annotation and engineering effort required to learn control strategies.
Our work builds on prior work showing that reinforcement learning (RL) itself can be cast as a self-supervised problem.
We demonstrate that a self-supervised RL algorithm based on contrastive learning can solve real-world, image-based robotic manipulation tasks.
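A minimal sketch of the contrastive formulation referenced above, assuming a critic that scores (state, action) embeddings against future/goal-state embeddings; the encoder names and the batch-level cross-entropy variant are illustrative, not the paper's exact objective.

```python
# Illustrative contrastive critic loss for goal-reaching RL (not the paper's code).
# sa_encoder embeds (state, action); goal_encoder embeds the reached future/goal state.
import torch
import torch.nn.functional as F


def contrastive_critic_loss(sa_encoder, goal_encoder, states, actions, future_states):
    sa = sa_encoder(torch.cat([states, actions], dim=-1))  # (B, d)
    g = goal_encoder(future_states)                        # (B, d)
    logits = sa @ g.t()                                    # (B, B) similarity matrix
    # Row i's positive is column i (its own future); all other futures are negatives.
    labels = torch.arange(logits.shape[0], device=logits.device)
    return F.cross_entropy(logits, labels)
```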
arXiv Detail & Related papers (2023-06-06T01:36:56Z)
- Accelerating Robotic Reinforcement Learning via Parameterized Action Primitives [92.0321404272942]
Reinforcement learning can be used to build general-purpose robotic systems.
However, training RL agents to solve robotics tasks still remains challenging.
In this work, we manually specify a library of robot action primitives (RAPS), parameterized with arguments that are learned by an RL policy.
We find that our simple change to the action interface substantially improves both the learning efficiency and task performance.
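To make the action-interface idea concrete, here is a schematic parameterized-primitive wrapper: the policy outputs a primitive index plus continuous arguments, and the wrapper expands them into a short sequence of low-level controls. The class, example primitive, and shapes are assumptions for illustration, not the RAPS library itself.

```python
# Schematic parameterized-primitive action interface (illustrative; not the RAPS code).
# The policy picks a primitive id and continuous arguments; the wrapper expands the
# choice into several low-level control steps on the underlying environment.
import numpy as np


class PrimitiveActionWrapper:
    def __init__(self, env, primitives):
        self.env = env                # low-level env: env.step(control) -> (obs, reward, done, info)
        self.primitives = primitives  # list of functions: args -> sequence of low-level controls

    def step(self, primitive_id, args):
        total_reward, obs, done, info = 0.0, None, False, {}
        for control in self.primitives[primitive_id](args):  # e.g. "lift", "push", "grasp"
            obs, reward, done, info = self.env.step(control)
            total_reward += reward
            if done:
                break
        return obs, total_reward, done, info


# Example primitive: move the end-effector by a learned (dx, dy, dz) over 5 small steps.
def move_delta(args, n_steps=5):
    return [np.asarray(args) / n_steps for _ in range(n_steps)]
```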
arXiv Detail & Related papers (2021-10-28T17:59:30Z)
- SurRoL: An Open-source Reinforcement Learning Centered and dVRK Compatible Platform for Surgical Robot Learning [78.76052604441519]
SurRoL is an RL-centered simulation platform for surgical robot learning that is compatible with the da Vinci Research Kit (dVRK).
Ten learning-based surgical tasks that are common in real autonomous surgical execution are built into the platform.
We evaluate SurRoL using RL algorithms in simulation, provide in-depth analysis, deploy the trained policies on the real dVRK, and show that SurRoL achieves better transferability in the real world.
arXiv Detail & Related papers (2021-08-30T07:43:47Z)
- RL STaR Platform: Reinforcement Learning for Simulation based Training of Robots [3.249853429482705]
Reinforcement learning (RL) is a promising approach to enhancing robotic autonomy and decision-making capabilities for space robotics.
This paper introduces the RL STaR platform and demonstrates how researchers can use it.
arXiv Detail & Related papers (2020-09-21T03:09:53Z)
- robo-gym -- An Open Source Toolkit for Distributed Deep Reinforcement Learning on Real and Simulated Robots [0.5161531917413708]
We propose robo-gym, an open-source toolkit to increase the use of deep reinforcement learning with real robots.
We demonstrate a unified setup for simulation and real environments which enables a seamless transfer from training in simulation to application on the robot.
We showcase the capabilities and the effectiveness of the framework with two real world applications featuring industrial robots.
arXiv Detail & Related papers (2020-07-06T13:51:33Z)
- Learning to Walk in the Real World with Minimal Human Effort [80.7342153519654]
We develop a system for learning legged locomotion policies with deep RL in the real world with minimal human effort.
Our system can automatically and efficiently learn locomotion skills on a Minitaur robot with little human intervention.
arXiv Detail & Related papers (2020-02-20T03:36:39Z)