Unity RL Playground: A Versatile Reinforcement Learning Framework for Mobile Robots
- URL: http://arxiv.org/abs/2503.05146v1
- Date: Fri, 07 Mar 2025 05:08:23 GMT
- Title: Unity RL Playground: A Versatile Reinforcement Learning Framework for Mobile Robots
- Authors: Linqi Ye, Rankun Li, Xiaowen Hu, Jiayi Li, Boyang Xing, Yan Peng, Bin Liang
- Abstract summary: This paper introduces Unity RL Playground, an open-source reinforcement learning framework built on top of Unity ML-Agents. Unity RL Playground automates the process of training mobile robots to perform various locomotion tasks. Key features include one-click training for imported robot models, universal compatibility with diverse robot configurations, and multi-mode motion learning capabilities.
- Score: 9.924002744810506
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces Unity RL Playground, an open-source reinforcement learning framework built on top of Unity ML-Agents. Unity RL Playground automates the process of training mobile robots to perform various locomotion tasks such as walking, running, and jumping in simulation, with the potential for seamless transfer to real hardware. Key features include one-click training for imported robot models, universal compatibility with diverse robot configurations, multi-mode motion learning capabilities, and extreme performance testing to aid in robot design optimization and morphological evolution. The attached video can be found at https://linqi-ye.github.io/video/iros25.mp4 and the code is coming soon.
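Since the paper's own code is not yet released, the snippet below is only a minimal, illustrative sketch of the Unity ML-Agents Python interface (mlagents_envs) that such a framework builds on: it connects to an exported Unity build, reads the robot's behavior spec, and steps the simulation. The build name "UnityRLPlayground" and the random actions standing in for a trained locomotion policy are placeholder assumptions, not part of the paper.

```python
# Illustrative only: a bare-bones loop over a Unity ML-Agents build using the
# official mlagents_envs low-level API. The build name and the random actions
# are placeholders; the actual Unity RL Playground code has not been released.
import numpy as np
from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.base_env import ActionTuple

# Path to an exported Unity build (placeholder name).
env = UnityEnvironment(file_name="UnityRLPlayground")
env.reset()

# Each trainable robot in the scene is exposed as a "behavior".
behavior_name = list(env.behavior_specs.keys())[0]
spec = env.behavior_specs[behavior_name]

for _ in range(100):
    decision_steps, _terminal_steps = env.get_steps(behavior_name)
    # Random joint commands stand in for a trained locomotion policy.
    actions = ActionTuple(
        continuous=np.random.uniform(
            -1.0, 1.0,
            size=(len(decision_steps), spec.action_spec.continuous_size),
        )
    )
    env.set_actions(behavior_name, actions)
    env.step()

env.close()
```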
Related papers
- GR00T N1: An Open Foundation Model for Generalist Humanoid Robots [133.23509142762356]
General-purpose robots need a versatile body and an intelligent mind.
Recent advancements in humanoid robots have shown great promise as a hardware platform for building generalist autonomy.
We introduce GR00T N1, an open foundation model for humanoid robots.
arXiv Detail & Related papers (2025-03-18T21:06:21Z) - VidBot: Learning Generalizable 3D Actions from In-the-Wild 2D Human Videos for Zero-Shot Robotic Manipulation [53.63540587160549]
VidBot is a framework that enables zero-shot robotic manipulation using 3D affordances learned from in-the-wild, monocular, RGB-only human videos.
VidBot paves the way for leveraging everyday human videos to make robot learning more scalable.
arXiv Detail & Related papers (2025-03-10T10:04:58Z) - One Policy to Run Them All: an End-to-end Learning Approach to Multi-Embodiment Locomotion [18.556470359899855]
We introduce URMA, the Unified Robot Morphology Architecture.
Our framework brings the end-to-end Multi-Task Reinforcement Learning approach to the realm of legged robots.
We show that URMA can learn a locomotion policy on multiple embodiments that can be easily transferred to unseen robot platforms.
arXiv Detail & Related papers (2024-09-10T09:44:15Z) - Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z) - Generalized Animal Imitator: Agile Locomotion with Versatile Motion Prior [14.114972332185044]
This paper introduces the Versatile Motion Prior (VIM), a reinforcement learning framework designed to incorporate a range of agile locomotion tasks.
Our framework enables legged robots to learn diverse agile low-level skills by imitating animal motions and manually designed motions.
Our evaluations of the VIM framework span both simulation environments and real-world deployment.
arXiv Detail & Related papers (2023-10-02T17:59:24Z) - RoboCat: A Self-Improving Generalist Agent for Robotic Manipulation [33.10577695383743]
We propose a multi-embodiment, multi-task generalist agent for robotic manipulation called RoboCat.
Its training data spans a large repertoire of motor control skills from simulated and real robotic arms with varying sets of observations and actions.
With RoboCat, we demonstrate the ability to generalise to new tasks and robots, both zero-shot and through adaptation using only 100-1000 examples.
arXiv Detail & Related papers (2023-06-20T17:35:20Z) - GenLoco: Generalized Locomotion Controllers for Quadrupedal Robots [87.32145104894754]
We introduce a framework for training generalized locomotion (GenLoco) controllers for quadrupedal robots.
Our framework synthesizes general-purpose locomotion controllers that can be deployed on a large variety of quadrupedal robots.
We show that our models acquire more general control strategies that can be directly transferred to novel simulated and real-world robots.
arXiv Detail & Related papers (2022-09-12T15:14:32Z) - Masked World Models for Visual Control [90.13638482124567]
We introduce a visual model-based RL framework that decouples visual representation learning and dynamics learning.
We demonstrate that our approach achieves state-of-the-art performance on a variety of visual robotic tasks.
arXiv Detail & Related papers (2022-06-28T18:42:27Z) - RL STaR Platform: Reinforcement Learning for Simulation based Training of Robots [3.249853429482705]
Reinforcement learning (RL) is a promising approach for enhancing robotic autonomy and decision-making capabilities in space robotics.
This paper introduces the RL STaR platform and shows, through a demonstration, how researchers can use it.
arXiv Detail & Related papers (2020-09-21T03:09:53Z) - robo-gym -- An Open Source Toolkit for Distributed Deep Reinforcement Learning on Real and Simulated Robots [0.5161531917413708]
We propose robo-gym, an open-source toolkit to increase the use of deep reinforcement learning with real robots.
We demonstrate a unified setup for simulation and real environments which enables a seamless transfer from training in simulation to application on the robot.
We showcase the capabilities and the effectiveness of the framework with two real world applications featuring industrial robots.
arXiv Detail & Related papers (2020-07-06T13:51:33Z) - Learning to Walk in the Real World with Minimal Human Effort [80.7342153519654]
We develop a system for learning legged locomotion policies with deep RL in the real world with minimal human effort.
Our system can automatically and efficiently learn locomotion skills on a Minitaur robot with little human intervention.
arXiv Detail & Related papers (2020-02-20T03:36:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.