RMA: Rapid Motor Adaptation for Legged Robots
- URL: http://arxiv.org/abs/2107.04034v1
- Date: Thu, 8 Jul 2021 17:59:59 GMT
- Title: RMA: Rapid Motor Adaptation for Legged Robots
- Authors: Ashish Kumar, Zipeng Fu, Deepak Pathak, Jitendra Malik
- Abstract summary: This paper presents the Rapid Motor Adaptation (RMA) algorithm for real-time online adaptation in quadruped robots.
RMA is trained completely in simulation without using any domain knowledge like reference trajectories or predefined foot trajectory generators.
We train RMA on a varied terrain generator using bioenergetics-inspired rewards and deploy it on a variety of difficult terrains.
- Score: 71.61319876928009
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Successful real-world deployment of legged robots would require them to adapt
in real-time to unseen scenarios like changing terrains, changing payloads, and
wear and tear. This paper presents the Rapid Motor Adaptation (RMA) algorithm to
solve this problem of real-time online adaptation in quadruped robots. RMA
consists of two components: a base policy and an adaptation module. The
combination of these components enables the robot to adapt to novel situations
in fractions of a second. RMA is trained completely in simulation without using
any domain knowledge like reference trajectories or predefined foot trajectory
generators and is deployed on the A1 robot without any fine-tuning. We train
RMA on a varied terrain generator using bioenergetics-inspired rewards and
deploy it on a variety of difficult terrains including rocky, slippery,
deformable surfaces in environments with grass, long vegetation, concrete,
pebbles, stairs, sand, etc. RMA shows state-of-the-art performance across
diverse real-world as well as simulation experiments. Video results at
https://ashish-kmr.github.io/rma-legged-robots/
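The abstract's two-component design (a base policy conditioned on a latent "extrinsics" vector, plus an adaptation module that estimates that vector online from recent state-action history) can be sketched as follows. All layer sizes, dimensions, and names here are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

# Hypothetical dimensions -- the paper's exact sizes differ.
STATE_DIM, ACTION_DIM, EXTRINSICS_DIM, HISTORY_LEN = 42, 12, 8, 50

rng = np.random.default_rng(0)

def make_mlp(sizes):
    """Random weights for a toy MLP; stands in for trained parameters."""
    return [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(x, weights):
    """Tiny MLP: linear layers with tanh activations between them."""
    for i, (w, b) in enumerate(weights):
        x = x @ w + b
        if i < len(weights) - 1:
            x = np.tanh(x)
    return x

# Base policy: pi(action | state, z), where z is a latent extrinsics vector.
base_policy = make_mlp([STATE_DIM + EXTRINSICS_DIM, 64, ACTION_DIM])

# Adaptation module: estimates z from a history of states and actions,
# replacing privileged simulation-only information at deployment time.
adaptation_module = make_mlp(
    [HISTORY_LEN * (STATE_DIM + ACTION_DIM), 64, EXTRINSICS_DIM])

def act(state, history):
    z_hat = mlp_forward(history.ravel(), adaptation_module)
    return mlp_forward(np.concatenate([state, z_hat]), base_policy)

state = rng.normal(size=STATE_DIM)
history = rng.normal(size=(HISTORY_LEN, STATE_DIM + ACTION_DIM))
action = act(state, history)  # action.shape == (12,)
```

Because the adaptation module only needs onboard state-action history, the same sketch works in simulation and on hardware, which is what enables deployment without fine-tuning.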
Related papers
- Sim-to-Real Transfer for Mobile Robots with Reinforcement Learning: from NVIDIA Isaac Sim to Gazebo and Real ROS 2 Robots [1.2773537446441052]
This article focuses on demonstrating the applications of Isaac in local planning and obstacle avoidance.
We benchmark end-to-end policies against Nav2, the state-of-the-art navigation stack in the Robot Operating System (ROS).
We also cover the sim-to-real transfer process by demonstrating zero-shot transferability of policies trained in the Isaac simulator to real-world robots.
arXiv Detail & Related papers (2025-01-06T10:26:16Z)
- The One RING: a Robotic Indoor Navigation Generalist [58.431772508378344]
RING (Robotic Indoor Navigation Generalist) is an embodiment-agnostic policy.
It is trained solely in simulation with diverse randomized embodiments at scale.
It achieves an average of 72.1% and 78.9% success rate across 5 embodiments in simulation and 4 robot platforms in the real world.
arXiv Detail & Related papers (2024-12-18T23:15:41Z)
- RoboScript: Code Generation for Free-Form Manipulation Tasks across Real and Simulation [77.41969287400977]
This paper presents RoboScript, a platform for a deployable robot manipulation pipeline powered by code generation.
We also present a benchmark for code generation for robot manipulation tasks specified in free-form natural language.
We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms.
arXiv Detail & Related papers (2024-02-22T15:12:00Z)
- HomeRobot: Open-Vocabulary Mobile Manipulation [107.05702777141178]
Open-Vocabulary Mobile Manipulation (OVMM) is the problem of picking any object in any unseen environment, and placing it in a commanded location.
HomeRobot has two components: a simulation component, which uses a large and diverse curated object set in new, high-quality multi-room home environments; and a real-world component, providing a software stack for the low-cost Hello Robot Stretch.
arXiv Detail & Related papers (2023-06-20T14:30:32Z)
- Adapting Rapid Motor Adaptation for Bipedal Robots [73.5914982741483]
We leverage recent advances in rapid adaptation for locomotion control, and extend them to work on bipedal robots.
A-RMA adapts the base policy to the imperfect extrinsics estimator by fine-tuning it using model-free RL.
We demonstrate that A-RMA outperforms a number of RL-based baseline controllers and model-based controllers in simulation.
arXiv Detail & Related papers (2022-05-30T17:59:09Z)
- MetaMorph: Learning Universal Controllers with Transformers [45.478223199658785]
In robotics, we primarily train a single robot for a single task.
Modular robot systems now allow flexible combinations of general-purpose building blocks into task-optimized morphologies.
We propose MetaMorph, a Transformer-based approach that learns a universal controller over a modular robot design space.
arXiv Detail & Related papers (2022-03-22T17:58:31Z)
- REvolveR: Continuous Evolutionary Models for Robot-to-robot Policy Transfer [57.045140028275036]
We consider the problem of transferring a policy across two different robots with significantly different parameters such as kinematics and morphology.
Existing approaches that train a new policy by matching the action or state transition distribution, including imitation learning methods, fail because the optimal action and/or state distributions are mismatched across the two robots.
We propose REvolveR, a novel method that uses continuous evolutionary models for robotic policy transfer, implemented in a physics simulator.
arXiv Detail & Related papers (2022-02-10T18:50:25Z)
- robo-gym -- An Open Source Toolkit for Distributed Deep Reinforcement Learning on Real and Simulated Robots [0.5161531917413708]
We propose robo-gym, an open-source toolkit to increase the use of deep reinforcement learning with real robots.
We demonstrate a unified setup for simulation and real environments which enables a seamless transfer from training in simulation to application on the robot.
We showcase the capabilities and the effectiveness of the framework with two real world applications featuring industrial robots.
arXiv Detail & Related papers (2020-07-06T13:51:33Z)
- Smooth Exploration for Robotic Reinforcement Learning [11.215352918313577]
Reinforcement learning (RL) enables robots to learn skills from interactions with the real world.
In practice, the unstructured step-based exploration used in Deep RL leads to jerky motion patterns on real robots.
We address these issues by adapting state-dependent exploration (SDE) to current Deep RL algorithms.
arXiv Detail & Related papers (2020-05-12T12:28:25Z)
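The core idea behind state-dependent exploration, as summarized above, is to make the exploration noise a deterministic function of the state whose parameters are resampled only at episode boundaries, rather than drawing fresh noise every step. A minimal sketch, with a placeholder linear policy and assumed dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM = 4, 2

def deterministic_policy(state):
    # Placeholder linear policy; stands in for any learned mean policy.
    W = np.full((ACTION_DIM, STATE_DIM), 0.1)
    return W @ state

def step_based_action(state):
    # Conventional per-step Gaussian noise: a new sample every step,
    # which is what produces jerky motion on real robots.
    return deterministic_policy(state) + rng.normal(scale=0.3, size=ACTION_DIM)

class SDEExplorer:
    """State-dependent exploration: noise = theta_eps @ state, with
    theta_eps resampled per episode, so noise varies smoothly with state."""

    def __init__(self, noise_scale=0.3):
        self.noise_scale = noise_scale
        self.reset()

    def reset(self):
        # Resampled at episode boundaries, not every step.
        self.theta_eps = rng.normal(scale=self.noise_scale,
                                    size=(ACTION_DIM, STATE_DIM))

    def action(self, state):
        return deterministic_policy(state) + self.theta_eps @ state

explorer = SDEExplorer()
s = np.array([0.5, -0.2, 0.1, 0.0])
a1, a2 = explorer.action(s), explorer.action(s)
# Same state within one episode -> identical exploration offset,
# unlike step_based_action, which returns a different action each call.
```

Within an episode, nearby states receive nearby exploration offsets, which is what yields the smooth motion patterns the paper targets.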
This list is automatically generated from the titles and abstracts of the papers in this site.