RMA: Rapid Motor Adaptation for Legged Robots
- URL: http://arxiv.org/abs/2107.04034v1
- Date: Thu, 8 Jul 2021 17:59:59 GMT
- Title: RMA: Rapid Motor Adaptation for Legged Robots
- Authors: Ashish Kumar, Zipeng Fu, Deepak Pathak, Jitendra Malik
- Abstract summary: This paper presents the Rapid Motor Adaptation (RMA) algorithm to solve the problem of real-time online adaptation in quadruped robots.
RMA is trained completely in simulation without using any domain knowledge like reference trajectories or predefined foot trajectory generators.
We train RMA on a varied terrain generator using bioenergetics-inspired rewards and deploy it on a variety of difficult terrains.
- Score: 71.61319876928009
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Successful real-world deployment of legged robots would require them to adapt
in real time to unseen scenarios like changing terrains, changing payloads, and
wear and tear. This paper presents the Rapid Motor Adaptation (RMA) algorithm to
solve this problem of real-time online adaptation in quadruped robots. RMA
consists of two components: a base policy and an adaptation module. The
combination of these components enables the robot to adapt to novel situations
in fractions of a second. RMA is trained completely in simulation without using
any domain knowledge like reference trajectories or predefined foot trajectory
generators and is deployed on the A1 robot without any fine-tuning. We train
RMA on a varied terrain generator using bioenergetics-inspired rewards and
deploy it on a variety of difficult terrains including rocky, slippery,
deformable surfaces in environments with grass, long vegetation, concrete,
pebbles, stairs, sand, etc. RMA shows state-of-the-art performance across
diverse real-world as well as simulation experiments. Video results at
https://ashish-kmr.github.io/rma-legged-robots/
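The abstract's two-component design, a base policy that consumes an extrinsics vector and an adaptation module that estimates that vector online from recent states and actions, can be sketched roughly as below. This is a minimal illustration, not the paper's exact architecture: the layer sizes, dimensions, and the 50-step history length are assumptions.

```python
# Minimal sketch of RMA's two components (PyTorch). All dimensions, layer
# sizes, and the 50-step history length are illustrative assumptions.
import torch
import torch.nn as nn

class BasePolicy(nn.Module):
    """Maps the current state plus an extrinsics vector z to joint targets."""
    def __init__(self, state_dim=30, extrinsics_dim=8, action_dim=12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + extrinsics_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, action_dim),
        )

    def forward(self, state, z):
        return self.net(torch.cat([state, z], dim=-1))

class AdaptationModule(nn.Module):
    """Estimates z from a short history of states and actions, so no
    privileged, simulation-only information is needed at deployment."""
    def __init__(self, state_dim=30, action_dim=12, extrinsics_dim=8, history_len=50):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear((state_dim + action_dim) * history_len, 256), nn.ReLU(),
            nn.Linear(256, extrinsics_dim),
        )

    def forward(self, state_action_history):
        # state_action_history: (batch, history_len, state_dim + action_dim)
        return self.net(state_action_history)

# At deployment the two run together: the adaptation module produces an
# estimate z_hat from recent history, and the base policy consumes it at
# every control step, which is what lets the robot adapt in fractions of a second.
policy, adapter = BasePolicy(), AdaptationModule()
history = torch.zeros(1, 50, 42)   # 42 = state_dim + action_dim here
state = torch.zeros(1, 30)
z_hat = adapter(history)
action = policy(state, z_hat)
```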
Related papers
- RoboScript: Code Generation for Free-Form Manipulation Tasks across Real and Simulation [77.41969287400977]
This paper presents RobotScript, a platform for a deployable robot manipulation pipeline powered by code generation.
We also present a benchmark for code generation for robot manipulation tasks specified in free-form natural language.
We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms.
arXiv Detail & Related papers (2024-02-22T15:12:00Z)
- Dynamic Handover: Throw and Catch with Bimanual Hands [30.206469112964033]
We design a system with two multi-finger hands attached to robot arms to solve this problem.
We train our system using Multi-Agent Reinforcement Learning in simulation and perform Sim2Real transfer to deploy on the real robots.
To overcome the Sim2Real gap, we provide multiple novel algorithm designs including learning a trajectory prediction model for the object.
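The summary's trajectory prediction model for the thrown object suggests a small supervised regressor; a generic sketch (not the paper's actual design, with assumed window length, horizon, and feature sizes) might look like this:

```python
# Generic sketch of an object trajectory predictor: map a short window of
# past object positions to a few future positions. Window length, horizon,
# and layer sizes are assumptions, not the paper's actual model.
import torch
import torch.nn as nn

class ObjectTrajectoryPredictor(nn.Module):
    def __init__(self, obs_dim=3, window=10, horizon=5):
        super().__init__()
        self.horizon = horizon
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(obs_dim * window, 128), nn.ReLU(),
            nn.Linear(128, obs_dim * horizon),
        )

    def forward(self, past_positions):
        # past_positions: (batch, window, 3) -> predicted (batch, horizon, 3)
        out = self.net(past_positions)
        return out.view(-1, self.horizon, past_positions.shape[-1])

# The catching policy could then condition on these predicted waypoints.
predictor = ObjectTrajectoryPredictor()
future = predictor(torch.zeros(1, 10, 3))
```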
arXiv Detail & Related papers (2023-09-11T17:49:25Z)
- HomeRobot: Open-Vocabulary Mobile Manipulation [107.05702777141178]
Open-Vocabulary Mobile Manipulation (OVMM) is the problem of picking any object in any unseen environment, and placing it in a commanded location.
HomeRobot has two components: a simulation component, which uses a large and diverse curated object set in new, high-quality multi-room home environments; and a real-world component, providing a software stack for the low-cost Hello Robot Stretch.
arXiv Detail & Related papers (2023-06-20T14:30:32Z)
- Adapting Rapid Motor Adaptation for Bipedal Robots [73.5914982741483]
We leverage recent advances in rapid adaptation for locomotion control, and extend them to work on bipedal robots.
A-RMA adapts the base policy to the imperfect extrinsics estimator by fine-tuning it with model-free RL.
We demonstrate that A-RMA outperforms a number of RL-based baseline controllers and model-based controllers in simulation.
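The A-RMA recipe described above, fine-tuning only the base policy while the imperfect extrinsics estimator stays fixed, can be outlined as below, reusing the BasePolicy/AdaptationModule shapes sketched earlier. The environment interface and the `ppo_update` helper are hypothetical placeholders, not the paper's code.

```python
# Schematic of the A-RMA fine-tuning stage: freeze the adaptation module and
# update only the base policy with model-free RL (e.g. PPO) while it consumes
# the module's imperfect extrinsics estimate. `env`, `ppo_update`, and the
# rollout bookkeeping are hypothetical placeholders.
def finetune_base_policy(policy, adapter, env, ppo_update, num_iters=1000):
    for p in adapter.parameters():
        p.requires_grad_(False)            # keep the estimator fixed
    for _ in range(num_iters):
        rollout = []
        state, history = env.reset()
        done = False
        while not done:
            z_hat = adapter(history)       # imperfect extrinsics estimate
            action = policy(state, z_hat)
            (state, history), reward, done = env.step(action)
            rollout.append((state, z_hat, action, reward))
        ppo_update(policy, rollout)        # model-free RL step on the base policy only
    return policy
```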
arXiv Detail & Related papers (2022-05-30T17:59:09Z)
- MetaMorph: Learning Universal Controllers with Transformers [45.478223199658785]
In robotics, we primarily train a single robot for a single task.
However, modular robot systems now allow for the flexible combination of general-purpose building blocks into task-optimized morphologies.
We propose MetaMorph, a Transformer-based approach to learning a universal controller over a modular robot design space.
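MetaMorph's universal-controller idea amounts to treating each module of a robot as a token and sharing one Transformer across morphologies; a rough sketch under assumed token features and sizes (not the paper's actual model) is:

```python
# Rough sketch of a Transformer controller over a modular design space: one
# token per robot module, a shared encoder, and a per-module action head.
# Token features, model sizes, and action dimensions are assumptions.
import torch
import torch.nn as nn

class MorphologyTransformerPolicy(nn.Module):
    def __init__(self, token_dim=32, d_model=128, n_heads=4, n_layers=3, action_dim=2):
        super().__init__()
        self.embed = nn.Linear(token_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, action_dim)   # action per module

    def forward(self, module_tokens, padding_mask=None):
        # module_tokens: (batch, num_modules, token_dim); num_modules may vary
        x = self.encoder(self.embed(module_tokens), src_key_padding_mask=padding_mask)
        return self.head(x)

policy = MorphologyTransformerPolicy()
actions = policy(torch.zeros(1, 7, 32))   # e.g. a robot built from 7 modules
```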
arXiv Detail & Related papers (2022-03-22T17:58:31Z)
- REvolveR: Continuous Evolutionary Models for Robot-to-robot Policy Transfer [57.045140028275036]
We consider the problem of transferring a policy across two different robots with significantly different parameters such as kinematics and morphology.
Existing approaches that train a new policy by matching the action or state transition distribution, including imitation learning methods, fail because the optimal action and/or state distributions are mismatched between the two robots.
We propose REvolveR, a novel method that uses continuous evolutionary models for robot policy transfer, implemented in a physics simulator.
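The continuous-evolution idea can be outlined as an interpolate-and-fine-tune loop over intermediate robots in the simulator; the helpers below (`blend_robot_params`, `make_env`, `finetune`) are hypothetical placeholders rather than the paper's implementation.

```python
# Schematic of robot-to-robot policy transfer via continuously evolving
# intermediate robots: blend the physical parameters from source to target
# and keep fine-tuning the policy on each intermediate robot.
def evolve_policy(policy, source_params, target_params, make_env,
                  blend_robot_params, finetune, num_steps=100):
    for i in range(1, num_steps + 1):
        alpha = i / num_steps                               # 0 -> 1 schedule
        params = blend_robot_params(source_params, target_params, alpha)
        env = make_env(params)                              # intermediate robot
        policy = finetune(policy, env)                      # short RL fine-tune
    return policy                                           # now matched to the target robot
```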
arXiv Detail & Related papers (2022-02-10T18:50:25Z)
- Kimera-Multi: a System for Distributed Multi-Robot Metric-Semantic Simultaneous Localization and Mapping [57.173793973480656]
We present the first fully distributed multi-robot system for dense metric-semantic SLAM.
Our system, dubbed Kimera-Multi, is implemented by a team of robots equipped with visual-inertial sensors.
Kimera-Multi builds a 3D mesh model of the environment in real-time, where each face of the mesh is annotated with a semantic label.
arXiv Detail & Related papers (2020-11-08T21:38:12Z)
- robo-gym -- An Open Source Toolkit for Distributed Deep Reinforcement Learning on Real and Simulated Robots [0.5161531917413708]
We propose robo-gym, an open-source toolkit to increase the use of Deep Reinforcement Learning with real robots.
We demonstrate a unified setup for simulation and real environments which enables a seamless transfer from training in simulation to application on the robot.
We showcase the capabilities and the effectiveness of the framework with two real world applications featuring industrial robots.
arXiv Detail & Related papers (2020-07-06T13:51:33Z)
- Smooth Exploration for Robotic Reinforcement Learning [11.215352918313577]
Reinforcement learning (RL) enables robots to learn skills from interactions with the real world.
In practice, the unstructured step-based exploration used in Deep RL leads to jerky motion patterns on real robots.
We address these issues by adapting state-dependent exploration (SDE) to current Deep RL algorithms.
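State-dependent exploration replaces per-step white noise with noise that is a deterministic function of the state: an exploration matrix is sampled occasionally (e.g. once per episode or every n steps), and the perturbation is that matrix applied to state features, so the commanded motion stays smooth. A minimal NumPy sketch, with assumed dimensions and resampling interval:

```python
import numpy as np

class StateDependentExploration:
    """Minimal sketch of SDE: sample an exploration matrix occasionally and
    derive the action noise deterministically from the state features, so the
    perturbation varies smoothly with the state instead of jittering per step.
    Dimensions and the resampling interval are illustrative assumptions."""
    def __init__(self, feature_dim, action_dim, noise_std=0.1, resample_every=None):
        self.shape = (feature_dim, action_dim)
        self.noise_std = noise_std
        # If resample_every is None, call resample() yourself at each episode start.
        self.resample_every = resample_every
        self.steps = 0
        self.resample()

    def resample(self):
        self.theta_eps = np.random.normal(0.0, self.noise_std, size=self.shape)

    def noise(self, state_features):
        if self.resample_every and self.steps % self.resample_every == 0:
            self.resample()
        self.steps += 1
        return state_features @ self.theta_eps   # epsilon_t = phi(s_t)^T theta_eps

# Usage in a rollout: action = policy_mean(s) + sde.noise(features(s))
```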
arXiv Detail & Related papers (2020-05-12T12:28:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.