A Walk in the Park: Learning to Walk in 20 Minutes With Model-Free
Reinforcement Learning
- URL: http://arxiv.org/abs/2208.07860v1
- Date: Tue, 16 Aug 2022 17:37:36 GMT
- Title: A Walk in the Park: Learning to Walk in 20 Minutes With Model-Free
Reinforcement Learning
- Authors: Laura Smith, Ilya Kostrikov, Sergey Levine
- Abstract summary: Deep reinforcement learning is a promising approach to learning policies in uncontrolled environments.
Recent advancements in machine learning algorithms and libraries combined with a carefully tuned robot controller lead to learning quadruped locomotion in only 20 minutes in the real world.
- Score: 86.06110576808824
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep reinforcement learning is a promising approach to learning policies in
uncontrolled environments that do not require domain knowledge. Unfortunately,
due to sample inefficiency, deep RL applications have primarily focused on
simulated environments. In this work, we demonstrate that the recent
advancements in machine learning algorithms and libraries combined with a
carefully tuned robot controller lead to learning quadruped locomotion in only
20 minutes in the real world. We evaluate our approach on several indoor and
outdoor terrains which are known to be challenging for classical model-based
controllers. We observe the robot to be able to learn walking gait consistently
on all of these terrains. Finally, we evaluate our design decisions in a
simulated environment.
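As a rough illustration of what such sample-efficient, model-free learning can look like in practice, the sketch below shows an off-policy training loop that performs many gradient updates per real-world step. The ReplayBuffer is concrete, while `env` and `agent` are hypothetical placeholder interfaces (a SAC-style actor-critic is assumed); this is not the authors' released implementation.

```python
# Minimal sketch of a sample-efficient, model-free real-world training loop.
# `env` and `agent` are assumed placeholder interfaces, not the paper's code.
import numpy as np


class ReplayBuffer:
    """Fixed-size FIFO buffer of real-world transitions."""

    def __init__(self, capacity, obs_dim, act_dim):
        self.obs = np.zeros((capacity, obs_dim), dtype=np.float32)
        self.act = np.zeros((capacity, act_dim), dtype=np.float32)
        self.rew = np.zeros(capacity, dtype=np.float32)
        self.next_obs = np.zeros((capacity, obs_dim), dtype=np.float32)
        self.done = np.zeros(capacity, dtype=np.float32)
        self.capacity, self.ptr, self.size = capacity, 0, 0

    def add(self, o, a, r, o2, d):
        i = self.ptr
        self.obs[i], self.act[i], self.rew[i] = o, a, r
        self.next_obs[i], self.done[i] = o2, d
        self.ptr = (self.ptr + 1) % self.capacity
        self.size = min(self.size + 1, self.capacity)

    def sample(self, batch_size):
        idx = np.random.randint(0, self.size, size=batch_size)
        return (self.obs[idx], self.act[idx], self.rew[idx],
                self.next_obs[idx], self.done[idx])


def train(env, agent, total_steps=20_000, warmup_steps=1_000,
          updates_per_step=20, batch_size=256):
    """Off-policy training that reuses each real-world transition many times.

    `env` is assumed to expose obs_dim, act_dim, reset(), sample_action(), and
    step(action) -> (obs, reward, done); `agent` exposes act(obs) and
    update(batch), e.g. a SAC-style actor-critic.
    """
    buffer = ReplayBuffer(total_steps, env.obs_dim, env.act_dim)
    obs = env.reset()
    for t in range(total_steps):
        # Random exploration during warmup, then the learned policy.
        action = env.sample_action() if t < warmup_steps else agent.act(obs)
        next_obs, reward, done = env.step(action)
        buffer.add(obs, action, reward, next_obs, float(done))
        obs = env.reset() if done else next_obs
        # High update-to-data ratio: many gradient steps per collected
        # transition keeps wall-clock training time short when slow
        # real-world data collection is the bottleneck.
        if buffer.size >= batch_size:
            for _ in range(updates_per_step):
                agent.update(buffer.sample(batch_size))
```

The design choice this sketch highlights is the update-to-data ratio: with tens of gradient updates per collected transition, as in recent regularized actor-critic methods, a few tens of minutes of robot interaction can supply enough data to learn a gait.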
Related papers
- Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z)
- Adaptive Mobile Manipulation for Articulated Objects In the Open World [37.34288363863099]
We introduce an Open-World Mobile Manipulation System to tackle realistic articulated object operation.
Through online adaptation, the system increases the success rate from 50% after behavior cloning (BC) pre-training to 95%.
arXiv Detail & Related papers (2024-01-25T18:59:44Z)
- Learning and Adapting Agile Locomotion Skills by Transferring Experience [71.8926510772552]
We propose a framework for training complex robotic skills by transferring experience from existing controllers to jumpstart learning new tasks.
We show that our method enables learning complex agile jumping behaviors, navigating to goal locations while walking on hind legs, and adapting to new environments.
arXiv Detail & Related papers (2023-04-19T17:37:54Z)
- Real-World Humanoid Locomotion with Reinforcement Learning [92.85934954371099]
We present a fully learning-based approach for real-world humanoid locomotion.
Our controller can walk over various outdoor terrains, is robust to external disturbances, and can adapt in context.
arXiv Detail & Related papers (2023-03-06T18:59:09Z)
- Discrete Control in Real-World Driving Environments using Deep Reinforcement Learning [2.467408627377504]
We introduce a framework (perception, planning, and control) for real-world driving that transfers real-world environments into gaming environments.
We propose variations of existing Reinforcement Learning (RL) algorithms in a multi-agent setting to learn and execute discrete control in real-world environments.
arXiv Detail & Related papers (2022-11-29T04:24:03Z)
- Learning Perception-Aware Agile Flight in Cluttered Environments [38.59659342532348]
We propose a method to learn neural network policies that achieve perception-aware, minimum-time flight in cluttered environments.
Our approach tightly couples perception and control, showing a significant advantage in computation speed (10x faster) and success rate.
We demonstrate the closed-loop control performance using a physical quadrotor and hardware-in-the-loop simulation at speeds up to 50 km/h.
arXiv Detail & Related papers (2022-10-04T18:18:58Z)
- Learning to Walk by Steering: Perceptive Quadrupedal Locomotion in Dynamic Environments [25.366480092589022]
A quadrupedal robot must exhibit robust and agile walking behaviors in response to environmental clutter and moving obstacles.
We present a hierarchical learning framework, named PRELUDE, which decomposes perceptive locomotion into high-level decision-making and low-level gait generation.
We demonstrate the effectiveness of our approach in simulation and with hardware experiments.
arXiv Detail & Related papers (2022-09-19T17:55:07Z)
- Learning Agile Robotic Locomotion Skills by Imitating Animals [72.36395376558984]
Reproducing the diverse and agile locomotion skills of animals has been a longstanding challenge in robotics.
We present an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals.
arXiv Detail & Related papers (2020-04-02T02:56:16Z)
- Learning to Walk in the Real World with Minimal Human Effort [80.7342153519654]
We develop a system for learning legged locomotion policies with deep RL in the real world with minimal human effort.
Our system can automatically and efficiently learn locomotion skills on a Minitaur robot with little human intervention.
arXiv Detail & Related papers (2020-02-20T03:36:39Z)