Learning to Walk in the Real World with Minimal Human Effort
- URL: http://arxiv.org/abs/2002.08550v3
- Date: Tue, 3 Nov 2020 05:46:51 GMT
- Title: Learning to Walk in the Real World with Minimal Human Effort
- Authors: Sehoon Ha, Peng Xu, Zhenyu Tan, Sergey Levine, Jie Tan
- Abstract summary: We develop a system for learning legged locomotion policies with deep RL in the real world with minimal human effort.
Our system can automatically and efficiently learn locomotion skills on a Minitaur robot with little human intervention.
- Score: 80.7342153519654
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reliable and stable locomotion has been one of the most fundamental
challenges for legged robots. Deep reinforcement learning (deep RL) has emerged
as a promising method for developing such control policies autonomously. In
this paper, we develop a system for learning legged locomotion policies with
deep RL in the real world with minimal human effort. The key difficulties for
on-robot learning systems are automatic data collection and safety. We overcome
these two challenges by developing a multi-task learning procedure and a
safety-constrained RL framework. We tested our system on the task of learning
to walk on three different terrains: flat ground, a soft mattress, and a
doormat with crevices. Our system can automatically and efficiently learn
locomotion skills on a Minitaur robot with little human intervention. The
supplemental video can be found at: https://youtu.be/cwyiq6dCgOc.
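The abstract describes a safety-constrained RL framework but gives no equations here. A common way to realize such a constraint is a Lagrangian relaxation: maximize reward minus a multiplier times a safety cost (e.g., fall rate), and raise the multiplier by dual ascent whenever the cost exceeds a tolerance. The sketch below is a minimal illustration of that generic technique, not the paper's actual algorithm; the cost limit, learning rate, and episode costs are all assumed for illustration.

```python
# Minimal sketch of Lagrangian-style safety-constrained RL (assumed details,
# not the paper's exact method). The agent maximizes a scalarized objective
# while a dual variable lam penalizes safety-cost overruns.

def lagrangian_objective(reward: float, cost: float, lam: float) -> float:
    """Scalarized objective: reward minus multiplier-weighted safety cost."""
    return reward - lam * cost

def update_multiplier(lam: float, cost: float, cost_limit: float,
                      lr: float = 0.1) -> float:
    """Dual ascent: lam grows while cost exceeds the limit, clipped at 0."""
    return max(0.0, lam + lr * (cost - cost_limit))

if __name__ == "__main__":
    lam = 0.0
    cost_limit = 0.1  # assumed tolerance, e.g. falls per episode
    # Hypothetical per-episode safety costs from training rollouts.
    for episode_cost in [0.5, 0.4, 0.2, 0.05]:
        lam = update_multiplier(lam, episode_cost, cost_limit)
        print(f"cost={episode_cost:.2f}  lambda={lam:.3f}")
```

As the cost falls below the limit, the multiplier stops growing, letting the policy trade off reward against safety automatically rather than relying on hand-tuned penalties.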
Related papers
- SoloParkour: Constrained Reinforcement Learning for Visual Locomotion from Privileged Experience [19.817578964184147]
Parkour poses a significant challenge for legged robots, requiring navigation through complex environments with agility and precision based on limited sensory inputs.
We introduce a novel method for training end-to-end visual policies, from depth pixels to robot control commands, to achieve agile and safe quadruped locomotion.
We demonstrate the effectiveness of our method on a real Solo-12 robot, showcasing its capability to perform a variety of parkour skills such as walking, climbing, leaping, and crawling.
arXiv Detail & Related papers (2024-09-20T17:39:20Z) - Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z) - Stabilizing Contrastive RL: Techniques for Robotic Goal Reaching from Offline Data [101.43350024175157]
Self-supervised learning has the potential to decrease the amount of human annotation and engineering effort required to learn control strategies.
Our work builds on prior work showing that reinforcement learning (RL) itself can be cast as a self-supervised problem.
We demonstrate that a self-supervised RL algorithm based on contrastive learning can solve real-world, image-based robotic manipulation tasks.
arXiv Detail & Related papers (2023-06-06T01:36:56Z) - A Walk in the Park: Learning to Walk in 20 Minutes With Model-Free Reinforcement Learning [86.06110576808824]
Deep reinforcement learning is a promising approach to learning policies in uncontrolled environments.
Recent advancements in machine learning algorithms and libraries, combined with a carefully tuned robot controller, enable a quadruped to learn to walk in only 20 minutes in the real world.
arXiv Detail & Related papers (2022-08-16T17:37:36Z) - Learning Bipedal Walking On Planned Footsteps For Humanoid Robots [5.127310126394387]
Deep reinforcement learning (RL) based controllers for legged robots have demonstrated impressive robustness for walking in different environments for several robot platforms.
To enable the application of RL policies for humanoid robots in real-world settings, it is crucial to build a system that can achieve robust walking in any direction.
In this paper, we tackle this problem by learning a policy to follow a given step sequence.
We show that simply feeding the upcoming two footsteps to the policy is sufficient to achieve omnidirectional walking, turning in place, standing, and climbing stairs.
arXiv Detail & Related papers (2022-07-26T04:16:00Z) - Human-to-Robot Imitation in the Wild [50.49660984318492]
We propose an efficient one-shot robot learning algorithm, centered around learning from a third-person perspective.
We show one-shot generalization and success in real-world settings, including 20 different manipulation tasks in the wild.
arXiv Detail & Related papers (2022-07-19T17:59:59Z) - Learning Control Policies for Fall prevention and safety in bipedal locomotion [0.0]
We develop learning-based algorithms capable of synthesizing push recovery control policies for two different kinds of robots.
Our work branches into two closely related directions: 1) learning safe falling and fall prevention strategies for humanoid robots, and 2) learning fall prevention strategies for humans using robotic assistive devices.
arXiv Detail & Related papers (2022-01-04T22:00:21Z) - Towards General and Autonomous Learning of Core Skills: A Case Study in Locomotion [19.285099263193622]
We develop a learning framework that can learn sophisticated locomotion behavior for a wide spectrum of legged robots.
Our learning framework relies on a data-efficient, off-policy multi-task RL algorithm and a small set of reward functions that are semantically identical across robots.
For nine different types of robots, including a real-world quadruped robot, we demonstrate that the same algorithm can rapidly learn diverse and reusable locomotion skills.
arXiv Detail & Related papers (2020-08-06T08:23:55Z) - Learning Agile Robotic Locomotion Skills by Imitating Animals [72.36395376558984]
Reproducing the diverse and agile locomotion skills of animals has been a longstanding challenge in robotics.
We present an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals.
arXiv Detail & Related papers (2020-04-02T02:56:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.