Learning Agile Locomotion via Adversarial Training
- URL: http://arxiv.org/abs/2008.00603v1
- Date: Mon, 3 Aug 2020 01:20:37 GMT
- Title: Learning Agile Locomotion via Adversarial Training
- Authors: Yujin Tang, Jie Tan and Tatsuya Harada
- Abstract summary: In this paper, we present a multi-agent learning system, in which a quadruped robot (protagonist) learns to chase another robot (adversary) while the latter learns to escape.
We find that this adversarial training process not only encourages agile behaviors but also effectively alleviates the laborious environment design effort.
In contrast to prior works that used only one adversary, we find that training an ensemble of adversaries, each of which specializes in a different escaping strategy, is essential for the protagonist to master agility.
- Score: 59.03007947334165
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Developing controllers for agile locomotion is a long-standing challenge for
legged robots. Reinforcement learning (RL) and Evolution Strategy (ES) hold the
promise of automating the design process of such controllers. However,
dedicated and careful human effort is required to design training environments
to promote agility. In this paper, we present a multi-agent learning system, in
which a quadruped robot (protagonist) learns to chase another robot (adversary)
while the latter learns to escape. We find that this adversarial training
process not only encourages agile behaviors but also effectively alleviates the
laborious environment design effort. In contrast to prior works that used only
one adversary, we find that training an ensemble of adversaries, each of which
specializes in a different escaping strategy, is essential for the protagonist
to master agility. Through extensive experiments, we show that the locomotion
controller learned with adversarial training significantly outperforms
carefully designed baselines.
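The training loop the abstract describes (a protagonist chasing while an ensemble of adversaries, each with its own escape strategy, learns to flee) can be sketched as a toy point-mass chase game optimized with a basic Evolution Strategy update. Everything below (the 2-D dynamics, the `rollout` and `es_step` helpers, step sizes, and population settings) is an illustrative assumption for exposition, not the paper's actual robot setup or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(theta_p, theta_a, steps=50):
    """One chase episode; returns the protagonist's reward (negative final gap).
    theta_p: 2x2 matrix mapping relative position -> protagonist velocity.
    theta_a: 2x2 matrix for one adversary's escape policy."""
    p = np.zeros(2)            # protagonist position
    a = np.array([1.0, 1.0])   # adversary starts at an offset
    for _ in range(steps):
        rel = a - p
        p = p + 0.10 * np.tanh(theta_p @ rel)  # chase action, bounded speed
        a = a + 0.08 * np.tanh(theta_a @ rel)  # escape action, slightly slower
    return -np.linalg.norm(a - p)

def es_step(theta, fitness, sigma=0.1, pop=16, lr=0.05):
    """One Evolution Strategy update: perturb the parameters, evaluate,
    then move along the fitness-weighted average of the perturbations."""
    eps = rng.standard_normal((pop,) + theta.shape)
    f = np.array([fitness(theta + sigma * e) for e in eps])
    f = (f - f.mean()) / (f.std() + 1e-8)
    return theta + lr / (pop * sigma) * np.tensordot(f, eps, axes=1)

# Protagonist vs. an ensemble of adversaries, each initialized differently
# so they can specialize in distinct escape strategies.
theta_p = rng.standard_normal((2, 2)) * 0.1
ensemble = [rng.standard_normal((2, 2)) for _ in range(3)]

for _ in range(30):
    # Protagonist maximizes its average reward against the whole ensemble ...
    theta_p = es_step(
        theta_p, lambda th: np.mean([rollout(th, th_a) for th_a in ensemble]))
    # ... while each adversary independently learns to escape (minimize it).
    ensemble = [es_step(th_a, lambda th: -rollout(theta_p, th))
                for th_a in ensemble]

final = np.mean([rollout(theta_p, th_a) for th_a in ensemble])
```

The alternating updates mirror the paper's adversarial scheme: neither side faces a fixed environment, so the protagonist must keep up with an evolving set of escape behaviors rather than overfitting to a single hand-designed opponent.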
Related papers
- SoloParkour: Constrained Reinforcement Learning for Visual Locomotion from Privileged Experience [19.817578964184147]
Parkour poses a significant challenge for legged robots, requiring navigation through complex environments with agility and precision based on limited sensory inputs.
We introduce a novel method for training end-to-end visual policies, from depth pixels to robot control commands, to achieve agile and safe quadruped locomotion.
We demonstrate the effectiveness of our method on a real Solo-12 robot, showcasing its capability to perform a variety of parkour skills such as walking, climbing, leaping, and crawling.
arXiv Detail & Related papers (2024-09-20T17:39:20Z)
- Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z)
- Barkour: Benchmarking Animal-level Agility with Quadruped Robots [70.97471756305463]
We introduce the Barkour benchmark, an obstacle course to quantify agility for legged robots.
Inspired by dog agility competitions, it consists of diverse obstacles and a time-based scoring mechanism.
We present two methods for tackling the benchmark.
arXiv Detail & Related papers (2023-05-24T02:49:43Z)
- Learning Agile Soccer Skills for a Bipedal Robot with Deep Reinforcement Learning [26.13655448415553]
Deep Reinforcement Learning (Deep RL) is able to synthesize sophisticated and safe movement skills for a low-cost, miniature humanoid robot.
We used Deep RL to train a humanoid robot with 20 actuated joints to play a simplified one-versus-one (1v1) soccer game.
The resulting agent exhibits robust and dynamic movement skills such as rapid fall recovery, walking, turning, kicking and more.
arXiv Detail & Related papers (2023-04-26T16:25:54Z)
- Learning and Adapting Agile Locomotion Skills by Transferring Experience [71.8926510772552]
We propose a framework for training complex robotic skills by transferring experience from existing controllers to jumpstart learning new tasks.
We show that our method enables learning complex agile jumping behaviors, navigating to goal locations while walking on hind legs, and adapting to new environments.
arXiv Detail & Related papers (2023-04-19T17:37:54Z)
- Adaptation of Quadruped Robot Locomotion with Meta-Learning [64.71260357476602]
We demonstrate that meta-reinforcement learning can be used to successfully train a robot capable of solving a wide range of locomotion tasks.
The performance of the meta-trained robot is similar to that of a robot that is trained on a single task.
arXiv Detail & Related papers (2021-07-08T10:37:18Z)
- Learning Agile Robotic Locomotion Skills by Imitating Animals [72.36395376558984]
Reproducing the diverse and agile locomotion skills of animals has been a longstanding challenge in robotics.
We present an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals.
arXiv Detail & Related papers (2020-04-02T02:56:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.