Learning Quadruped Locomotion Policies using Logical Rules
- URL: http://arxiv.org/abs/2107.10969v3
- Date: Thu, 22 Feb 2024 15:36:44 GMT
- Title: Learning Quadruped Locomotion Policies using Logical Rules
- Authors: David DeFazio, Yohei Hayamizu, and Shiqi Zhang
- Abstract summary: We aim to enable easy gait specification and efficient policy learning for quadruped robots.
Our approach, called RM-based Locomotion Learning (RMLL), supports adjusting gait frequency at execution time.
We demonstrate these learned policies with a real quadruped robot.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quadruped animals are capable of exhibiting a diverse range of locomotion
gaits. While progress has been made in demonstrating such gaits on robots,
current methods rely on motion priors, dynamics models, or other forms of
extensive manual effort. People can use natural language to describe dance
moves. Could one use a formal language to specify quadruped gaits? To this end,
we aim to enable easy gait specification and efficient policy learning. Our
approach, RM-based Locomotion Learning (RMLL), leverages Reward Machines (RMs)
for high-level gait specification over foot contacts and supports adjusting
gait frequency at execution time. Gait specification is
enabled through the use of a few logical rules per gait (e.g., alternate
between moving front feet and back feet) and does not require labor-intensive
motion priors. Experimental results in simulation highlight the diversity of
learned gaits (including two novel gaits), their energy consumption and
stability across different terrains, and the superior sample-efficiency when
compared to baselines. We also demonstrate these learned policies with a real
quadruped robot. Video and supplementary materials:
https://sites.google.com/view/rm-locomotion-learning/home
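The abstract's gait specification idea (a Reward Machine whose transitions are triggered by foot-contact patterns and whose rewards encourage the specified contact sequence) can be illustrated with a short sketch. The Python snippet below is a minimal, assumed encoding of the example rule "alternate between moving front feet and back feet" as a two-state Reward Machine; the state names, contact sets, and reward values are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a Reward Machine (RM) over foot contacts, in the spirit of
# RMLL's logical gait rules. All names, contact sets, and reward values below
# are illustrative assumptions, not the paper's exact construction.

FL, FR, RL, RR = "FL", "FR", "RL", "RR"  # front-left/right and rear-left/right feet

# Each RM state maps an observed foot-contact set to (next RM state, reward).
# Two states encode "alternate between moving front feet and back feet":
# reward is given each time the expected support pair is observed.
BOUND_RM = {
    "front_in_contact": {frozenset({FL, FR}): ("rear_in_contact", 1.0)},
    "rear_in_contact": {frozenset({RL, RR}): ("front_in_contact", 1.0)},
}


def rm_step(rm, rm_state, contacts):
    """Advance the RM on one control step's foot-contact observation.

    Unmatched contact patterns leave the RM state unchanged and yield zero
    reward (a simplifying assumption); the learner is rewarded only when the
    specified contact alternation is followed.
    """
    return rm.get(rm_state, {}).get(frozenset(contacts), (rm_state, 0.0))


# Example: two bounding cycles' worth of contact observations.
state = "front_in_contact"
for contacts in [{FL, FR}, {RL, RR}, {FL, FR}, {RL, RR}]:
    state, reward = rm_step(BOUND_RM, state, contacts)
    print(state, reward)
```

In a full training setup, such an RM reward would be combined with the usual locomotion terms (velocity tracking, energy, stability), and stepping through the RM faster or slower is one plausible way to realize the adjustable gait frequency mentioned above.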
Related papers
- Gaitor: Learning a Unified Representation Across Gaits for Real-World Quadruped Locomotion (arXiv, 2024-05-29)
  We present Gaitor, which learns a disentangled, 2D representation across locomotion gaits.
  Gaitor's latent space is readily interpretable, and we discover that novel, unseen gaits emerge during gait transitions.
  We evaluate Gaitor in both simulation and the real world on the ANYmal C platform.
- Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control (arXiv, 2024-01-30)
  This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
  We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
  This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
- Generalized Animal Imitator: Agile Locomotion with Versatile Motion Prior (arXiv, 2023-10-02)
  This paper introduces the Versatile Motion prior (VIM), a reinforcement learning framework designed to incorporate a range of agile locomotion tasks.
  Our framework enables legged robots to learn diverse agile low-level skills by imitating animal motions and manually designed motions.
  Our evaluations of the VIM framework span both simulation environments and real-world deployment.
- Barkour: Benchmarking Animal-level Agility with Quadruped Robots (arXiv, 2023-05-24)
  We introduce the Barkour benchmark, an obstacle course to quantify agility for legged robots.
  Inspired by dog agility competitions, it consists of diverse obstacles and a time-based scoring mechanism.
  We present two methods for tackling the benchmark.
- Legs as Manipulator: Pushing Quadrupedal Agility Beyond Locomotion (arXiv, 2023-03-20)
  We train quadruped robots to use their front legs to climb walls, press buttons, and perform object interaction in the real world.
  These skills are trained in simulation using a curriculum and transferred to the real world using our proposed sim2real variant.
  We evaluate our method in both simulation and the real world, showing successful execution of both short- and long-range tasks.
- VIMA: General Robot Manipulation with Multimodal Prompts (arXiv, 2022-10-06)
  We show that a wide spectrum of robot manipulation tasks can be expressed with multimodal prompts.
  We develop a new simulation benchmark that consists of thousands of procedurally generated tabletop tasks.
  We design a transformer-based robot agent, VIMA, that processes these prompts and outputs motor actions autoregressively.
- Learning Free Gait Transition for Quadruped Robots via Phase-Guided Controller (arXiv, 2022-01-01)
  We present a novel framework for training a simple control policy for a quadruped robot to locomote in various gaits.
  The Black Panther robot, a medium-dog-sized quadruped, can perform all learned motor skills while following velocity commands smoothly and robustly in natural environments.
- Minimizing Energy Consumption Leads to the Emergence of Gaits in Legged Robots (arXiv, 2021-10-25)
  We show that learning to minimize energy consumption plays a key role in the emergence of natural locomotion gaits at different speeds in real quadruped robots.
  The emergent gaits are structured on ideal terrain and look similar to those of horses and sheep.
  The same approach leads to unstructured gaits on rough terrain, which is consistent with findings in animal motor control.
- Fast and Efficient Locomotion via Learned Gait Transitions (arXiv, 2021-04-09)
  We focus on the problem of developing efficient controllers for quadrupedal robots.
  We devise a hierarchical learning framework in which distinctive locomotion gaits and natural gait transitions emerge automatically.
  We show that the learned hierarchical controller consumes much less energy across a wide range of locomotion speeds than baseline controllers.
- Robust High-speed Running for Quadruped Robots via Deep Reinforcement Learning (arXiv, 2021-03-11)
  In this paper, we explore learning foot positions in Cartesian space for the task of running as fast as possible subject to environmental disturbances.
  Compared with other action spaces, we observe less need for reward shaping, much improved sample efficiency, and the emergence of natural gaits such as galloping and bounding.
  Policies can be learned in only a few million time steps, even for challenging tasks of running over rough terrain with loads of over 100% of the nominal quadruped mass.
- Learning Agile Robotic Locomotion Skills by Imitating Animals (arXiv, 2020-04-02)
  Reproducing the diverse and agile locomotion skills of animals has been a longstanding challenge in robotics.
  We present an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals.