An Adaptable Approach to Learn Realistic Legged Locomotion without
Examples
- URL: http://arxiv.org/abs/2110.14998v1
- Date: Thu, 28 Oct 2021 10:14:47 GMT
- Title: An Adaptable Approach to Learn Realistic Legged Locomotion without
Examples
- Authors: Daniel Felipe Ordoñez Apraez, Antonio Agudo, Francesc Moreno-Noguer
and Mario Martin
- Abstract summary: This work proposes a generic approach for ensuring realism in locomotion by guiding the learning process with the spring-loaded inverted pendulum model as a reference.
We present experimental results showing that even in a model-free setup, the learned policies can generate realistic and energy-efficient locomotion gaits for a bipedal and a quadrupedal robot.
- Score: 38.81854337592694
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning controllers that reproduce legged locomotion in nature has been a
long-standing goal in robotics and computer graphics. While yielding promising
results, recent approaches are not yet flexible enough to be applicable to
legged systems of different morphologies. This is partly because they often
rely on precise motion capture references or elaborate learning environments
that ensure the naturalness of the emergent locomotion gaits but prevent
generalization. This work proposes a generic approach for ensuring realism in
locomotion by guiding the learning process with the spring-loaded inverted
pendulum model as a reference. Leveraging the exploration capabilities of
Reinforcement Learning (RL), we learn a control policy that fills in the
information gap between the template model and full-body dynamics required to
maintain stable and periodic locomotion. The proposed approach can be applied
to robots of different sizes and morphologies and adapted to any RL technique
and control architecture. We present experimental results showing that even in
a model-free setup and with a simple reactive control architecture, the learned
policies can generate realistic and energy-efficient locomotion gaits for a
bipedal and a quadrupedal robot. Most importantly, this is achieved without
motion capture data, without strong constraints on the robot's dynamics or
kinematics, and without prescribing limb coordination. We provide supplemental
videos for qualitative analysis of the naturalness of the learned gaits.
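The paper distributes no code, but the mechanism the abstract describes (rewarding a model-free policy for tracking a spring-loaded inverted pendulum, or SLIP, reference while keeping actuation effort low) can be sketched minimally. Everything below is an assumption made for illustration: the SLIP integrator, the reward shape, and all parameter values are placeholders rather than the authors' implementation.

```python
# Minimal, hypothetical sketch: a planar SLIP reference trajectory and a
# reward that combines CoM tracking with an effort penalty. All names,
# gains, and values are illustrative assumptions.
import numpy as np

def slip_stance_step(pos, vel, foot, k, m, l0, g=9.81, dt=1e-3):
    """One explicit-Euler step of planar SLIP stance dynamics.
    pos, vel: 2D CoM position/velocity; foot: 2D contact point;
    k: leg stiffness; m: body mass; l0: leg rest length."""
    leg = pos - foot
    l = np.linalg.norm(leg)
    # Spring force along the leg axis plus gravity on the point mass.
    acc = (k * (l0 - l) / m) * (leg / l) + np.array([0.0, -g])
    vel = vel + acc * dt
    pos = pos + vel * dt
    return pos, vel

def slip_reference(pos0, vel0, foot, k, m, l0, steps=300, dt=1e-3):
    """Roll out one stance phase to obtain a reference CoM trajectory."""
    pos = np.array(pos0, dtype=float)
    vel = np.array(vel0, dtype=float)
    foot = np.asarray(foot, dtype=float)
    traj = [pos.copy()]
    for _ in range(steps):
        pos, vel = slip_stance_step(pos, vel, foot, k, m, l0, dt=dt)
        traj.append(pos.copy())
    return np.stack(traj)

def slip_guided_reward(com, com_ref, torques, w_track=10.0, w_energy=1e-3):
    """Reward staying close to the SLIP CoM reference while penalising
    actuation effort as a stand-in for energy efficiency."""
    tracking = np.exp(-w_track * np.sum((com - com_ref) ** 2))
    effort = w_energy * np.sum(np.square(torques))
    return tracking - effort

# Illustrative values for a roughly 30 kg quadruped-like point mass.
ref = slip_reference(pos0=[0.0, 0.55], vel0=[1.0, 0.0], foot=[0.0, 0.0],
                     k=4000.0, m=30.0, l0=0.6)
print(slip_guided_reward(com=ref[10] + 0.02, com_ref=ref[10],
                         torques=np.zeros(12)))
```

In the paper's setting, the policy additionally has to bridge the gap between this template model and the full-body state (flight phases, foot placement, joint-level control); the sketch captures only the tracking-plus-effort structure of such a reward.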
Related papers
- One Policy to Run Them All: an End-to-end Learning Approach to Multi-Embodiment Locomotion [18.556470359899855]
We introduce URMA, the Unified Robot Morphology Architecture.
Our framework brings the end-to-end Multi-Task Reinforcement Learning approach to the realm of legged robots.
We show that URMA can learn a locomotion policy on multiple embodiments that can be easily transferred to unseen robot platforms.
arXiv Detail & Related papers (2024-09-10T09:44:15Z)
- Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z)
- Learning Robust, Agile, Natural Legged Locomotion Skills in the Wild [17.336553501547282]
We propose a new framework for learning robust, agile and natural legged locomotion skills over challenging terrain.
Empirical results in both simulation and the real world on a quadruped robot demonstrate that the proposed algorithm enables robust traversal of challenging terrains.
arXiv Detail & Related papers (2023-04-21T11:09:23Z)
- GLiDE: Generalizable Quadrupedal Locomotion in Diverse Environments with a Centroidal Model [18.66472547798549]
We show how model-free reinforcement learning can be effectively used with a centroidal model to generate robust control policies for quadrupedal locomotion.
We show the potential of the method by demonstrating stepping-stone locomotion, two-legged in-place balance, balance beam locomotion, and sim-to-real transfer without further adaptations.
arXiv Detail & Related papers (2021-04-20T05:55:13Z)
- Reinforcement Learning for Robust Parameterized Locomotion Control of Bipedal Robots [121.42930679076574]
We present a model-free reinforcement learning framework for training robust locomotion policies in simulation.
Domain randomization is used to encourage the policies to learn behaviors that are robust across variations in system dynamics (a generic sketch of this idea appears after this list).
We demonstrate this on versatile walking behaviors such as tracking a target walking velocity, walking height, and turning yaw.
arXiv Detail & Related papers (2021-03-26T07:14:01Z)
- Meta-Reinforcement Learning for Adaptive Motor Control in Changing Robot Dynamics and Environments [3.5309638744466167]
This work developed a meta-learning approach that adapts the control policy on the fly to different changing conditions for robust locomotion.
The proposed method continually updates the interaction model, samples feasible sequences of actions to estimate the resulting state-action trajectories, and then applies the optimal actions to maximize the reward.
arXiv Detail & Related papers (2021-01-19T12:57:12Z)
- Neural Dynamic Policies for End-to-End Sensorimotor Learning [51.24542903398335]
The current dominant paradigm in sensorimotor control, whether imitation or reinforcement learning, is to train policies directly in raw action spaces.
We propose Neural Dynamic Policies (NDPs) that make predictions in trajectory distribution space.
NDPs outperform the prior state-of-the-art in terms of either efficiency or performance across several robotic control tasks.
arXiv Detail & Related papers (2020-12-04T18:59:32Z)
- Learning Quadrupedal Locomotion over Challenging Terrain [68.51539602703662]
Legged locomotion can dramatically expand the operational domains of robotics.
Conventional controllers for legged locomotion are based on elaborate state machines that explicitly trigger the execution of motion primitives and reflexes.
Here we present a radically robust controller for legged locomotion in challenging natural environments.
arXiv Detail & Related papers (2020-10-21T19:11:20Z)
- Learning Agile Robotic Locomotion Skills by Imitating Animals [72.36395376558984]
Reproducing the diverse and agile locomotion skills of animals has been a longstanding challenge in robotics.
We present an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals.
arXiv Detail & Related papers (2020-04-02T02:56:16Z)
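Several of the entries above mention domain randomization over system dynamics as the source of robustness; the bipedal locomotion entry refers to the generic sketch below. Parameter names and ranges here are invented for illustration and are not taken from any of the cited papers.

```python
# Hypothetical sketch of domain randomization over simulated dynamics:
# every training episode uses freshly sampled physical parameters, so the
# policy cannot overfit to one parameter setting. Names and ranges are
# illustrative assumptions.
import random

def sample_dynamics():
    """Draw one set of randomized physical parameters for an episode."""
    return {
        "ground_friction": random.uniform(0.4, 1.2),    # contact friction coefficient
        "link_mass_scale": random.uniform(0.8, 1.2),    # multiplier on nominal link masses
        "motor_strength_scale": random.uniform(0.9, 1.1),
        "sensor_latency_s": random.uniform(0.0, 0.02),  # observation delay in seconds
    }

def train(policy_update, make_env, episodes=1000):
    """Run RL updates, re-instantiating the simulator with new dynamics each episode."""
    for _ in range(episodes):
        env = make_env(**sample_dynamics())  # hypothetical simulator factory
        policy_update(env)                   # one RL update on this environment

if __name__ == "__main__":
    # Stand-ins so the sketch runs; a real setup would pass a simulator
    # factory and an RL update step here.
    train(policy_update=lambda env: None,
          make_env=lambda **dyn: dyn,
          episodes=3)
```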