Learning to Exploit Elastic Actuators for Quadruped Locomotion
- URL: http://arxiv.org/abs/2209.07171v3
- Date: Sun, 20 Aug 2023 14:46:59 GMT
- Title: Learning to Exploit Elastic Actuators for Quadruped Locomotion
- Authors: Antonin Raffin, Daniel Seidel, Jens Kober, Alin Albu-Schäffer, João Silvério, Freek Stulp
- Abstract summary: Spring-based actuators in legged locomotion provide energy-efficiency and improved performance, but increase the difficulty of controller design.
We propose to learn model-free controllers directly on the real robot.
We evaluate the proposed approach on the DLR elastic quadruped bert.
- Score: 7.9585932082270014
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spring-based actuators in legged locomotion provide energy-efficiency and
improved performance, but increase the difficulty of controller design. While
previous work has focused on extensive modeling and simulation to find optimal
controllers for such systems, we propose to learn model-free controllers
directly on the real robot. In our approach, gaits are first synthesized by
central pattern generators (CPGs), whose parameters are optimized to quickly
obtain an open-loop controller that achieves efficient locomotion. Then, to
make this controller more robust and further improve the performance, we use
reinforcement learning to close the loop, to learn corrective actions on top of
the CPGs. We evaluate the proposed approach on the DLR elastic quadruped bert.
Our results in learning trotting and pronking gaits show that exploitation of
the spring actuator dynamics emerges naturally from optimizing for dynamic
motions, yielding high-performing locomotion, particularly the fastest walking
gait recorded on bert, despite being model-free. The whole process takes no
more than 1.5 hours on the real robot and results in natural-looking gaits.
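To make the two-stage structure concrete, the sketch below pairs an open-loop CPG (whose frequency, amplitudes, and phase offsets are the kind of parameters that would be optimized first) with a policy that adds small corrective actions on top. It is an illustrative Python sketch under assumed joint layouts and scales, not the authors' implementation.

```python
import numpy as np


class CPG:
    """Minimal central pattern generator: a shared phase with per-leg offsets.

    The frequency, amplitudes, and phase offsets are the open-loop parameters
    that would be optimized first (e.g. with a black-box optimizer).
    """

    def __init__(self, frequency, amplitudes, phase_offsets):
        self.frequency = frequency                       # Hz
        self.amplitudes = np.asarray(amplitudes)         # per-joint amplitude [rad]
        self.phase_offsets = np.asarray(phase_offsets)   # per-leg phase [rad]
        self.phase = 0.0

    def step(self, dt):
        # Advance the shared phase and emit per-joint position targets.
        self.phase = (self.phase + 2.0 * np.pi * self.frequency * dt) % (2.0 * np.pi)
        leg_phases = self.phase + self.phase_offsets
        # Example pattern: hip and knee of each leg driven 90 degrees apart.
        hips = self.amplitudes[0] * np.sin(leg_phases)
        knees = self.amplitudes[1] * np.sin(leg_phases + np.pi / 2.0)
        return np.concatenate([hips, knees])


def closed_loop_action(cpg, policy, observation, dt, correction_scale=0.1):
    """Open-loop CPG targets plus small learned corrections (residual RL)."""
    open_loop = cpg.step(dt)
    correction = correction_scale * policy(observation)  # policy output in [-1, 1]
    return open_loop + correction


if __name__ == "__main__":
    # Trot-like phase offsets: diagonal legs in phase, the two pairs half a cycle apart.
    cpg = CPG(frequency=2.0, amplitudes=[0.3, 0.4],
              phase_offsets=[0.0, np.pi, np.pi, 0.0])
    random_policy = lambda obs: np.random.uniform(-1.0, 1.0, size=8)  # stand-in policy
    action = closed_loop_action(cpg, random_policy, observation=None, dt=0.02)
    print(action.shape)  # (8,) joint position targets
```

Keeping the corrections small relative to the CPG output preserves the optimized open-loop gait while letting the closed-loop policy react to state feedback.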
Related papers
- Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z)
- TLControl: Trajectory and Language Control for Human Motion Synthesis [68.09806223962323]
We present TLControl, a novel method for realistic human motion synthesis.
It incorporates both low-level Trajectory and high-level Language semantics controls.
It is practical for interactive and high-quality animation generation.
arXiv Detail & Related papers (2023-11-28T18:54:16Z)
- Reaching the Limit in Autonomous Racing: Optimal Control versus Reinforcement Learning [66.10854214036605]
A central question in robotics is how to design a control system for an agile mobile robot.
We show that a neural network controller trained with reinforcement learning (RL) outperformed optimal control (OC) methods in this setting.
Our findings allowed us to push an agile drone to its maximum performance, achieving a peak acceleration greater than 12 times the gravitational acceleration and a peak velocity of 108 kilometers per hour.
arXiv Detail & Related papers (2023-10-17T02:40:27Z)
- Learning Low-Frequency Motion Control for Robust and Dynamic Robot Locomotion [10.838285018473725]
We demonstrate robust and dynamic locomotion with a learned motion controller executing at as low as 8 Hz on a real ANYmal C quadruped.
The robot is able to robustly and repeatably achieve a high heading velocity of 1.5 m/s, traverse uneven terrain, and resist unexpected external perturbations.
arXiv Detail & Related papers (2022-09-29T15:55:33Z)
- VAE-Loco: Versatile Quadruped Locomotion by Learning a Disentangled Gait Representation [78.92147339883137]
We show that learning a latent space capturing the key stance phases constituting a particular gait is pivotal in increasing controller robustness.
We demonstrate that specific properties of the drive signal map directly to gait parameters such as cadence, footstep height and full stance duration.
The use of a generative model facilitates the detection and mitigation of disturbances to provide a versatile and robust planning framework.
arXiv Detail & Related papers (2022-05-02T19:49:53Z)
- Bayesian Optimization Meets Hybrid Zero Dynamics: Safe Parameter Learning for Bipedal Locomotion Control [17.37169551675587]
We propose a multi-domain control parameter learning framework for locomotion control of bipedal robots.
We leverage BO to learn the control parameters used in the HZD-based controller.
Next, the learning process is applied on the physical robot to learn corrections to the control parameters obtained in simulation (a minimal sketch of this two-stage loop follows).
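The sketch below illustrates such a sim-then-hardware parameter-tuning loop, using scikit-optimize's gp_minimize as a stand-in Bayesian optimizer and a placeholder cost in place of HZD-based controller rollouts; all parameter names and bounds are illustrative assumptions, not the paper's setup.

```python
from skopt import gp_minimize  # GP-based Bayesian optimization


def rollout_cost(params, on_hardware=False):
    """Placeholder: run the parameterized controller and return a scalar cost.

    In the paper's setting this would execute a walking controller (in
    simulation or on the physical biped) and score e.g. tracking error,
    energy use, or failure.
    """
    step_length_gain, lateral_gain, torso_gain = params
    simulated = ((step_length_gain - 0.8) ** 2 + (lateral_gain - 0.3) ** 2
                 + (torso_gain - 1.2) ** 2)
    # Hardware rollouts differ from simulation (model mismatch), mimicked here by an offset.
    return simulated + (0.05 if on_hardware else 0.0)


search_space = [(0.0, 2.0), (0.0, 1.0), (0.0, 2.0)]  # (low, high) bounds per parameter

# Stage 1: Bayesian optimization in simulation.
sim_result = gp_minimize(lambda p: rollout_cost(p, on_hardware=False),
                         search_space, n_calls=30, random_state=0)

# Stage 2: refine around the simulation optimum with fewer hardware rollouts.
refined_space = [(max(lo, x - 0.2), min(hi, x + 0.2))
                 for (lo, hi), x in zip(search_space, sim_result.x)]
real_result = gp_minimize(lambda p: rollout_cost(p, on_hardware=True),
                          refined_space, n_calls=15, random_state=0)

print("sim params:", sim_result.x, "refined params:", real_result.x)
```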
arXiv Detail & Related papers (2022-03-04T20:48:17Z)
- OSCAR: Data-Driven Operational Space Control for Adaptive and Robust Robot Manipulation [50.59541802645156]
Operational Space Control (OSC) has been used as an effective task-space controller for manipulation.
We propose OSC for Adaptation and Robustness (OSCAR), a data-driven variant of OSC that compensates for modeling errors.
We evaluate our method on a variety of simulated manipulation problems, and find substantial improvements over an array of controller baselines.
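For context, the sketch below computes torques with a textbook operational space control law; OSCAR's contribution is to replace the analytic dynamics quantities used here with data-driven estimates that compensate for modeling errors, which this illustrative sketch does not attempt to reproduce. Dimensions and values are toy assumptions.

```python
import numpy as np


def osc_torques(M, J, q_dot, x_err, x_dot, coriolis, gravity, kp=100.0, kd=20.0):
    """Classic OSC: map a task-space PD law into joint torques.

    M      -- joint-space inertia matrix (n x n)
    J      -- task Jacobian (m x n)
    x_err  -- task-space position error (m,)
    x_dot  -- task-space velocity (m,)
    """
    M_inv = np.linalg.inv(M)
    # Task-space inertia (Lambda); pseudo-inverse guards against singularities.
    lam = np.linalg.pinv(J @ M_inv @ J.T)
    # Desired task acceleration from a PD law on the task error.
    x_ddot_des = kp * x_err - kd * x_dot
    # Task force mapped through J^T, plus joint-space bias compensation.
    return J.T @ (lam @ x_ddot_des) + coriolis @ q_dot + gravity


if __name__ == "__main__":
    n, m = 7, 3  # 7-DoF arm, 3-D end-effector position task
    rng = np.random.default_rng(0)
    A = rng.normal(size=(n, n))
    M = A @ A.T + n * np.eye(n)   # symmetric positive definite inertia
    J = rng.normal(size=(m, n))
    tau = osc_torques(M, J, q_dot=np.zeros(n),
                      x_err=np.array([0.05, 0.0, -0.02]), x_dot=np.zeros(m),
                      coriolis=np.zeros((n, n)), gravity=np.zeros(n))
    print(tau.shape)  # (7,)
```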
arXiv Detail & Related papers (2021-10-02T01:21:38Z)
- Reinforcement Learning with Evolutionary Trajectory Generator: A General Approach for Quadrupedal Locomotion [29.853927354893656]
We propose a novel RL-based approach that contains an evolutionary foot trajectory generator.
The generator continually optimizes the shape of the output trajectory for the given task, providing diversified motion priors to guide policy learning.
We deploy the controller learned in the simulation on a 12-DoF quadrupedal robot, and it can successfully traverse challenging scenarios with efficient gaits.
arXiv Detail & Related papers (2021-09-14T02:51:50Z)
- Fast and Efficient Locomotion via Learned Gait Transitions [35.86279693549959]
We focus on the problem of developing efficient controllers for quadrupedal robots.
We devise a hierarchical learning framework, in which distinctive locomotion gaits and natural gait transitions emerge automatically.
We show that the learned hierarchical controller consumes much less energy across a wide range of locomotion speed than baseline controllers.
arXiv Detail & Related papers (2021-04-09T23:53:28Z)
- Reinforcement Learning for Robust Parameterized Locomotion Control of Bipedal Robots [121.42930679076574]
We present a model-free reinforcement learning framework for training robust locomotion policies in simulation.
Domain randomization is used to encourage the policies to learn behaviors that are robust across variations in system dynamics.
We demonstrate this on versatile walking behaviors such as tracking a target walking velocity, walking height, and turning yaw.
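A minimal sketch of domain randomization as summarized above; the randomized quantities, ranges, and training-loop hooks are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np


def sample_dynamics(rng):
    """Sample new physical parameters at every episode reset so the policy
    must be robust across variations in system dynamics."""
    return {
        "ground_friction": rng.uniform(0.4, 1.25),
        "link_mass_scale": rng.uniform(0.8, 1.2),       # multiply nominal link masses
        "motor_strength_scale": rng.uniform(0.9, 1.1),
        "joint_damping": rng.uniform(0.02, 0.1),
        "control_latency_s": rng.uniform(0.0, 0.02),
    }


def train(num_episodes, reset_env, run_episode, update_policy):
    """Generic training loop: randomize dynamics, roll out, update the policy."""
    rng = np.random.default_rng(0)
    for _ in range(num_episodes):
        dynamics = sample_dynamics(rng)
        env = reset_env(dynamics)        # apply the sampled parameters to the simulator
        trajectory = run_episode(env)    # collect (observation, action, reward) tuples
        update_policy(trajectory)        # any model-free RL update (e.g. PPO)
```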
arXiv Detail & Related papers (2021-03-26T07:14:01Z)
- Efficient Learning of Control Policies for Robust Quadruped Bounding using Pretrained Neural Networks [15.09037992110481]
Bounding is one of the important gaits in quadrupedal locomotion for negotiating obstacles.
The authors propose an effective approach that learns robust bounding gaits more efficiently.
Their approach shows efficient computation and good locomotion results with the Jueying Mini quadrupedal robot bounding over uneven terrain.
arXiv Detail & Related papers (2020-11-01T08:06:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.