Multi-expert learning of adaptive legged locomotion
- URL: http://arxiv.org/abs/2012.05810v1
- Date: Thu, 10 Dec 2020 16:40:44 GMT
- Title: Multi-expert learning of adaptive legged locomotion
- Authors: Chuanyu Yang, Kai Yuan, Qiuguo Zhu, Wanming Yu, Zhibin Li
- Abstract summary: Multi-Expert Learning Architecture (MELA) learns to generate adaptive skills from a group of representative expert skills.
Using a unified MELA framework, we demonstrated successful multi-skill locomotion on a real quadruped robot.
- Score: 7.418225289645394
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Achieving versatile robot locomotion requires motor skills which can adapt to
previously unseen situations. We propose a Multi-Expert Learning Architecture
(MELA) that learns to generate adaptive skills from a group of representative
expert skills. During training, MELA is first initialised by a distinct set of
pre-trained experts, each in a separate deep neural network (DNN). Then by
learning the combination of these DNNs using a Gating Neural Network (GNN),
MELA can acquire more specialised experts and transitional skills across
various locomotion modes. During runtime, MELA constantly blends multiple DNNs
and dynamically synthesises a new DNN to produce adaptive behaviours in
response to changing situations. This approach leverages the advantages of
trained expert skills and the fast online synthesis of adaptive policies to
generate responsive motor skills during the changing tasks. Using a unified
MELA framework, we demonstrated successful multi-skill locomotion on a real
quadruped robot that performed coherent trotting, steering, and fall recovery
autonomously, and showed the merit of multi-expert learning generating
behaviours which can adapt to unseen scenarios.
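The abstract outlines the core mechanism: a Gating Neural Network produces blending weights over a set of pre-trained expert DNNs, and the weighted combination is used to synthesise a single policy network at each control step. The sketch below illustrates one way such runtime blending could look; the layer sizes, the softmax gating output, and the parameter-space convex combination are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of MELA-style runtime expert blending (illustrative only).
# Assumptions: all experts share one MLP architecture, the gating network
# outputs a softmax over experts, and a new policy DNN is synthesised each
# step as a convex combination of the experts' parameters.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, NUM_EXPERTS = 48, 12, 8  # hypothetical sizes

def make_policy() -> nn.Sequential:
    """One policy network; every expert uses this same architecture."""
    return nn.Sequential(
        nn.Linear(STATE_DIM, 256), nn.Tanh(),
        nn.Linear(256, 256), nn.Tanh(),
        nn.Linear(256, ACTION_DIM),
    )

experts = [make_policy() for _ in range(NUM_EXPERTS)]   # pre-trained expert DNNs
gating = nn.Sequential(                                  # Gating Neural Network (GNN)
    nn.Linear(STATE_DIM, 128), nn.Tanh(),
    nn.Linear(128, NUM_EXPERTS), nn.Softmax(dim=-1),
)

def act(state: torch.Tensor) -> torch.Tensor:
    """Blend expert parameters with the gating weights, then run the
    synthesised policy on the current state."""
    weights = gating(state)                              # (NUM_EXPERTS,)
    blended = make_policy()                              # container for the synthesised DNN
    with torch.no_grad():
        for name, param in blended.named_parameters():
            param.copy_(sum(w * dict(e.named_parameters())[name]
                            for w, e in zip(weights, experts)))
    return blended(state)                                # adaptive action for this step

action = act(torch.randn(STATE_DIM))
```

Because the gating weights are recomputed from the state at every step, the synthesised policy can shift continuously between locomotion modes (e.g. trotting, steering, fall recovery) as the situation changes.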
Related papers
- MoE-Loco: Mixture of Experts for Multitask Locomotion [52.04025933292957]
We present MoE-Loco, a framework for multitask locomotion for legged robots.
Our method enables a single policy to handle diverse terrains, while supporting quadrupedal and bipedal gaits.
arXiv Detail & Related papers (2025-03-11T15:53:54Z) - Learning to Model Diverse Driving Behaviors in Highly Interactive
Autonomous Driving Scenarios with Multi-Agent Reinforcement Learning [0.751422531359304]
Multi-Agent Reinforcement Learning (MARL) has shown impressive results in many driving scenarios.
However, the performance of these trained policies can be impacted when faced with diverse driving styles and personalities.
We introduce the Personality Modeling Network (PeMN), which includes a cooperation value function and personality parameters.
arXiv Detail & Related papers (2024-02-21T02:44:33Z) - Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z) - A Central Motor System Inspired Pre-training Reinforcement Learning for Robotic Control [7.227887302864789]
We propose CMS-PRL, a pre-training reinforcement learning method inspired by the Central Motor System.
First, we introduce a fusion reward mechanism that combines the basic motor reward with mutual information reward.
Second, we design a skill encoding method inspired by the motor program of the basal ganglia, providing rich and continuous skill instructions.
Third, we propose a skill activity function to regulate motor skill activity, enabling the generation of skills with different activity levels.
arXiv Detail & Related papers (2023-11-14T00:49:12Z) - Complex Locomotion Skill Learning via Differentiable Physics [30.868690308658174]
Differentiable physics enables efficient gradient-based optimization of neural network (NN) controllers.
We present a practical learning framework that outputs unified NN controllers capable of tasks with significantly improved complexity and diversity.
arXiv Detail & Related papers (2022-06-06T04:01:12Z) - ASE: Large-Scale Reusable Adversarial Skill Embeddings for Physically
Simulated Characters [123.88692739360457]
General-purpose motor skills enable humans to perform complex tasks.
These skills also provide powerful priors for guiding their behaviors when learning new tasks.
We present a framework for learning versatile and reusable skill embeddings for physically simulated characters.
arXiv Detail & Related papers (2022-05-04T06:13:28Z) - Generative Adversarial Imitation Learning for End-to-End Autonomous
Driving on Urban Environments [0.8122270502556374]
Generative Adversarial Imitation Learning (GAIL) can train policies without explicitly requiring to define a reward function.
We show that both trained models are capable of imitating the expert trajectory from start to end once training is complete.
arXiv Detail & Related papers (2021-10-16T15:04:13Z) - MT-Opt: Continuous Multi-Task Robotic Reinforcement Learning at Scale [103.7609761511652]
We show how a large-scale collective robotic learning system can acquire a repertoire of behaviors simultaneously.
New tasks can be continuously instantiated from previously learned tasks.
We train and evaluate our system on a set of 12 real-world tasks with data collected from 7 robots.
arXiv Detail & Related papers (2021-04-16T16:38:02Z) - SMARTS: Scalable Multi-Agent Reinforcement Learning Training School for
Autonomous Driving [96.50297622371457]
Multi-agent interaction is a fundamental aspect of autonomous driving in the real world.
Despite more than a decade of research and development, the problem of how to interact with diverse road users in diverse scenarios remains largely unsolved.
We develop a dedicated simulation platform called SMARTS that generates diverse and competent driving interactions.
arXiv Detail & Related papers (2020-10-19T18:26:10Z) - Learning Agile Robotic Locomotion Skills by Imitating Animals [72.36395376558984]
Reproducing the diverse and agile locomotion skills of animals has been a longstanding challenge in robotics.
We present an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals.
arXiv Detail & Related papers (2020-04-02T02:56:16Z) - The Microsoft Toolkit of Multi-Task Deep Neural Networks for Natural
Language Understanding [97.85957811603251]
We present MT-DNN, an open-source natural language understanding (NLU) toolkit that makes it easy for researchers and developers to train customized deep learning models.
Built upon PyTorch and Transformers, MT-DNN is designed to facilitate rapid customization for a broad spectrum of NLU tasks.
A unique feature of MT-DNN is its built-in support for robust and transferable learning using the adversarial multi-task learning paradigm.
arXiv Detail & Related papers (2020-02-19T03:05:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.