MoE-Loco: Mixture of Experts for Multitask Locomotion
- URL: http://arxiv.org/abs/2503.08564v1
- Date: Tue, 11 Mar 2025 15:53:54 GMT
- Title: MoE-Loco: Mixture of Experts for Multitask Locomotion
- Authors: Runhan Huang, Shaoting Zhu, Yilun Du, Hang Zhao
- Abstract summary: We present MoE-Loco, a framework for multitask locomotion for legged robots. Our method enables a single policy to handle diverse terrains, while supporting quadrupedal and bipedal gaits.
- Score: 52.04025933292957
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present MoE-Loco, a Mixture of Experts (MoE) framework for multitask locomotion for legged robots. Our method enables a single policy to handle diverse terrains, including bars, pits, stairs, slopes, and baffles, while supporting quadrupedal and bipedal gaits. Using MoE, we mitigate the gradient conflicts that typically arise in multitask reinforcement learning, improving both training efficiency and performance. Our experiments demonstrate that different experts naturally specialize in distinct locomotion behaviors, which can be leveraged for task migration and skill composition. We further validate our approach in both simulation and real-world deployment, showcasing its robustness and adaptability.
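As a rough illustration of the core mechanism, the minimal PyTorch sketch below shows a soft-gated mixture-of-experts policy head: a gating network produces per-expert weights from the observation and blends the outputs of several expert MLPs, so per-task gradients concentrate on the experts the gate favors. The class name, network sizes, and gating scheme are illustrative assumptions, not the authors' released architecture.

```python
# Minimal sketch of a soft-gated Mixture-of-Experts policy head.
# All names and sizes here are illustrative assumptions; the paper's
# actual architecture and gating scheme may differ.
import torch
import torch.nn as nn

class MoEPolicy(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int,
                 num_experts: int = 4, hidden: int = 128):
        super().__init__()
        # Each expert is a small MLP mapping observations to actions.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(obs_dim, hidden), nn.ELU(),
                          nn.Linear(hidden, act_dim))
            for _ in range(num_experts)
        ])
        # The gate outputs per-expert mixing weights for each observation.
        self.gate = nn.Sequential(nn.Linear(obs_dim, num_experts),
                                  nn.Softmax(dim=-1))

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        weights = self.gate(obs)                                      # (B, E)
        actions = torch.stack([e(obs) for e in self.experts], dim=1)  # (B, E, A)
        # Weighted blend of expert outputs; gradients flow mainly into
        # the experts the gate selects for a given task, which is one
        # way conflicting task gradients can be kept apart.
        return (weights.unsqueeze(-1) * actions).sum(dim=1)

policy = MoEPolicy(obs_dim=48, act_dim=12)
print(policy(torch.randn(32, 48)).shape)  # torch.Size([32, 12])
```

In a setup like this, expert specialization can be read off the gate weights per task, which is consistent with the abstract's observation that experts naturally specialize in distinct locomotion behaviors.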
Related papers
- StyleLoco: Generative Adversarial Distillation for Natural Humanoid Robot Locomotion [31.30409161905949]
StyleLoco is a novel framework for learning humanoid locomotion.
It combines the agility of reinforcement learning with the natural fluidity of human-like movements.
We demonstrate that StyleLoco enables humanoid robots to perform diverse locomotion tasks.
arXiv Detail & Related papers (2025-03-19T10:27:44Z)
- Transforming Vision Transformer: Towards Efficient Multi-Task Asynchronous Learning [59.001091197106085]
Multi-Task Learning (MTL) for Vision Transformers aims to enhance model capability by tackling multiple tasks simultaneously.
Most recent works have predominantly focused on designing Mixture-of-Experts (MoE) structures and integrating Low-Rank Adaptation (LoRA) to perform multi-task learning efficiently.
We propose a novel approach dubbed Efficient Multi-Task Learning (EMTAL) that transforms a pre-trained Vision Transformer into an efficient multi-task learner.
arXiv Detail & Related papers (2025-01-12T17:41:23Z)
- Offline Adaptation of Quadruped Locomotion using Diffusion Models [59.882275766745295]
We present a diffusion-based approach to quadrupedal locomotion that simultaneously addresses the limitations of learning and interpolating between multiple skills.
We show that these capabilities are compatible with a multi-skill policy and can be applied with little modification and minimal compute overhead.
We verify the validity of our approach with hardware experiments on the ANYmal quadruped platform.
arXiv Detail & Related papers (2024-11-13T18:12:15Z)
- HYPERmotion: Learning Hybrid Behavior Planning for Autonomous Loco-manipulation [7.01404330241523]
HYPERmotion is a framework that learns, selects and plans behaviors based on tasks in different scenarios.
We combine reinforcement learning with whole-body optimization to generate motion for 38 actuated joints.
Experiments in simulation and real-world show that learned motions can efficiently adapt to new tasks.
arXiv Detail & Related papers (2024-06-20T18:21:24Z)
- Intuition-aware Mixture-of-Rank-1-Experts for Parameter Efficient Finetuning [50.73666458313015]
Large Language Models (LLMs) have demonstrated significant potential in performing multiple tasks in multimedia applications.
MoE has emerged as a promising solution, with its sparse architecture enabling effective task decoupling.
Intuition-MoR1E achieves superior efficiency and a 2.15% overall accuracy improvement across 14 public datasets.
arXiv Detail & Related papers (2024-04-13T12:14:58Z)
- Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z)
- Robust and Versatile Bipedal Jumping Control through Reinforcement Learning [141.56016556936865]
This work aims to push the limits of agility for bipedal robots by enabling a torque-controlled bipedal robot to perform robust and versatile dynamic jumps in the real world.
We present a reinforcement learning framework for training a robot to accomplish a large variety of jumping tasks, such as jumping to different locations and directions.
We develop a new policy structure that encodes the robot's long-term input/output (I/O) history while also providing direct access to a short-term I/O history (a minimal sketch of this dual-history idea appears after this list).
arXiv Detail & Related papers (2023-02-19T01:06:09Z)
- Walk These Ways: Tuning Robot Control for Generalization with Multiplicity of Behavior [12.91132798749]
We learn a single policy that encodes a structured family of locomotion strategies that solve training tasks in different ways.
Different strategies generalize differently and can be chosen in real-time for new tasks or environments, bypassing the need for time-consuming retraining.
We release a fast, robust open-source MoB locomotion controller, Walk These Ways, that can execute diverse gaits with variable footswing, posture, and speed.
arXiv Detail & Related papers (2022-12-06T18:59:34Z)
- Multi-expert learning of adaptive legged locomotion [7.418225289645394]
Multi-Expert Learning Architecture (MELA) learns to generate adaptive skills from a group of representative expert skills.
Using a unified MELA framework, we demonstrated successful multi-skill locomotion on a real quadruped robot.
arXiv Detail & Related papers (2020-12-10T16:40:44Z)
- Learning Agile Robotic Locomotion Skills by Imitating Animals [72.36395376558984]
Reproducing the diverse and agile locomotion skills of animals has been a longstanding challenge in robotics.
We present an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals.
arXiv Detail & Related papers (2020-04-02T02:56:16Z)
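To make one of the ideas above concrete: the bipedal jumping paper describes a policy that encodes a long input/output history while keeping direct access to a short one. The sketch below is a minimal rendering of that dual-history idea, assuming a convolutional encoder, specific history lengths, and layer sizes that are all illustrative rather than the paper's exact design.

```python
# Illustrative dual-history policy: a long I/O history is compressed
# into a latent vector, while the raw short-term history is fed
# directly to the action head. Shapes and modules are assumptions.
import torch
import torch.nn as nn

class DualHistoryPolicy(nn.Module):
    def __init__(self, io_dim: int, act_dim: int,
                 long_len: int = 100, short_len: int = 4,
                 latent: int = 32, hidden: int = 256):
        super().__init__()
        self.short_len = short_len
        # 1D convolutions summarize the long history into a latent code.
        self.long_encoder = nn.Sequential(
            nn.Conv1d(io_dim, 32, kernel_size=6, stride=3), nn.ELU(),
            nn.Conv1d(32, 32, kernel_size=4, stride=2), nn.ELU(),
            nn.Flatten(),
            nn.LazyLinear(latent),
        )
        # The action MLP sees the latent code plus the raw short history.
        self.head = nn.Sequential(
            nn.LazyLinear(hidden), nn.ELU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (B, long_len, io_dim), most recent step last.
        z = self.long_encoder(history.transpose(1, 2))      # (B, latent)
        short = history[:, -self.short_len:, :].flatten(1)  # (B, short_len*io_dim)
        return self.head(torch.cat([z, short], dim=-1))

policy = DualHistoryPolicy(io_dim=42, act_dim=12)
print(policy(torch.randn(8, 100, 42)).shape)  # torch.Size([8, 12])
```

The long encoder gives the policy implicit access to dynamics and contact information over many steps, while the short raw history preserves fast feedback; the lazy linear layers simply avoid hand-computing flattened sizes in this sketch.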