Decentralized Motor Skill Learning for Complex Robotic Systems
- URL: http://arxiv.org/abs/2306.17411v1
- Date: Fri, 30 Jun 2023 05:55:34 GMT
- Title: Decentralized Motor Skill Learning for Complex Robotic Systems
- Authors: Yanjiang Guo, Zheyuan Jiang, Yen-Jen Wang, Jingyue Gao, Jianyu Chen
- Abstract summary: We propose a Decentralized motor skill (DEMOS) learning algorithm to automatically discover motor groups that can be decoupled from each other.
Our method improves the robustness and generalization of the policy without sacrificing performance.
Experiments on quadruped and humanoid robots demonstrate that the learned policy is robust against local motor malfunctions and can be transferred to new tasks.
- Score: 5.669790037378093
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reinforcement learning (RL) has achieved remarkable success in complex
robotic systems (e.g., quadruped locomotion). In previous works, the RL-based
controller was typically implemented as a single neural network with
concatenated observation input. However, the corresponding learned policy is
highly task-specific. Since all motors are controlled in a centralized way,
out-of-distribution local observations can impact global motors through the
single coupled neural network policy. In contrast, animals and humans can
control their limbs separately. Inspired by this biological phenomenon, we
propose a Decentralized motor skill (DEMOS) learning algorithm to automatically
discover motor groups that can be decoupled from each other while preserving
essential connections and then learn a decentralized motor control policy. Our
method improves the robustness and generalization of the policy without
sacrificing performance. Experiments on quadruped and humanoid robots
demonstrate that the learned policy is robust against local motor malfunctions
and can be transferred to new tasks.
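To make the decoupling idea concrete, below is a minimal sketch of a decentralized policy with one small sub-network per motor group. It assumes a fixed, hand-specified grouping; DEMOS discovers the grouping automatically and preserves essential cross-group connections, and all layer sizes, observation slices, and names here are illustrative rather than the paper's implementation.

```python
import torch
import torch.nn as nn

class DecentralizedPolicy(nn.Module):
    """Sketch of a decentralized motor policy: one sub-network per motor
    group, each acting only on that group's local observations.
    (Illustrative only; DEMOS additionally discovers the grouping and
    preserves essential cross-group connections.)"""

    def __init__(self, obs_slices, act_dims, hidden=64):
        # obs_slices: list of (start, end) index ranges into the full
        #             observation vector, one per motor group
        # act_dims:   number of actuators in each group
        super().__init__()
        self.obs_slices = obs_slices
        self.heads = nn.ModuleList(
            nn.Sequential(
                nn.Linear(end - start, hidden),
                nn.Tanh(),
                nn.Linear(hidden, act_dim),
            )
            for (start, end), act_dim in zip(obs_slices, act_dims)
        )

    def forward(self, obs):
        # Each group sees only its own slice, so an out-of-distribution
        # reading in one group cannot perturb the other groups' outputs,
        # unlike a single centralized network over concatenated inputs.
        actions = [
            head(obs[..., start:end])
            for head, (start, end) in zip(self.heads, self.obs_slices)
        ]
        return torch.cat(actions, dim=-1)

# Hypothetical quadruped: 4 legs, 3 actuators each, 12-dim obs per leg.
policy = DecentralizedPolicy(
    obs_slices=[(0, 12), (12, 24), (24, 36), (36, 48)],
    act_dims=[3, 3, 3, 3],
)
action = policy(torch.randn(48))  # -> 12-dim joint command
```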
Related papers
- Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z)
- LPAC: Learnable Perception-Action-Communication Loops with Applications to Coverage Control [80.86089324742024]
We propose a learnable Perception-Action-Communication (LPAC) architecture for the coverage control problem.
A convolutional neural network (CNN) processes each robot's localized perception, while a graph neural network (GNN) facilitates communication between robots (a toy version of this loop is sketched after this entry).
Evaluations show that the LPAC models outperform standard decentralized and centralized coverage control algorithms.
arXiv Detail & Related papers (2024-01-10T00:08:00Z)
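A minimal sketch of the perception-action-communication loop described in the LPAC entry above, assuming one CNN encoder shared across robots and a single round of mean-aggregation message passing; all shapes, layer sizes, and the 2-D velocity output are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class LPACStep(nn.Module):
    """Sketch of one perception-action-communication loop (shapes and
    layer sizes are illustrative, not the paper's architecture)."""

    def __init__(self, feat=32):
        super().__init__()
        # Perception: CNN over each robot's local coverage/occupancy map.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 4 * 4, feat),
        )
        # Communication: one round of mean-aggregation message passing.
        self.msg = nn.Linear(feat, feat)
        # Action: decode own + neighbor features into a velocity command.
        self.act = nn.Linear(2 * feat, 2)

    def forward(self, local_maps, adj):
        # local_maps: (N, 1, H, W) per-robot observations
        # adj:        (N, N) 0/1 communication graph (who hears whom)
        h = self.cnn(local_maps)                       # (N, feat)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neighbors = (adj @ self.msg(h)) / deg          # mean over neighbors
        return self.act(torch.cat([h, neighbors], dim=-1))  # (N, 2)

robots, side = 5, 16
actions = LPACStep()(torch.randn(robots, 1, side, side),
                     torch.ones(robots, robots))
```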
- Stabilizing Contrastive RL: Techniques for Robotic Goal Reaching from Offline Data [101.43350024175157]
Self-supervised learning has the potential to decrease the amount of human annotation and engineering effort required to learn control strategies.
Our work builds on prior work showing that reinforcement learning (RL) itself can be cast as a self-supervised problem (a toy contrastive objective is sketched after this entry).
We demonstrate that a self-supervised RL algorithm based on contrastive learning can solve real-world, image-based robotic manipulation tasks.
arXiv Detail & Related papers (2023-06-06T01:36:56Z)
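A toy version of the contrastive objective this line of work builds on: (state, action) embeddings are scored against future/goal-state embeddings, with matching pairs as positives. The plain InfoNCE form, the encoders, and the dimensions are assumptions for illustration, not the paper's exact critic.

```python
import torch
import torch.nn.functional as F

def contrastive_rl_loss(sa_embed, goal_embed):
    """InfoNCE-style critic loss: row i's (state, action) embedding should
    score highest against its own future/goal embedding (toy sketch)."""
    # sa_embed, goal_embed: (B, D) outputs of two encoder networks
    logits = sa_embed @ goal_embed.T          # (B, B) similarity matrix
    labels = torch.arange(logits.size(0))     # positives on the diagonal
    return F.cross_entropy(logits, labels)

# Toy batch: 8 transitions, 16-dim embeddings from hypothetical encoders.
sa = torch.randn(8, 16, requires_grad=True)
goals = torch.randn(8, 16, requires_grad=True)
loss = contrastive_rl_loss(sa, goals)
loss.backward()  # would update both encoders in a real training loop
```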
- FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human interventions, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas that impede the robot's motion. Over the course of training, they approach the performance of a human driver using a similar first-person interface.
arXiv Detail & Related papers (2023-04-19T17:33:47Z)
- Low-Rank Modular Reinforcement Learning via Muscle Synergy [25.120547719120765]
Modular Reinforcement Learning (RL) decentralizes the control of multi-joint robots by learning policies for each actuator.
We propose a Synergy-Oriented LeARning (SOLAR) framework that exploits the redundancy in a robot's degrees of freedom; a low-rank sketch of the synergy idea follows this entry.
arXiv Detail & Related papers (2022-10-26T16:01:31Z)
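The synergy idea can be sketched as low-rank control: the policy acts in a small synergy space and a learned basis expands those activations to every actuator. The module below is a hypothetical illustration, not the paper's SOLAR formulation.

```python
import torch
import torch.nn as nn

class SynergyPolicy(nn.Module):
    """Sketch of muscle-synergy-style low-rank control: the policy acts in
    a small synergy space and a learned basis expands it to all actuators
    (illustrative; not the paper's exact SOLAR formulation)."""

    def __init__(self, obs_dim, num_actuators, num_synergies=4):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(),
            nn.Linear(64, num_synergies),   # low-dimensional activations
        )
        # Shared synergy basis of rank num_synergies << num_actuators.
        self.basis = nn.Parameter(torch.randn(num_synergies, num_actuators) * 0.1)

    def forward(self, obs):
        z = self.trunk(obs)     # (..., num_synergies)
        return z @ self.basis   # (..., num_actuators) joint commands

# Hypothetical 17-DoF humanoid driven by 4 synergies.
policy = SynergyPolicy(obs_dim=48, num_actuators=17)
action = policy(torch.randn(48))
```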
- Human-AI Shared Control via Frequency-based Policy Dissection [34.0399894373716]
Human-AI shared control allows humans to interact and collaborate with AI to accomplish control tasks in complex environments.
Previous Reinforcement Learning (RL) methods adopt goal-conditioned designs to achieve human-controllable policies.
We develop a simple yet effective frequency-based approach, called Policy Dissection, to align the intermediate representation of the learned neural controller with the kinematic attributes of the agent's behavior (a toy frequency-matching sketch follows this entry).
arXiv Detail & Related papers (2022-05-31T23:57:55Z)
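A toy rendering of the frequency-based idea: record hidden-unit activations over a rollout, take an FFT per unit, and flag units whose dominant frequency matches that of a kinematic signal. The synthetic rollout, threshold, and signals below are assumptions for illustration, not the paper's full procedure.

```python
import numpy as np

def dominant_frequency(signal, dt):
    """Return the strongest nonzero frequency in a 1-D signal via FFT."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    return freqs[1:][np.argmax(spectrum[1:])]  # skip the DC component

def match_units_to_behavior(activations, kinematic, dt=0.02, tol=0.1):
    """Toy frequency-based dissection: flag hidden units whose dominant
    activation frequency matches a kinematic attribute's frequency."""
    target = dominant_frequency(kinematic, dt)
    return [u for u in range(activations.shape[1])
            if abs(dominant_frequency(activations[:, u], dt) - target) < tol]

# Synthetic rollout: 500 steps, 32 hidden units, one tracking a 1 Hz gait.
steps, dt = 500, 0.02
t = np.arange(steps) * dt
acts = np.random.randn(steps, 32) * 0.1
acts[:, 7] += np.sin(2 * np.pi * 1.0 * t)   # unit 7 oscillates at 1 Hz
gait = np.sin(2 * np.pi * 1.0 * t)          # e.g., a knee-joint angle
print(match_units_to_behavior(acts, gait, dt=dt))  # -> likely includes 7
```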
- Neural Dynamic Policies for End-to-End Sensorimotor Learning [51.24542903398335]
The current dominant paradigm in sensorimotor control, whether imitation or reinforcement learning, is to train policies directly in raw action spaces.
We propose Neural Dynamic Policies (NDPs) that make predictions in trajectory distribution space.
NDPs outperform the prior state of the art in terms of either efficiency or performance across several robotic control tasks (a toy attractor-based rollout is sketched after this entry).
arXiv Detail & Related papers (2020-12-04T18:59:32Z)
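A toy analogue of predicting in trajectory space: the network outputs parameters of a second-order attractor (a goal and a stiffness), which is unrolled into a smooth trajectory instead of emitting raw per-step actions. Real NDPs embed dynamic movement primitives; the simplified dynamics and all dimensions here are assumptions.

```python
import torch
import torch.nn as nn

class NeuralDynamicPolicy(nn.Module):
    """Sketch of an NDP-style policy: instead of emitting raw actions, the
    network predicts parameters of a simple second-order dynamical system
    (a spring-damper attractor), which is then unrolled into a smooth
    trajectory. Illustrative; real NDPs use dynamic movement primitives."""

    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Linear(obs_dim, act_dim + 1)  # goal + log-stiffness
        self.act_dim = act_dim

    def forward(self, obs, y0, steps=50, dt=0.05):
        params = self.net(obs)
        goal = params[: self.act_dim]
        k = params[self.act_dim].exp()   # stiffness > 0
        d = 2 * k.sqrt()                 # critical damping, unit mass
        y, v, traj = y0.clone(), torch.zeros_like(y0), []
        for _ in range(steps):           # unroll the attractor dynamics
            a = k * (goal - y) - d * v
            v = v + a * dt
            y = y + v * dt
            traj.append(y)
        return torch.stack(traj)         # (steps, act_dim) waypoints

policy = NeuralDynamicPolicy(obs_dim=10, act_dim=7)  # e.g., a 7-DoF arm
trajectory = policy(torch.randn(10), y0=torch.zeros(7))
```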
- Towards General and Autonomous Learning of Core Skills: A Case Study in Locomotion [19.285099263193622]
We develop a learning framework that can learn sophisticated locomotion behavior for a wide spectrum of legged robots.
Our learning framework relies on a data-efficient, off-policy multi-task RL algorithm and a small set of reward functions that are semantically identical across robots.
For nine different types of robots, including a real-world quadruped robot, we demonstrate that the same algorithm can rapidly learn diverse and reusable locomotion skills.
arXiv Detail & Related papers (2020-08-06T08:23:55Z)
- AirCapRL: Autonomous Aerial Human Motion Capture using Deep Reinforcement Learning [38.429105809093116]
We introduce a deep reinforcement learning (RL) based multi-robot formation controller for the task of autonomous aerial human motion capture (MoCap).
We focus on vision-based MoCap, where the objective is to estimate the trajectory of the body pose and shape of a single moving person using multiple aerial vehicles.
arXiv Detail & Related papers (2020-07-13T12:30:31Z)
- Decentralized Deep Reinforcement Learning for a Distributed and Adaptive Locomotion Controller of a Hexapod Robot [0.6193838300896449]
We propose a decentralized organization, as found in insect motor control, for the coordination of the different legs.
A concurrent local structure is able to learn better walking behavior.
arXiv Detail & Related papers (2020-05-21T11:40:37Z)
- Learning to Walk in the Real World with Minimal Human Effort [80.7342153519654]
We develop a system for learning legged locomotion policies with deep RL in the real world with minimal human effort.
Our system can automatically and efficiently learn locomotion skills on a Minitaur robot with little human intervention.
arXiv Detail & Related papers (2020-02-20T03:36:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.