Decentralized Deep Reinforcement Learning for a Distributed and Adaptive
Locomotion Controller of a Hexapod Robot
- URL: http://arxiv.org/abs/2005.11164v1
- Date: Thu, 21 May 2020 11:40:37 GMT
- Title: Decentralized Deep Reinforcement Learning for a Distributed and Adaptive
Locomotion Controller of a Hexapod Robot
- Authors: Malte Schilling, Kai Konen, Frank W. Ohl, Timo Korthals
- Abstract summary: We propose a decentralized organization, as found in insect motor control, for coordinating the different legs.
Such a concurrent, local structure learns better walking behavior and is learned faster than a holistic approach.
- Score: 0.6193838300896449
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Locomotion is a prime example of adaptive behavior in animals, and biological
control principles have inspired control architectures for legged robots. While
machine learning has been successfully applied to many tasks in recent years,
Deep Reinforcement Learning approaches still struggle when applied to real-world
robots in continuous control tasks and, in particular, rarely yield robust
solutions that handle uncertainties well. There is therefore renewed interest in
incorporating biological principles into such learning architectures. While
inducing a hierarchical organization, as found in motor control, has already
shown some success, we here propose a decentralized organization, as found in
insect motor control, for coordinating the different legs. A decentralized and
distributed architecture is introduced on a simulated hexapod robot, and the
details of the controller are learned through Deep Reinforcement Learning. We
first show that such a concurrent, local structure is able to learn better
walking behavior. Second, we show that this simpler organization is learned
faster than comparable holistic approaches.
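To make the decentralized organization concrete, the following is a minimal sketch (in Python, not the authors' code) of the idea described in the abstract: one small, local policy per leg that only sees leg-local signals, with all six policies acting concurrently. The observation slicing, network sizes, and the untrained random weights are illustrative assumptions; in the paper the controller details are learned through Deep Reinforcement Learning.

import numpy as np

NUM_LEGS = 6
LOCAL_OBS_DIM = 8   # assumption: local joint angles/velocities plus a few body signals
ACTION_DIM = 3      # assumption: three joint targets per leg
HIDDEN = 32

rng = np.random.default_rng(0)

class LegController:
    """One local policy; in the paper such weights are trained with Deep RL rather than left random."""
    def __init__(self):
        self.w1 = rng.normal(0.0, 0.1, (LOCAL_OBS_DIM, HIDDEN))
        self.w2 = rng.normal(0.0, 0.1, (HIDDEN, ACTION_DIM))

    def act(self, local_obs):
        h = np.tanh(local_obs @ self.w1)
        return np.tanh(h @ self.w2)  # joint targets in [-1, 1]

controllers = [LegController() for _ in range(NUM_LEGS)]

def decentralized_step(full_obs):
    """Split the global observation into per-leg slices and query each local controller.
    The slicing scheme is an assumption; the paper defines which local signals each leg receives."""
    actions = [controllers[i].act(full_obs[i * LOCAL_OBS_DIM:(i + 1) * LOCAL_OBS_DIM])
               for i in range(NUM_LEGS)]
    return np.concatenate(actions)  # 18-dimensional action for the whole hexapod

obs = rng.normal(size=NUM_LEGS * LOCAL_OBS_DIM)  # dummy observation for a quick check
print(decentralized_step(obs).shape)             # (18,)

Each controller sees only a local slice of the state, in contrast to a holistic policy that maps the full observation to all joints at once.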
Related papers
- One Policy to Run Them All: an End-to-end Learning Approach to Multi-Embodiment Locomotion [18.556470359899855]
We introduce URMA, the Unified Robot Morphology Architecture.
Our framework brings the end-to-end Multi-Task Reinforcement Learning approach to the realm of legged robots.
We show that URMA can learn a single locomotion policy across multiple embodiments that transfers easily to unseen robot platforms.
arXiv Detail & Related papers (2024-09-10T09:44:15Z)
- Hierarchical learning control for autonomous robots inspired by central nervous system [7.227887302864789]
We propose a novel hierarchical learning control framework by mimicking the hierarchical structure of the central nervous system.
The framework combines the active and passive control systems to improve both the flexibility and reliability of the control system.
This study reveals the principle governing autonomous behavior in the central nervous system and demonstrates the effectiveness of the hierarchical control approach.
arXiv Detail & Related papers (2024-08-07T03:24:59Z)
- Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z)
- Decentralized Motor Skill Learning for Complex Robotic Systems [5.669790037378093]
We propose a Decentralized motor skill (DEMOS) learning algorithm to automatically discover motor groups that can be decoupled from each other.
Our method improves the robustness and generalization of the policy without sacrificing performance.
Experiments on quadruped and humanoid robots demonstrate that the learned policy is robust against local motor malfunctions and can be transferred to new tasks.
arXiv Detail & Related papers (2023-06-30T05:55:34Z)
- Learning and Adapting Agile Locomotion Skills by Transferring Experience [71.8926510772552]
We propose a framework for training complex robotic skills by transferring experience from existing controllers to jumpstart learning new tasks.
We show that our method enables learning complex agile jumping behaviors, navigating to goal locations while walking on hind legs, and adapting to new environments.
arXiv Detail & Related papers (2023-04-19T17:37:54Z)
- Hierarchical Decentralized Deep Reinforcement Learning Architecture for a Simulated Four-Legged Agent [0.0]
In nature, control of movement happens in a hierarchical and decentralized fashion.
We present a novel decentralized, hierarchical architecture to control a simulated legged agent.
arXiv Detail & Related papers (2022-09-21T07:55:33Z)
- A Walk in the Park: Learning to Walk in 20 Minutes With Model-Free Reinforcement Learning [86.06110576808824]
Deep reinforcement learning is a promising approach to learning policies in uncontrolled environments.
Recent advances in machine learning algorithms and libraries, combined with a carefully tuned robot controller, lead to learning quadrupedal locomotion in only 20 minutes in the real world.
arXiv Detail & Related papers (2022-08-16T17:37:36Z)
- Versatile modular neural locomotion control with fast learning [6.85316573653194]
Legged robots have significant potential to operate in highly unstructured environments.
Currently, controllers must be either manually designed for specific robots or automatically designed via machine learning methods.
We propose a simple yet versatile modular neural control structure with fast learning.
arXiv Detail & Related papers (2021-07-16T12:12:28Z)
- Neural Dynamic Policies for End-to-End Sensorimotor Learning [51.24542903398335]
The current dominant paradigm in sensorimotor control, whether imitation or reinforcement learning, is to train policies directly in raw action spaces.
We propose Neural Dynamic Policies (NDPs) that make predictions in trajectory distribution space.
NDPs outperform the prior state-of-the-art in terms of either efficiency or performance across several robotic control tasks.
arXiv Detail & Related papers (2020-12-04T18:59:32Z)
- Thinking While Moving: Deep Reinforcement Learning with Concurrent Control [122.49572467292293]
We study reinforcement learning in settings where sampling an action from the policy must be done concurrently with the time evolution of the controlled system.
Much like a person or an animal, the robot must think and move at the same time, deciding on its next action before the previous one has completed.
arXiv Detail & Related papers (2020-04-13T17:49:29Z)
- Learning Agile Robotic Locomotion Skills by Imitating Animals [72.36395376558984]
Reproducing the diverse and agile locomotion skills of animals has been a longstanding challenge in robotics.
We present an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals.
arXiv Detail & Related papers (2020-04-02T02:56:16Z)
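Relating to the motion-imitation entry directly above, the following is a minimal, hypothetical sketch of a pose-tracking reward of the kind commonly used when legged robots learn by imitating reference motion clips. The exponential form and the scale constant are illustrative assumptions, not the cited paper's exact reward.

import numpy as np

def tracking_reward(robot_joint_angles, reference_joint_angles, scale=5.0):
    """Exponentiated negative squared joint-angle error between the robot pose and the reference pose."""
    error = np.sum((np.asarray(robot_joint_angles) - np.asarray(reference_joint_angles)) ** 2)
    return float(np.exp(-scale * error))

reference = np.zeros(12)    # dummy reference pose for a 12-joint quadruped
robot = 0.05 * np.ones(12)  # slightly off the reference
print(tracking_reward(robot, reference))  # approaches 1.0 as the robot matches the reference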
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.