Natural and Robust Walking using Reinforcement Learning without
Demonstrations in High-Dimensional Musculoskeletal Models
- URL: http://arxiv.org/abs/2309.02976v2
- Date: Thu, 7 Sep 2023 15:23:29 GMT
- Title: Natural and Robust Walking using Reinforcement Learning without
Demonstrations in High-Dimensional Musculoskeletal Models
- Authors: Pierre Schumacher, Thomas Geijtenbeek, Vittorio Caggiano, Vikash
Kumar, Syn Schmitt, Georg Martius, Daniel F. B. Haeufle
- Abstract summary: Humans excel at robust bipedal walking in complex natural environments.
It is still not fully understood how the nervous system resolves the musculoskeletal redundancy to solve the multi-objective control problem.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans excel at robust bipedal walking in complex natural environments. In
each step, they adequately tune the interaction of biomechanical muscle
dynamics and neuronal signals to be robust against uncertainties in ground
conditions. However, it is still not fully understood how the nervous system
resolves the musculoskeletal redundancy to solve the multi-objective control
problem considering stability, robustness, and energy efficiency. In computer
simulations, energy minimization has been shown to be a successful optimization
target, reproducing natural walking with trajectory optimization or
reflex-based control methods. However, these methods focus on particular
motions at a time and the resulting controllers are limited when compensating
for perturbations. In robotics, reinforcement learning~(RL) methods recently
achieved highly stable (and efficient) locomotion on quadruped systems, but the
generation of human-like walking with bipedal biomechanical models has required
extensive use of expert data sets. This strong reliance on demonstrations often
results in brittle policies and limits the application to new behaviors,
especially considering the potential variety of movements for high-dimensional
musculoskeletal models in 3D. Achieving natural locomotion with RL without
sacrificing its incredible robustness might pave the way for a novel approach
to studying human walking in complex natural environments. Videos:
https://sites.google.com/view/naturalwalkingrl
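The multi-objective trade-off described in the abstract (stability, robustness, energy efficiency) is commonly expressed as a scalar RL reward. A minimal sketch, assuming a hypothetical reward that combines velocity tracking with a quadratic muscle-effort penalty (the weights and terms are illustrative, not the paper's exact formulation):

```python
def walking_reward(forward_vel, target_vel, muscle_acts, w_vel=1.0, w_eff=0.1):
    """Illustrative RL reward for musculoskeletal walking.

    forward_vel: current forward velocity of the pelvis (m/s)
    target_vel:  desired walking speed (m/s)
    muscle_acts: iterable of muscle activations in [0, 1]

    The quadratic effort term is a common proxy for metabolic energy;
    both weights are assumptions for illustration.
    """
    vel_term = -w_vel * (forward_vel - target_vel) ** 2
    effort_term = -w_eff * sum(a * a for a in muscle_acts)
    return vel_term + effort_term
```

The reward is maximal (zero) when the target speed is matched with no muscle effort; any speed error or activation cost drives it negative.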
Related papers
- Brain-Body-Task Co-Adaptation can Improve Autonomous Learning and Speed
of Bipedal Walking [0.0]
Inspired by animals that co-adapt their brain and body to interact with the environment, we present a tendon-driven and over-actuated bipedal robot.
We show how learning can be driven by continual physical adaptation rooted in the backdrivable properties of the plant.
arXiv Detail & Related papers (2024-02-04T07:57:52Z)
- Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z)
- DiffuseBot: Breeding Soft Robots With Physics-Augmented Generative Diffusion Models [102.13968267347553]
We present DiffuseBot, a physics-augmented diffusion model that generates soft robot morphologies capable of excelling in a wide spectrum of tasks.
We showcase a range of simulated and fabricated robots along with their capabilities.
arXiv Detail & Related papers (2023-11-28T18:58:48Z)
- Persistent-Transient Duality: A Multi-mechanism Approach for Modeling Human-Object Interaction [58.67761673662716]
Humans are highly adaptable, swiftly switching between different modes to handle different tasks, situations and contexts.
In Human-object interaction (HOI) activities, these modes can be attributed to two mechanisms: (1) the large-scale consistent plan for the whole activity and (2) the small-scale children interactive actions that start and end along the timeline.
This work proposes to model two concurrent mechanisms that jointly control human motion.
arXiv Detail & Related papers (2023-07-24T12:21:33Z)
- Skeleton2Humanoid: Animating Simulated Characters for Physically-plausible Motion In-betweening [59.88594294676711]
Modern deep learning based motion synthesis approaches barely consider the physical plausibility of synthesized motions.
We propose a system, "Skeleton2Humanoid", which performs physics-oriented motion correction at test time.
Experiments on the challenging LaFAN1 dataset show our system can outperform prior methods significantly in terms of both physical plausibility and accuracy.
arXiv Detail & Related papers (2022-10-09T16:15:34Z)
- An Adaptable Approach to Learn Realistic Legged Locomotion without Examples [38.81854337592694]
This work proposes a generic approach for ensuring realism in locomotion by guiding the learning process with the spring-loaded inverted pendulum model as a reference.
We present experimental results showing that even in a model-free setup, the learned policies can generate realistic and energy-efficient locomotion gaits for a bipedal and a quadrupedal robot.
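The spring-loaded inverted pendulum (SLIP) used as a reference above reduces stance-phase leg dynamics to a point mass on a massless spring leg. A minimal explicit-Euler sketch of SLIP stance dynamics, with the foot pinned at the origin (all parameter values are illustrative assumptions):

```python
import math

def slip_stance_step(x, z, vx, vz, m=80.0, k=20000.0, L0=1.0, g=9.81, dt=1e-4):
    """One Euler step of SLIP stance dynamics (foot at the origin).

    (x, z): body position relative to the foot; (vx, vz): body velocity.
    m, k, L0: body mass, leg stiffness, leg rest length (illustrative values).
    """
    L = math.hypot(x, z)          # current leg length
    F = k * (L0 - L)              # spring force; positive when leg is compressed
    ax = F * (x / L) / m          # force acts along the leg axis
    az = F * (z / L) / m - g
    return x + vx * dt, z + vz * dt, vx + ax * dt, vz + az * dt
```

With a compressed leg (L < L0), the spring accelerates the body away from the foot, which is what produces the characteristic bouncing gait when integrated over a full stance phase.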
arXiv Detail & Related papers (2021-10-28T10:14:47Z)
- Reinforcement Learning for Robust Parameterized Locomotion Control of Bipedal Robots [121.42930679076574]
We present a model-free reinforcement learning framework for training robust locomotion policies in simulation.
Domain randomization is used to encourage the policies to learn behaviors that are robust across variations in system dynamics.
We demonstrate this on versatile walking behaviors such as tracking a target walking velocity, walking height, and turning yaw.
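Domain randomization, as used above, resamples simulator parameters at the start of each episode so the policy cannot overfit to a single dynamics model. A hypothetical sketch (parameter names and ranges are illustrative assumptions, not the paper's values):

```python
import random

def randomize_dynamics(seed=None):
    """Sample per-episode dynamics perturbations (illustrative ranges)."""
    rng = random.Random(seed)
    return {
        "mass_scale":     rng.uniform(0.8, 1.2),   # scale link masses by +/-20%
        "friction":       rng.uniform(0.5, 1.5),   # ground friction coefficient
        "motor_strength": rng.uniform(0.9, 1.1),   # actuator torque scaling
        "obs_latency_s":  rng.uniform(0.0, 0.02),  # sensor delay in seconds
    }
```

Each episode the simulator would be reconfigured with a fresh sample, forcing the learned policy to succeed across the whole parameter distribution rather than at one nominal setting.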
arXiv Detail & Related papers (2021-03-26T07:14:01Z)
- Reinforcement Learning Control of a Biomechanical Model of the Upper Extremity [0.0]
We learn a control policy using a motor babbling approach as implemented in reinforcement learning.
We use a state-of-the-art biomechanical model, which includes seven actuated degrees of freedom.
To deal with the curse of dimensionality, we use a simplified second-order muscle model, acting at each degree of freedom instead of individual muscles.
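A simplified second-order muscle model of the kind mentioned here can be sketched as a critically damped second-order filter from neural excitation to activation at each degree of freedom (the natural frequency, damping, and integration scheme are illustrative assumptions, not the paper's model):

```python
def second_order_actuator_step(a, da, u, omega=25.0, zeta=1.0, dt=0.002):
    """One Euler step of a 2nd-order excitation-to-activation filter.

    a, da: activation and its rate; u: neural excitation in [0, 1].
    omega: natural frequency (rad/s), zeta=1.0 gives critical damping.
    """
    dda = omega ** 2 * (u - a) - 2.0 * zeta * omega * da
    return a + da * dt, da + dda * dt
```

Applied per degree of freedom, this captures the low-pass lag of muscle activation dynamics without modeling individual muscles, which is the dimensionality reduction described above.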
arXiv Detail & Related papers (2020-11-13T19:49:29Z)
- Learning Quadrupedal Locomotion over Challenging Terrain [68.51539602703662]
Legged locomotion can dramatically expand the operational domains of robotics.
Conventional controllers for legged locomotion are based on elaborate state machines that explicitly trigger the execution of motion primitives and reflexes.
Here we present a radically robust controller for legged locomotion in challenging natural environments.
arXiv Detail & Related papers (2020-10-21T19:11:20Z)
- Control for Multifunctionality: Bioinspired Control Based on Feeding in Aplysia californica [0.3277163122167433]
We develop a hybrid Boolean model framework capable of modeling neural bursting activity and simple biomechanics at speeds faster than real time.
We present a multifunctional model of Aplysia californica feeding that qualitatively reproduces three key feeding behaviors.
We demonstrate that the model can be used for formulating testable hypotheses and discuss the implications of this approach for robotic control and neuroscience.
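Boolean network models of the kind described treat each neuron as an on/off node updated by logic rules, and oscillatory (bursting-like) activity can emerge from inhibitory loops. A toy sketch of a three-node mutual-inhibition ring (purely illustrative, not the paper's hybrid model):

```python
def boolean_ring_step(state):
    """Synchronous update of a 3-node Boolean inhibition ring.

    Each node turns on only when the node inhibiting it is off,
    yielding a sustained oscillation from a simple logic rule.
    """
    a, b, c = state
    return (not c, not a, not b)
```

Iterating this rule from an asymmetric start produces a period-6 limit cycle, a minimal analogue of rhythmic pattern generation.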
arXiv Detail & Related papers (2020-08-11T19:26:50Z)
- Reinforcement Learning of Musculoskeletal Control from Functional Simulations [3.94716580540538]
In this work, a deep reinforcement learning (DRL) based inverse dynamics controller is trained to control muscle activations of a biomechanical model of the human shoulder.
Results are presented for a single-axis motion control of shoulder abduction for the task of following randomly generated angular trajectories.
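Randomly generated angular target trajectories like those used here are often built as band-limited sums of sinusoids, which are smooth and easy to bound. A hypothetical generator (all parameters and the construction are illustrative assumptions):

```python
import math
import random

def random_angle_trajectory(duration_s=2.0, dt=0.01, n_modes=3,
                            max_amp_rad=0.5, seed=0):
    """Smooth random angular target as a sum of sinusoids.

    Each mode's amplitude is capped at max_amp_rad / n_modes, so the
    summed trajectory stays within +/- max_amp_rad.
    """
    rng = random.Random(seed)
    modes = [(rng.uniform(0.0, max_amp_rad / n_modes),  # amplitude (rad)
              rng.uniform(0.2, 2.0),                    # frequency (Hz)
              rng.uniform(0.0, 2.0 * math.pi))          # phase (rad)
             for _ in range(n_modes)]
    n = round(duration_s / dt)
    return [sum(A * math.sin(2.0 * math.pi * f * i * dt + p)
                for A, f, p in modes)
            for i in range(n)]
```

A tracking reward can then penalize the deviation between the model's joint angle and this target at each timestep.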
arXiv Detail & Related papers (2020-07-13T20:20:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.