Exciting Action: Investigating Efficient Exploration for Learning Musculoskeletal Humanoid Locomotion
- URL: http://arxiv.org/abs/2407.11658v1
- Date: Tue, 16 Jul 2024 12:27:55 GMT
- Title: Exciting Action: Investigating Efficient Exploration for Learning Musculoskeletal Humanoid Locomotion
- Authors: Henri-Jacques Geiß, Firas Al-Hafez, Andre Seyfarth, Jan Peters, Davide Tateo
- Abstract summary: We demonstrate that adversarial imitation learning can address this issue by analyzing key problems and providing solutions.
We validate our methodology by learning walking and running gaits on a simulated humanoid model with 16 degrees of freedom and 92 Muscle-Tendon Units.
- Score: 16.63152794060493
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Learning a locomotion controller for a musculoskeletal system is challenging due to over-actuation and high-dimensional action space. While many reinforcement learning methods attempt to address this issue, they often struggle to learn human-like gaits because of the complexity involved in engineering an effective reward function. In this paper, we demonstrate that adversarial imitation learning can address this issue by analyzing key problems and providing solutions using both current literature and novel techniques. We validate our methodology by learning walking and running gaits on a simulated humanoid model with 16 degrees of freedom and 92 Muscle-Tendon Units, achieving natural-looking gaits with only a few demonstrations.
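The abstract's core idea, adversarial imitation learning, can be sketched in a few lines: a discriminator is trained to separate expert state-action pairs from the policy's own, and its confusion is recycled as a reward for the policy. The sketch below is a minimal toy illustration with a hypothetical linear discriminator and synthetic data, not the paper's actual model or training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator_logit(sa, w, b):
    """Linear discriminator scoring state-action pairs (expert vs. policy)."""
    return sa @ w + b

def imitation_reward(sa, w, b):
    """GAIL-style reward -log(1 - sigmoid(logit)) = softplus(logit):
    high when the discriminator mistakes a sample for expert data."""
    return np.logaddexp(0.0, discriminator_logit(sa, w, b))

# Toy data: expert pairs cluster near +1, policy pairs near -1.
dim = 4
expert = rng.normal(loc=1.0, scale=0.1, size=(64, dim))
policy = rng.normal(loc=-1.0, scale=0.1, size=(64, dim))

# Gradient steps on the binary cross-entropy loss
# (expert labeled 1, policy labeled 0).
w, b = np.zeros(dim), 0.0
for _ in range(200):
    logits_e = discriminator_logit(expert, w, b)
    logits_p = discriminator_logit(policy, w, b)
    grad_e = 1.0 / (1.0 + np.exp(logits_e))    # = 1 - sigmoid(logit_e)
    grad_p = -1.0 / (1.0 + np.exp(-logits_p))  # = -sigmoid(logit_p)
    w += 0.1 * (expert.T @ grad_e + policy.T @ grad_p) / 64
    b += 0.1 * (grad_e.sum() + grad_p.sum()) / 64

# Expert-like pairs now receive a higher imitation reward; an RL algorithm
# maximizing this reward is pushed toward the demonstrated behavior.
r_expert = imitation_reward(expert, w, b).mean()
r_policy = imitation_reward(policy, w, b).mean()
```

In the actual method, the discriminator is a neural network updated alternately with the policy; the toy data and linear model here only serve to make the reward mechanism concrete.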
Related papers
- Self-Explainable Affordance Learning with Embodied Caption [63.88435741872204]
We introduce Self-Explainable Affordance learning (SEA) with embodied caption.
SEA enables robots to articulate their intentions and bridge the gap between explainable vision-language caption and visual affordance learning.
We propose a novel model to effectively combine affordance grounding with self-explanation in a simple but efficient manner.
arXiv Detail & Related papers (2024-04-08T15:22:38Z)
- Infer and Adapt: Bipedal Locomotion Reward Learning from Demonstrations via Inverse Reinforcement Learning [5.246548532908499]
This paper brings state-of-the-art Inverse Reinforcement Learning (IRL) techniques to solving bipedal locomotion problems over complex terrains.
We propose algorithms for learning expert reward functions, and we subsequently analyze the learned functions.
We empirically demonstrate that training a bipedal locomotion policy with the inferred reward functions enhances its walking performance on unseen terrains.
arXiv Detail & Related papers (2023-09-28T00:11:06Z)
- Leveraging Sequentiality in Reinforcement Learning from a Single Demonstration [68.94506047556412]
We propose to leverage a sequential bias to learn control policies for complex robotic tasks using a single demonstration.
We show that DCIL-II can solve with unprecedented sample efficiency some challenging simulated tasks such as humanoid locomotion and stand-up.
arXiv Detail & Related papers (2022-11-09T10:28:40Z)
- Learning Agile Skills via Adversarial Imitation of Rough Partial Demonstrations [19.257876507104868]
Learning agile skills is one of the main challenges in robotics.
We propose a generative adversarial method for inferring reward functions from partial and potentially physically incompatible demonstrations.
We show that by using a Wasserstein GAN formulation and transitions from demonstrations with rough and partial information as input, we are able to extract policies that are robust and capable of imitating demonstrated behaviors.
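The Wasserstein-GAN formulation mentioned above can be sketched with a toy example: a critic is trained to assign higher scores to demonstrated transitions than to policy rollouts, and the critic's score of the policy's own transitions serves as a style reward. Everything below (linear critic, synthetic transitions, weight clipping as a crude Lipschitz constraint) is an assumption for illustration, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def critic(x, w):
    """Linear Wasserstein critic over transitions (s, s') stacked as vectors."""
    return x @ w

# Hypothetical data: partial-demo transitions near +1, rollouts near -1.
dim = 6
demo = rng.normal(1.0, 0.2, size=(128, dim))
rollout = rng.normal(-1.0, 0.2, size=(128, dim))

w = np.zeros(dim)
for _ in range(100):
    # Maximize E_demo[f] - E_rollout[f] ...
    grad = demo.mean(axis=0) - rollout.mean(axis=0)
    w += 0.05 * grad
    # ... under weight clipping, the original WGAN's Lipschitz surrogate.
    w = np.clip(w, -0.1, 0.1)

# Reward for RL: the critic's score of the policy's own transitions;
# higher means "closer to the demonstrated motion".
reward = critic(rollout, w)
```

Because only transitions (not actions) enter the critic, rough and physically incompatible demonstrations can still shape the reward, which is the point the abstract makes.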
arXiv Detail & Related papers (2022-06-23T13:34:11Z)
- DEP-RL: Embodied Exploration for Reinforcement Learning in Overactuated and Musculoskeletal Systems [14.295720603503806]
Reinforcement learning on large musculoskeletal models has not matched the performance achieved on simpler, torque-actuated systems.
We conjecture that ineffective exploration in large overactuated action spaces is a key problem.
By integrating DEP into RL, we achieve fast learning of reaching and locomotion in musculoskeletal systems.
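DEP (Differential Extrinsic Plasticity) itself is a self-organizing controller and is more involved than can be shown here; as a minimal stand-in, temporally correlated (Ornstein-Uhlenbeck-style) noise illustrates why structured exploration helps in a high-dimensional muscle space, where independent per-step noise tends to average out. All parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def correlated_noise(n_steps, n_act, theta=0.15, sigma=0.2):
    """Ornstein-Uhlenbeck-style exploration noise: each action dimension
    drifts smoothly over time instead of jumping independently every step."""
    x = np.zeros(n_act)
    out = np.empty((n_steps, n_act))
    for t in range(n_steps):
        x = x - theta * x + sigma * rng.normal(size=n_act)
        out[t] = x
    return out

white = rng.normal(0.0, 0.2, size=(1000, 92))  # uncorrelated baseline
smooth = correlated_noise(1000, 92)            # temporally correlated

def lag1_autocorr(z):
    """Mean lag-1 autocorrelation across action dimensions."""
    z = z - z.mean(axis=0)
    num = (z[:-1] * z[1:]).sum(axis=0)
    den = (z * z).sum(axis=0)
    return float((num / den).mean())
```

The 92-dimensional action space mirrors the Muscle-Tendon Unit count in the main paper; the correlated signal produces sustained muscle excitations that actually move the body, which is the kind of exploration DEP provides in a principled, state-coupled way.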
arXiv Detail & Related papers (2022-05-30T15:52:54Z)
- Bottom-Up Skill Discovery from Unsegmented Demonstrations for Long-Horizon Robot Manipulation [55.31301153979621]
We tackle real-world long-horizon robot manipulation tasks through skill discovery.
We present a bottom-up approach to learning a library of reusable skills from unsegmented demonstrations.
Our method has shown superior performance over state-of-the-art imitation learning methods in multi-stage manipulation tasks.
arXiv Detail & Related papers (2021-09-28T16:18:54Z)
- ALLSTEPS: Curriculum-driven Learning of Stepping Stone Skills [8.406171678292964]
Finding good solutions to stepping-stone locomotion is a longstanding and fundamental challenge for animation and robotics.
We present fully learned solutions to this difficult problem using reinforcement learning.
Results are presented for a simulated human character, a realistic bipedal robot simulation and a monster character, in each case producing robust, plausible motions.
arXiv Detail & Related papers (2020-05-09T00:16:38Z)
- The Ingredients of Real-World Robotic Reinforcement Learning [71.92831985295163]
We discuss the elements that are needed for a robotic learning system that can continually and autonomously improve with data collected in the real world.
We propose a particular instantiation of such a system, using dexterous manipulation as our case study.
We demonstrate that our complete system can learn without any human intervention, acquiring a variety of vision-based skills with a real-world three-fingered hand.
arXiv Detail & Related papers (2020-04-27T03:36:10Z)
- State-Only Imitation Learning for Dexterous Manipulation [63.03621861920732]
In this paper, we explore state-only imitation learning.
We train an inverse dynamics model and use it to predict actions for state-only demonstrations.
Our method performs on par with state-action approaches and considerably outperforms RL alone.
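The inverse-dynamics recipe above can be made concrete with a toy example: fit a model from state changes to actions using the agent's own interaction data, then use it to label a state-only demonstration with inferred actions. The linear system and least-squares model below are assumptions chosen so the sketch stays self-contained; the paper trains a learned inverse dynamics model on real manipulation data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical linear system: s' = s + B a (B known only to the simulator).
B = np.array([[1.0, 0.0], [0.0, 2.0]])

def step(s, a):
    return s + a @ B.T

# Collect (s, a, s') triples from the agent's own interaction ...
s = rng.normal(size=(256, 2))
a = rng.normal(size=(256, 2))
s_next = step(s, a)

# ... and fit an inverse dynamics model a ≈ (s' - s) @ M by least squares.
delta = s_next - s
M, *_ = np.linalg.lstsq(delta, a, rcond=None)

def infer_action(s, s_next):
    """Predict the action that produced a state-only demonstration step."""
    return (s_next - s) @ M

# Label a state-only demo with inferred actions; the (s, a_hat) pairs can
# then be fed to any ordinary state-action imitation learner.
demo_s = rng.normal(size=(10, 2))
demo_a_true = rng.normal(size=(10, 2))
demo_s_next = step(demo_s, demo_a_true)
demo_a_hat = infer_action(demo_s, demo_s_next)
```

In this linear toy case the recovered actions are exact; with real dynamics the inverse model is approximate, which is why the paper reports "on par" rather than identical performance versus state-action imitation.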
arXiv Detail & Related papers (2020-04-07T17:57:20Z)
- Learning Agile Robotic Locomotion Skills by Imitating Animals [72.36395376558984]
Reproducing the diverse and agile locomotion skills of animals has been a longstanding challenge in robotics.
We present an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals.
arXiv Detail & Related papers (2020-04-02T02:56:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.