Coordinating Spinal and Limb Dynamics for Enhanced Sprawling Robot Mobility
- URL: http://arxiv.org/abs/2504.14103v1
- Date: Fri, 18 Apr 2025 23:08:48 GMT
- Title: Coordinating Spinal and Limb Dynamics for Enhanced Sprawling Robot Mobility
- Authors: Merve Atasever, Ali Okhovat, Azhang Nazaripouya, John Nisbet, Omer Kurkutlu, Jyotirmoy V. Deshmukh, Yasemin Ozkan Aydin
- Abstract summary: A flexible spine enables undulation of the body through a wavelike motion along the spine, aiding navigation over uneven terrains and obstacles. Environmental uncertainties, such as surface irregularities and variations in friction, can significantly disrupt body-limb coordination. Deep reinforcement learning offers a promising framework for handling non-deterministic environments. We comparatively examine learning-based control strategies and biologically inspired gait design methods on a salamander-like robot.
- Score: 0.047116288835793156
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Among vertebrates, salamanders, with their unique ability to transition between walking and swimming gaits, highlight the role of spinal mobility in locomotion. A flexible spine enables undulation of the body through a wavelike motion along the spine, aiding navigation over uneven terrains and obstacles. Yet environmental uncertainties, such as surface irregularities and variations in friction, can significantly disrupt body-limb coordination and cause discrepancies between predictions from mathematical models and real-world outcomes. Addressing this challenge requires the development of sophisticated control strategies capable of dynamically adapting to uncertain conditions while maintaining efficient locomotion. Deep reinforcement learning (DRL) offers a promising framework for handling non-deterministic environments and enabling robotic systems to adapt effectively and perform robustly under challenging conditions. In this study, we comparatively examine learning-based control strategies and biologically inspired gait design methods on a salamander-like robot.
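The listing does not specify the controllers, but for salamander-like robots the biologically inspired gait baseline is typically a central pattern generator (CPG): a chain of coupled phase oscillators producing a head-to-tail traveling wave along the spine, with limb oscillators phase-locked to it. The Python sketch below is only a minimal illustration of that idea under assumed parameters (joint count, frequency, coupling gain, amplitude); it is not the controller from the paper, and a learning-based alternative would instead train a policy (e.g., with PPO) to output joint set-points directly from observations.

```python
import numpy as np

# Minimal CPG sketch for spinal undulation (hypothetical parameters, not from the paper).
# A chain of phase oscillators drives the spinal joints with a traveling wave;
# limb oscillators could be phase-locked to this chain in the same manner.

N_SPINE = 8                        # number of spinal joints (assumed)
FREQ_HZ = 1.0                      # oscillation frequency (assumed)
COUPLING = 8.0                     # phase-coupling gain (assumed)
PHASE_LAG = 2 * np.pi / N_SPINE    # neighbor phase lag -> one full body wave
AMP_SPINE = 0.3                    # spinal joint amplitude in radians (assumed)

def cpg_step(phases, dt):
    """Advance the oscillator phases by one Euler step of the coupled dynamics."""
    dphi = np.full(N_SPINE, 2 * np.pi * FREQ_HZ)
    for i in range(N_SPINE):
        # Pull each oscillator toward a fixed phase lag with its neighbors.
        if i > 0:
            dphi[i] += COUPLING * np.sin(phases[i - 1] - phases[i] - PHASE_LAG)
        if i < N_SPINE - 1:
            dphi[i] += COUPLING * np.sin(phases[i + 1] - phases[i] + PHASE_LAG)
    return phases + dt * dphi

def joint_targets(phases):
    """Map oscillator phases to spinal joint angle set-points."""
    return AMP_SPINE * np.sin(phases)

# Example rollout: neighboring phase differences relax toward PHASE_LAG,
# yielding a wave that travels head-to-tail along the spine.
phases = np.random.uniform(0, 2 * np.pi, N_SPINE)
for _ in range(5000):
    phases = cpg_step(phases, dt=0.002)
print(np.round(joint_targets(phases), 3))
```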
Related papers
- StyleLoco: Generative Adversarial Distillation for Natural Humanoid Robot Locomotion [31.30409161905949]
StyleLoco is a novel framework for learning humanoid locomotion. It combines the agility of reinforcement learning with the natural fluidity of human-like movements. We demonstrate that StyleLoco enables humanoid robots to perform diverse locomotion tasks.
arXiv Detail & Related papers (2025-03-19T10:27:44Z)
- Humanoid Whole-Body Locomotion on Narrow Terrain via Dynamic Balance and Reinforcement Learning [54.26816599309778]
We propose a novel whole-body locomotion algorithm based on dynamic balance and Reinforcement Learning (RL). Specifically, we introduce a dynamic balance mechanism by leveraging an extended measure of Zero-Moment Point (ZMP)-driven rewards and task-driven rewards in a whole-body actor-critic framework. Experiments conducted on a full-sized Unitree H1-2 robot verify the ability of our method to maintain balance on extremely narrow terrains.
arXiv Detail & Related papers (2025-02-24T14:53:45Z)
- Brain-Body-Task Co-Adaptation can Improve Autonomous Learning and Speed of Bipedal Walking [0.0]
Inspired by animals that co-adapt their brain and body to interact with the environment, we present a tendon-driven and over-actuated bipedal robot.
We show how continual learning can be driven by continual physical adaptation rooted in the backdrivable properties of the plant.
arXiv Detail & Related papers (2024-02-04T07:57:52Z)
- Natural and Robust Walking using Reinforcement Learning without Demonstrations in High-Dimensional Musculoskeletal Models [29.592874007260342]
Humans excel at robust bipedal walking in complex natural environments.
It is still not fully understood how the nervous system resolves the musculoskeletal redundancy to solve the multi-objective control problem.
arXiv Detail & Related papers (2023-09-06T13:20:31Z)
- Learning and Adapting Agile Locomotion Skills by Transferring Experience [71.8926510772552]
We propose a framework for training complex robotic skills by transferring experience from existing controllers to jumpstart learning new tasks.
We show that our method enables learning complex agile jumping behaviors, navigating to goal locations while walking on hind legs, and adapting to new environments.
arXiv Detail & Related papers (2023-04-19T17:37:54Z)
- VAE-Loco: Versatile Quadruped Locomotion by Learning a Disentangled Gait Representation [78.92147339883137]
We show that it is pivotal in increasing controller robustness by learning a latent space capturing the key stance phases constituting a particular gait.
We demonstrate that specific properties of the drive signal map directly to gait parameters such as cadence, footstep height and full stance duration.
The use of a generative model facilitates the detection and mitigation of disturbances to provide a versatile and robust planning framework.
arXiv Detail & Related papers (2022-05-02T19:49:53Z)
- Next Steps: Learning a Disentangled Gait Representation for Versatile Quadruped Locomotion [69.87112582900363]
Current planners are unable to vary key gait parameters continuously while the robot is in motion.
In this work we address this limitation by learning a latent space capturing the key stance phases constituting a particular gait.
We demonstrate that specific properties of the drive signal map directly to gait parameters such as cadence, foot step height and full stance duration.
arXiv Detail & Related papers (2021-12-09T10:02:02Z)
- An Adaptable Approach to Learn Realistic Legged Locomotion without Examples [38.81854337592694]
This work proposes a generic approach for ensuring realism in locomotion by guiding the learning process with the spring-loaded inverted pendulum model as a reference.
We present experimental results showing that even in a model-free setup, the learned policies can generate realistic and energy-efficient locomotion gaits for a bipedal and a quadrupedal robot.
arXiv Detail & Related papers (2021-10-28T10:14:47Z)
- Reinforcement Learning for Robust Parameterized Locomotion Control of Bipedal Robots [121.42930679076574]
We present a model-free reinforcement learning framework for training robust locomotion policies in simulation.
Domain randomization is used to encourage the policies to learn behaviors that are robust across variations in system dynamics (a minimal sketch of this idea appears after this list).
We demonstrate this on versatile walking behaviors such as tracking a target walking velocity, walking height, and turning yaw.
arXiv Detail & Related papers (2021-03-26T07:14:01Z)
- Learning Quadrupedal Locomotion over Challenging Terrain [68.51539602703662]
Legged locomotion can dramatically expand the operational domains of robotics.
Conventional controllers for legged locomotion are based on elaborate state machines that explicitly trigger the execution of motion primitives and reflexes.
Here we present a radically robust controller for legged locomotion in challenging natural environments.
arXiv Detail & Related papers (2020-10-21T19:11:20Z)
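Domain randomization, referenced in the entry on robust parameterized locomotion control for bipedal robots above, is simple enough to sketch. The snippet below is a hypothetical illustration only: the randomized quantities, their ranges, and the commented-out simulator and policy calls are assumptions for illustration, not the setup used in that paper.

```python
import random
from dataclasses import dataclass

# Hypothetical dynamics parameters to randomize each training episode.
# The ranges are illustrative assumptions, not values from the cited paper.
@dataclass
class DynamicsParams:
    ground_friction: float
    link_mass_scale: float
    motor_strength_scale: float
    sensor_latency_s: float

def sample_dynamics(rng: random.Random) -> DynamicsParams:
    """Sample one set of randomized dynamics for the next episode."""
    return DynamicsParams(
        ground_friction=rng.uniform(0.4, 1.2),
        link_mass_scale=rng.uniform(0.8, 1.2),
        motor_strength_scale=rng.uniform(0.7, 1.1),
        sensor_latency_s=rng.uniform(0.0, 0.02),
    )

def train(num_episodes: int, seed: int = 0) -> None:
    """Skeleton training loop: re-randomize the simulator before every episode,
    so the policy is only rewarded for behaviors that work across the whole
    distribution of dynamics rather than for exploiting one fixed model."""
    rng = random.Random(seed)
    for episode in range(num_episodes):
        params = sample_dynamics(rng)
        # sim.reset(params)                 # apply params to the simulator (placeholder)
        # rollout = collect_episode(policy, sim)
        # policy.update(rollout)            # any RL update, e.g. PPO
        print(f"episode {episode}: {params}")

if __name__ == "__main__":
    train(num_episodes=3)
```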