Autonomous Navigation of Underactuated Bipedal Robots in
Height-Constrained Environments
- URL: http://arxiv.org/abs/2109.05714v4
- Date: Thu, 13 Jul 2023 16:59:31 GMT
- Title: Autonomous Navigation of Underactuated Bipedal Robots in
Height-Constrained Environments
- Authors: Zhongyu Li, Jun Zeng, Shuxiao Chen, Koushil Sreenath
- Abstract summary: This paper presents an end-to-end autonomous navigation framework for bipedal robots.
A vertically-actuated Spring-Loaded Inverted Pendulum (vSLIP) model is introduced to capture the robot's coupled dynamics of planar walking and vertical walking height.
A variable walking height controller is leveraged to enable the bipedal robot to maintain stable periodic walking gaits while following the planned trajectory.
- Score: 20.246040671823554
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Navigating a large-scale robot in unknown and cluttered height-constrained
environments is challenging. Not only is a fast and reliable planning algorithm
required to go around obstacles, but the robot should also be able to change its
intrinsic dimension by crouching in order to travel underneath
height-constrained regions. There are few mobile robots that are capable of
handling such a challenge, and bipedal robots provide a solution. However, as
bipedal robots have nonlinear and hybrid dynamics, trajectory planning while
ensuring dynamic feasibility and safety on these robots is challenging. This
paper presents an end-to-end autonomous navigation framework which leverages
three layers of planners and a variable walking height controller to enable
bipedal robots to safely explore height-constrained environments. A
vertically-actuated Spring-Loaded Inverted Pendulum (vSLIP) model is introduced
to capture the robot's coupled dynamics of planar walking and vertical walking
height. This reduced-order model is utilized to optimize for long-term and
short-term safe trajectory plans. A variable walking height controller is
leveraged to enable the bipedal robot to maintain stable periodic walking gaits
while following the planned trajectory. The entire framework is tested and
experimentally validated using the bipedal robot Cassie, demonstrating
reliable autonomy in driving the robot to safely avoid obstacles while walking to
the goal location in various kinds of height-constrained, cluttered
environments.
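As a rough illustration of the reduced-order model idea, the sketch below integrates the stance phase of a planar spring-loaded inverted pendulum whose spring rest length is shifted by an actuator input, one common way to add vertical actuation to a SLIP model. The mass, stiffness, rest length, and crouch command are illustrative values, not parameters from the paper, and the paper's exact vSLIP formulation may differ.

```python
# Minimal vSLIP-style stance-phase sketch (assumptions, not the paper's model):
# a point-mass CoM on a massless spring leg whose rest length l0 is shifted by
# an actuator input u, so crouching corresponds to u < 0.
import numpy as np

def vslip_stance_step(state, foot, u, dt, m=31.0, k=2000.0, l0=0.9, g=9.81):
    """One explicit-Euler step of the planar stance dynamics.

    state: [x, z, xdot, zdot] of the center of mass (CoM)
    foot:  (x_f, z_f) fixed stance-foot position
    u:     actuator shift of the spring rest length (m)
    """
    x, z, xd, zd = state
    leg = np.array([x - foot[0], z - foot[1]])
    l = np.linalg.norm(leg)
    f = k * (l0 + u - l)              # spring force along the leg
    ax = f * leg[0] / (l * m)         # horizontal CoM acceleration
    az = f * leg[1] / (l * m) - g     # vertical CoM acceleration
    return np.array([x + xd * dt, z + zd * dt, xd + ax * dt, zd + az * dt])

# Example: integrate 50 ms of a crouched stance (reduced walking height).
s = np.array([0.0, 0.75, 0.8, 0.0])
for _ in range(50):
    s = vslip_stance_step(s, foot=(0.0, 0.0), u=-0.1, dt=0.001)
print(s)
```

A planner built on such a reduced-order model can treat planar position and walking height as coupled decision variables, which is the role the abstract assigns to the vSLIP model.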
Related papers
- Learning to enhance multi-legged robot on rugged landscapes [7.956679144631909]
Multi-legged robots offer a promising solution for navigating rugged landscapes.
Recent studies have shown that a linear controller can ensure reliable mobility on challenging terrains.
We develop a MuJoCo-based simulator tailored to this robotic platform and use the simulation to develop a reinforcement learning-based control framework.
arXiv Detail & Related papers (2024-09-14T15:53:08Z)
- Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z) - Barkour: Benchmarking Animal-level Agility with Quadruped Robots [70.97471756305463]
We introduce the Barkour benchmark, an obstacle course to quantify agility for legged robots.
Inspired by dog agility competitions, it consists of diverse obstacles and a time-based scoring mechanism (an illustrative scoring sketch follows this entry).
We present two methods for tackling the benchmark.
arXiv Detail & Related papers (2023-05-24T02:49:43Z)
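The Barkour entry above mentions a time-based scoring mechanism but does not spell it out. The snippet below is a purely hypothetical illustration of how obstacle completion could be combined with a per-second overtime penalty; the function, weights, and penalty rate are invented for illustration and are not the benchmark's actual rules.

```python
# Hypothetical time-based agility score (illustrative only, NOT the official
# Barkour scoring rules): credit for obstacles cleared, minus a small penalty
# for every second beyond an allotted course time.
def agility_score(obstacles_cleared, total_obstacles, elapsed_s, allotted_s,
                  overtime_penalty_per_s=0.01):
    completion = obstacles_cleared / total_obstacles     # fraction of course cleared
    overtime = max(0.0, elapsed_s - allotted_s)          # seconds over the budget
    return max(0.0, completion - overtime_penalty_per_s * overtime)

# Example: all 5 obstacles cleared, 2 s over the allotted time -> 0.98
print(agility_score(5, 5, elapsed_s=12.0, allotted_s=10.0))
```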
- Robust and Versatile Bipedal Jumping Control through Reinforcement Learning [141.56016556936865]
This work aims to push the limits of agility for bipedal robots by enabling a torque-controlled bipedal robot to perform robust and versatile dynamic jumps in the real world.
We present a reinforcement learning framework for training a robot to accomplish a large variety of jumping tasks, such as jumping to different locations and directions.
We develop a new policy structure that encodes the robot's long-term input/output (I/O) history while also providing direct access to a short-term I/O history (a hedged sketch of such a structure follows this entry).
arXiv Detail & Related papers (2023-02-19T01:06:09Z)
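The entry above describes a policy structure with both a long-term and a short-term input/output (I/O) history. The sketch below is one hedged way to realize that idea: a 1-D convolutional encoder compresses the long history while the short history is fed to the head directly. The encoder choice, layer sizes, history lengths, and the command input are assumptions for illustration, not the authors' architecture.

```python
# Hedged sketch (not the authors' code) of a policy with two I/O-history paths:
# a long history compressed by a 1-D CNN encoder, and a short raw history fed
# directly to the MLP head. All dimensions below are illustrative assumptions.
import torch
import torch.nn as nn

class DualHistoryPolicy(nn.Module):
    def __init__(self, io_dim=42, long_len=100, short_len=4, cmd_dim=3, act_dim=10):
        super().__init__()
        # Encoder over the long I/O history, shaped (batch, io_dim, long_len).
        self.long_encoder = nn.Sequential(
            nn.Conv1d(io_dim, 32, kernel_size=6, stride=3), nn.ReLU(),
            nn.Conv1d(32, 16, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():  # infer the flattened encoder output size
            enc_out = self.long_encoder(torch.zeros(1, io_dim, long_len)).shape[1]
        # Head sees the encoded long history, the raw short history, and a command.
        self.head = nn.Sequential(
            nn.Linear(enc_out + short_len * io_dim + cmd_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, act_dim),
        )

    def forward(self, long_hist, short_hist, cmd):
        z = self.long_encoder(long_hist)                  # (batch, enc_out)
        return self.head(torch.cat([z, short_hist, cmd], dim=-1))

# Example usage with random tensors of the assumed shapes.
policy = DualHistoryPolicy()
action = policy(torch.randn(1, 42, 100), torch.randn(1, 4 * 42), torch.randn(1, 3))
print(action.shape)  # torch.Size([1, 10])
```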
- Hierarchical Reinforcement Learning for Precise Soccer Shooting Skills using a Quadrupedal Robot [76.04391023228081]
We address the problem of enabling quadrupedal robots to perform precise shooting skills in the real world using reinforcement learning.
We propose a hierarchical framework that leverages deep reinforcement learning to train a robust motion control policy.
We deploy the proposed framework on an A1 quadrupedal robot and enable it to accurately shoot the ball to random targets in the real world.
arXiv Detail & Related papers (2022-08-01T22:34:51Z)
- 6N-DoF Pose Tracking for Tensegrity Robots [5.398092221687385]
Tensegrity robots are composed of rigid compressive elements (rods) and flexible tensile elements (e.g., cables).
This work aims to address the pose tracking of tensegrity robots through a markerless, vision-based method.
An iterative optimization process is proposed to estimate the 6-DoF poses of each rigid element of a tensegrity robot from an RGB-D video.
arXiv Detail & Related papers (2022-05-29T20:55:29Z)
- VAE-Loco: Versatile Quadruped Locomotion by Learning a Disentangled Gait Representation [78.92147339883137]
We show that learning a latent space capturing the key stance phases constituting a particular gait is pivotal in increasing controller robustness.
We demonstrate that specific properties of the drive signal map directly to gait parameters such as cadence, footstep height and full stance duration.
The use of a generative model facilitates the detection and mitigation of disturbances to provide a versatile and robust planning framework.
arXiv Detail & Related papers (2022-05-02T19:49:53Z)
- Coupling Vision and Proprioception for Navigation of Legged Robots [65.59559699815512]
We exploit the complementary strengths of vision and proprioception to achieve point goal navigation in a legged robot.
We show superior performance compared to wheeled robot (LoCoBot) baselines.
We also show the real-world deployment of our system on a quadruped robot with onboard sensors and compute.
arXiv Detail & Related papers (2021-12-03T18:59:59Z)
- Autonomous Navigation for Quadrupedal Robots with Optimized Jumping through Constrained Obstacles [3.8651239621657654]
This paper presents an end-to-end navigation framework for quadrupedal robots.
To obtain a dynamic jumping maneuver while avoiding obstacles, dynamically-feasible trajectories are optimized offline.
The framework is experimentally deployed and validated on a quadrupedal robot, a Mini Cheetah.
arXiv Detail & Related papers (2021-07-01T23:40:30Z)
- Robust Quadruped Jumping via Deep Reinforcement Learning [10.095966161524043]
In this paper, we consider jumping varying distances and heights for a quadrupedal robot in noisy environments.
We propose a framework using deep reinforcement learning that leverages and augments the complex solution of nonlinear trajectory optimization for quadrupedal jumping.
We demonstrate robustness to foot disturbances of up to 6 cm in height, or 33% of the robot's nominal standing height, while jumping 2x the body length in distance.
arXiv Detail & Related papers (2020-11-13T19:04:24Z)