No More Blind Spots: Learning Vision-Based Omnidirectional Bipedal Locomotion for Challenging Terrain
- URL: http://arxiv.org/abs/2508.11929v1
- Date: Sat, 16 Aug 2025 06:20:46 GMT
- Title: No More Blind Spots: Learning Vision-Based Omnidirectional Bipedal Locomotion for Challenging Terrain
- Authors: Mohitvishnu S. Gadde, Pranay Dugar, Ashish Malik, Alan Fern
- Abstract summary: We present a learning framework for vision-based omnidirectional bipedal locomotion. A key challenge is the high computational cost of rendering omnidirectional depth images in simulation. Our method combines a robust blind controller with a teacher policy that supervises a vision-based student policy.
- Score: 12.51464645002418
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Effective bipedal locomotion in dynamic environments, such as cluttered indoor spaces or uneven terrain, requires agile and adaptive movement in all directions. This necessitates omnidirectional terrain sensing and a controller capable of processing such input. We present a learning framework for vision-based omnidirectional bipedal locomotion, enabling seamless movement using depth images. A key challenge is the high computational cost of rendering omnidirectional depth images in simulation, making traditional sim-to-real reinforcement learning (RL) impractical. Our method combines a robust blind controller with a teacher policy that supervises a vision-based student policy, trained on noise-augmented terrain data to avoid rendering costs during RL and ensure robustness. We also introduce a data augmentation technique for supervised student training, accelerating training by up to 10 times compared to conventional methods. Our framework is validated through simulation and real-world tests, demonstrating effective omnidirectional locomotion with minimal reliance on expensive rendering. This is, to the best of our knowledge, the first demonstration of vision-based omnidirectional bipedal locomotion, showcasing its adaptability to diverse terrains.
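The listing contains no source code, so the following is only a minimal, hypothetical PyTorch sketch of the teacher-student idea described in the abstract: a teacher policy consumes clean terrain heights queried directly from the simulator (avoiding omnidirectional depth rendering during RL), while the student is trained by supervised regression on noise-augmented terrain that stands in for vision-derived input. The class names, observation dimensions, and noise model below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLPPolicy(nn.Module):
    """Small MLP mapping (proprioception + terrain features) to joint targets."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, proprio, terrain):
        return self.net(torch.cat([proprio, terrain], dim=-1))

def augment_terrain(heights, noise_std=0.02, dropout_p=0.1):
    """Perturb ground-truth terrain heights with noise and random dropout so the
    student never needs rendered depth images during training (values are illustrative)."""
    noisy = heights + noise_std * torch.randn_like(heights)
    keep = (torch.rand_like(heights) > dropout_p).float()
    return noisy * keep

# Hypothetical sizes: 48-D proprioception, 64-D local terrain sample, 10 joint targets.
PROPRIO_DIM, TERRAIN_DIM, ACT_DIM = 48, 64, 10
teacher = MLPPolicy(PROPRIO_DIM + TERRAIN_DIM, ACT_DIM)  # assumed pre-trained with RL on clean terrain
student = MLPPolicy(PROPRIO_DIM + TERRAIN_DIM, ACT_DIM)  # supervised to reproduce the teacher
optimizer = torch.optim.Adam(student.parameters(), lr=3e-4)

def distillation_step(proprio, heights):
    """One supervised update: the student sees augmented terrain but must match
    the teacher's action computed from the clean terrain."""
    with torch.no_grad():
        target_action = teacher(proprio, heights)
    pred_action = student(proprio, augment_terrain(heights))
    loss = F.mse_loss(pred_action, target_action)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage on a random batch (placeholder for states collected from simulation).
proprio = torch.randn(128, PROPRIO_DIM)
heights = torch.randn(128, TERRAIN_DIM)
print(distillation_step(proprio, heights))
```

Because the supervision targets come from the teacher rather than from reward rollouts, such batches can be generated and reused cheaply; the abstract additionally reports a data augmentation technique that accelerates this supervised stage by up to 10 times, for which the simple perturbation above is only a stand-in.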
Related papers
- Open-World Drone Active Tracking with Goal-Centered Rewards [62.21394499788672]
Drone Visual Active Tracking aims to autonomously follow a target object by controlling the motion system based on visual observations. We propose DAT, the first open-world drone active air-to-ground tracking benchmark. We also propose GC-VAT, which aims to improve drone tracking performance on targets in complex scenarios.
arXiv Detail & Related papers (2024-12-01T09:37:46Z)
- Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z)
- Universal Humanoid Motion Representations for Physics-Based Control [71.46142106079292]
We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control.
We first learn a motion imitator that can imitate all of human motion from a large, unstructured motion dataset.
We then create our motion representation by distilling skills directly from the imitator.
arXiv Detail & Related papers (2023-10-06T20:48:43Z)
- Learning and Adapting Agile Locomotion Skills by Transferring Experience [71.8926510772552]
We propose a framework for training complex robotic skills by transferring experience from existing controllers to jumpstart learning new tasks.
We show that our method enables learning complex agile jumping behaviors, navigating to goal locations while walking on hind legs, and adapting to new environments.
arXiv Detail & Related papers (2023-04-19T17:37:54Z)
- Learning Perception-Aware Agile Flight in Cluttered Environments [38.59659342532348]
We propose a method to learn neural network policies that achieve perception-aware, minimum-time flight in cluttered environments.
Our approach tightly couples perception and control, showing a significant advantage in computation speed (10x faster) and success rate.
We demonstrate the closed-loop control performance using a physical quadrotor and hardware-in-the-loop simulation at speeds up to 50 km/h.
arXiv Detail & Related papers (2022-10-04T18:18:58Z)
- A Walk in the Park: Learning to Walk in 20 Minutes With Model-Free Reinforcement Learning [86.06110576808824]
Deep reinforcement learning is a promising approach to learning policies in uncontrolled environments.
Recent advancements in machine learning algorithms and libraries, combined with a carefully tuned robot controller, enable learning quadrupedal locomotion in only 20 minutes in the real world.
arXiv Detail & Related papers (2022-08-16T17:37:36Z)
- Learning to Jump from Pixels [23.17535989519855]
We present Depth-based Impulse Control (DIC), a method for synthesizing highly agile visually-guided behaviors.
DIC affords the flexibility of model-free learning but regularizes behavior through explicit model-based optimization of ground reaction forces.
We evaluate the proposed method both in simulation and in the real world.
arXiv Detail & Related papers (2021-10-28T17:53:06Z)
- An Adaptable Approach to Learn Realistic Legged Locomotion without Examples [38.81854337592694]
This work proposes a generic approach for ensuring realism in locomotion by guiding the learning process with the spring-loaded inverted pendulum model as a reference.
We present experimental results showing that even in a model-free setup, the learned policies can generate realistic and energy-efficient locomotion gaits for a bipedal and a quadrupedal robot.
arXiv Detail & Related papers (2021-10-28T10:14:47Z)
- Learning Perceptual Locomotion on Uneven Terrains using Sparse Visual Observations [75.60524561611008]
This work aims to exploit the use of sparse visual observations to achieve perceptual locomotion over a range of commonly seen bumps, ramps, and stairs in human-centred environments.
We first formulate the selection of minimal visual input that can represent the uneven surfaces of interest, and propose a learning framework that integrates such exteroceptive and proprioceptive data.
We validate the learned policy in tasks that require omnidirectional walking over flat ground and forward locomotion over terrains with obstacles, showing a high success rate.
arXiv Detail & Related papers (2021-09-28T20:25:10Z)
- Learning Vision-Guided Quadrupedal Locomotion End-to-End with Cross-Modal Transformers [14.509254362627576]
We propose to address quadrupedal locomotion tasks using Reinforcement Learning (RL).
We introduce LocoTransformer, an end-to-end RL method for quadrupedal locomotion.
arXiv Detail & Related papers (2021-07-08T17:41:55Z)
- Reinforcement Learning for Robust Parameterized Locomotion Control of Bipedal Robots [121.42930679076574]
We present a model-free reinforcement learning framework for training robust locomotion policies in simulation.
Domain randomization is used to encourage the policies to learn behaviors that are robust across variations in system dynamics; a minimal sketch of this idea appears after this entry.
We demonstrate this on versatile walking behaviors such as tracking a target walking velocity, walking height, and turning yaw.
arXiv Detail & Related papers (2021-03-26T07:14:01Z)
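The last entry above mentions domain randomization over system dynamics. Below is a minimal, hypothetical Python sketch of that general idea; the simulator hook `apply_dynamics`, the episode loop, and all parameter names and ranges are assumptions for illustration, not taken from the cited paper.

```python
import random
from dataclasses import dataclass

@dataclass
class DynamicsParams:
    """Per-episode simulator parameters to randomize (names and ranges are illustrative)."""
    mass_scale: float      # multiplicative perturbation of link masses
    friction: float        # ground friction coefficient
    motor_strength: float  # multiplicative perturbation of torque limits
    latency_s: float       # actuation/sensing delay in seconds

def sample_dynamics() -> DynamicsParams:
    """Draw a fresh set of randomized dynamics at the start of each episode."""
    return DynamicsParams(
        mass_scale=random.uniform(0.8, 1.2),
        friction=random.uniform(0.4, 1.2),
        motor_strength=random.uniform(0.8, 1.2),
        latency_s=random.uniform(0.0, 0.03),
    )

def run_episode(env, policy):
    """Apply fresh randomization, then roll out the policy as usual.
    `env` is a placeholder interface; real simulators expose equivalent
    setters for masses, friction coefficients, and actuator gains."""
    env.apply_dynamics(sample_dynamics())
    obs, done, total_reward = env.reset(), False, 0.0
    while not done:
        obs, reward, done = env.step(policy(obs))
        total_reward += reward
    return total_reward
```

Randomizing these quantities every episode forces the policy to succeed under many plausible dynamics rather than one exact model, which is what makes the resulting behaviors more robust when transferred to hardware.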
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.