Generating a Terrain-Robustness Benchmark for Legged Locomotion: A
Prototype via Terrain Authoring and Active Learning
- URL: http://arxiv.org/abs/2208.07681v1
- Date: Tue, 16 Aug 2022 11:42:28 GMT
- Title: Generating a Terrain-Robustness Benchmark for Legged Locomotion: A
Prototype via Terrain Authoring and Active Learning
- Authors: Chong Zhang
- Abstract summary: We prototype the generation of a terrain dataset via terrain authoring and active learning.
We hope the generated dataset can serve as a terrain-robustness benchmark for legged locomotion.
- Score: 6.254631755450703
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Terrain-aware locomotion has become an emerging topic in legged robotics.
However, it is hard to generate challenging and realistic terrains in
simulation, which limits the way researchers evaluate their locomotion
policies. In this paper, we prototype the generation of a terrain dataset via
terrain authoring and active learning, and the learned samplers can stably
generate diverse high-quality terrains. We hope the generated dataset can
serve as a terrain-robustness benchmark for legged locomotion. The dataset and
the code implementation are released at https://bit.ly/3bn4j7f.
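The abstract only names the two ingredients of the pipeline, terrain authoring and active learning. The short sketch below is one way such a loop could be wired together; it is not the released implementation, and the authoring function, the acquisition score, and all parameter values are assumptions made purely for illustration.
```python
# Illustrative sketch of an active-learning loop for collecting a terrain dataset.
# NOTE: this is not the paper's released pipeline; the authoring function, the
# acquisition score, and all parameters are assumptions made for illustration.
import numpy as np

rng = np.random.default_rng(0)

def author_terrain(params, size=64):
    """Hypothetical 'terrain authoring' step: map low-dimensional parameters
    (roughness, slope) to a heightmap made of random noise plus a ramp."""
    roughness, slope = params
    noise = rng.standard_normal((size, size)) * roughness
    ramp = np.tile(np.linspace(0.0, slope * size, size), (size, 1))
    return noise + ramp

def acquisition_score(heightmap, dataset):
    """Hypothetical acquisition function: prefer terrains dissimilar to those
    already collected (a stand-in for uncertainty or difficulty measures)."""
    if not dataset:
        return np.inf
    return min(np.mean((heightmap - h) ** 2) for h in dataset)

dataset = []
for _ in range(50):
    # Sample candidate terrain parameters (roughness, slope) ...
    candidates = rng.uniform(low=[0.0, 0.0], high=[0.3, 0.2], size=(16, 2))
    heightmaps = [author_terrain(p) for p in candidates]
    # ... and greedily keep the most informative candidate.
    scores = [acquisition_score(h, dataset) for h in heightmaps]
    dataset.append(heightmaps[int(np.argmax(scores))])

print(f"collected {len(dataset)} terrains of shape {dataset[0].shape}")
```
The paper learns its samplers, whereas the stand-ins above are hand-written; the sketch only shows where an acquisition criterion plugs into the data-collection loop.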
Related papers
- GenTe: Generative Real-world Terrains for General Legged Robot Locomotion Control [3.5594486521440323]
GenTe is a framework for generating physically realistic and adaptable terrains to train generalizable locomotion policies.
By leveraging function-calling techniques and reasoning capabilities of Vision-Language Models, GenTe generates complex, contextually relevant terrains.
Experiments demonstrate improved generalization and robustness in bipedal robot locomotion.
arXiv Detail & Related papers (2025-04-14T09:01:44Z)
- Watch Your STEPP: Semantic Traversability Estimation using Pose Projected Features [4.392942391043664]
We propose a method for estimating terrain traversability by learning from demonstrations of human walking.
Our approach leverages dense, pixel-wise feature embeddings generated using the DINOv2 Vision Transformer model.
By minimizing reconstruction loss on these features, the network distinguishes familiar terrain (low reconstruction error) from unfamiliar or hazardous terrain (higher reconstruction error); a toy sketch of this idea appears after this list.
arXiv Detail & Related papers (2025-01-29T11:53:58Z)
- Learning autonomous driving from aerial imagery [67.06858775696453]
Photogrammetric simulators allow the synthesis of novel views through the transformation of pre-generated assets.
We use a Neural Radiance Field (NeRF) as an intermediate representation to synthesize novel views from the point of view of a ground vehicle.
arXiv Detail & Related papers (2024-10-18T05:09:07Z)
- Learning Humanoid Locomotion over Challenging Terrain [84.35038297708485]
We present a learning-based approach for blind humanoid locomotion capable of traversing challenging natural and man-made terrains.
Our model is first pre-trained on a dataset of flat-ground trajectories with sequence modeling, and then fine-tuned on uneven terrain using reinforcement learning.
We evaluate our model on a real humanoid robot across a variety of terrains, including rough, deformable, and sloped surfaces.
arXiv Detail & Related papers (2024-10-04T17:57:09Z)
- BiRoDiff: Diffusion policies for bipedal robot locomotion on unseen terrains [0.9480364746270075]
Locomotion on unknown terrains is essential for bipedal robots to handle novel real-world challenges.
We introduce a lightweight framework that learns a single walking controller that yields locomotion on multiple terrains.
arXiv Detail & Related papers (2024-07-07T16:03:33Z)
- TrafficBots: Towards World Models for Autonomous Driving Simulation and Motion Prediction [149.5716746789134]
We show data-driven traffic simulation can be formulated as a world model.
We present TrafficBots, a multi-agent policy built upon motion prediction and end-to-end driving.
Experiments on the open motion dataset show TrafficBots can simulate realistic multi-agent behaviors.
arXiv Detail & Related papers (2023-03-07T18:28:41Z)
- Deep Generative Framework for Interactive 3D Terrain Authoring and Manipulation [4.202216894379241]
We propose a novel realistic terrain authoring framework powered by a combination of a VAE and a conditional GAN.
Our framework is an example-based method that attempts to overcome the limitations of existing methods by learning a latent space from a real-world terrain dataset.
We also developed an interactive tool that lets the user generate diverse terrains with minimal inputs.
arXiv Detail & Related papers (2022-01-07T08:58:01Z)
- Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation [98.51313127382937]
We focus on the use of labels in the synthetic domain alone.
Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
arXiv Detail & Related papers (2021-11-15T18:37:43Z)
- Solving Occlusion in Terrain Mapping with Neural Networks [7.703348666813963]
We introduce a self-supervised learning approach capable of training on real-world data without a need for ground-truth information.
Our neural network is able to run in real-time on both CPU and GPU with suitable sampling rates for autonomous ground robots.
arXiv Detail & Related papers (2021-09-15T08:30:16Z)
- Quadruped Locomotion on Non-Rigid Terrain using Reinforcement Learning [10.729374293332281]
We present a novel reinforcement learning framework for learning locomotion on non-rigid dynamic terrains.
A trained robot with a 55 cm base length can walk on terrain that can sink by up to 5 cm.
We show the effectiveness of our method by training the robot with various terrain conditions.
arXiv Detail & Related papers (2021-07-07T00:34:23Z)
- DriveGAN: Towards a Controllable High-Quality Neural Simulation [147.6822288981004]
We introduce a novel high-quality neural simulator referred to as DriveGAN.
DriveGAN achieves controllability by disentangling different components without supervision.
We train DriveGAN on multiple datasets, including 160 hours of real-world driving data.
arXiv Detail & Related papers (2021-04-30T15:30:05Z)
- Learning Quadrupedal Locomotion over Challenging Terrain [68.51539602703662]
Legged locomotion can dramatically expand the operational domains of robotics.
Conventional controllers for legged locomotion are based on elaborate state machines that explicitly trigger the execution of motion primitives and reflexes.
Here we present a radically robust controller for legged locomotion in challenging natural environments.
arXiv Detail & Related papers (2020-10-21T19:11:20Z)
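For the reconstruction-error idea summarized under Watch Your STEPP above (familiar terrain reconstructs well, unfamiliar terrain does not), the toy sketch below shows the mechanism with stand-ins: PCA replaces the learned network, and synthetic vectors replace the DINOv2 pixel features. Both substitutions are assumptions for illustration only.
```python
# Toy sketch of reconstruction-error-based traversability. PCA stands in for
# a learned autoencoder, and random vectors stand in for DINOv2 pixel features;
# neither is the actual STEPP pipeline.
import numpy as np

rng = np.random.default_rng(0)

# "Familiar" terrain features cluster in a low-dimensional subspace.
basis = rng.standard_normal((8, 384))            # 8 latent directions, 384-d features
train = rng.standard_normal((2000, 8)) @ basis   # features seen during training

# Fit a linear "autoencoder": top-8 principal components of the training features.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:8]                              # shared encoder/decoder weights

def reconstruction_error(x):
    """Project onto the learned subspace and measure how much is lost."""
    z = (x - mean) @ components.T                # encode
    x_hat = z @ components + mean                # decode
    return np.linalg.norm(x - x_hat, axis=-1)

familiar = rng.standard_normal((5, 8)) @ basis   # same subspace as training data
unfamiliar = rng.standard_normal((5, 384)) * 2.0 # off-manifold features

print("familiar terrain error:  ", reconstruction_error(familiar).round(2))
print("unfamiliar terrain error:", reconstruction_error(unfamiliar).round(2))
```
Any model with an information bottleneck behaves similarly: features like those seen in training reconstruct with low error, off-distribution features do not, and that gap is what gets thresholded into a traversability estimate.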
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy or quality of the information presented and is not responsible for any consequences.