RSRD: A Road Surface Reconstruction Dataset and Benchmark for Safe and
Comfortable Autonomous Driving
- URL: http://arxiv.org/abs/2310.02262v1
- Date: Tue, 3 Oct 2023 17:59:32 GMT
- Authors: Tong Zhao, Chenfeng Xu, Mingyu Ding, Masayoshi Tomizuka, Wei Zhan,
Yintao Wei
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper addresses the growing demands for safety and comfort in
intelligent robot systems, particularly autonomous vehicles, where road
conditions play a pivotal role in overall driving performance. For example,
reconstructing road surfaces helps to enhance the analysis and prediction of
vehicle responses for motion planning and control systems. We introduce the
Road Surface Reconstruction Dataset (RSRD), a real-world, high-resolution, and
high-precision dataset collected with a specialized platform in diverse driving
conditions. It covers common road types containing approximately 16,000 pairs
of stereo images, original point clouds, and ground-truth depth/disparity maps,
with accurate post-processing pipelines to ensure its quality. Based on RSRD,
we further build a comprehensive benchmark for recovering road profiles through
depth estimation and stereo matching. Preliminary evaluations with various
state-of-the-art methods reveal the effectiveness of our dataset and the
challenge of the task, underscoring the substantial opportunities RSRD offers as a
valuable resource for advancing techniques such as multi-view stereo toward
safe autonomous driving. The dataset and demo videos are available at
https://thu-rsxd.com/rsrd/
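The benchmark pairs stereo images with ground-truth disparity maps, so evaluated methods ultimately rely on the standard rectified-stereo relation depth = focal * baseline / disparity. The sketch below illustrates that conversion; the focal length and baseline values are hypothetical placeholders, not RSRD's actual calibration.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m, eps=1e-6):
    """Convert a stereo disparity map (pixels) to metric depth (meters)
    using the rectified pinhole-stereo relation depth = f * B / d."""
    disparity_px = np.asarray(disparity_px, dtype=np.float64)
    depth_m = np.full_like(disparity_px, np.inf)  # mark invalid pixels as inf
    valid = disparity_px > eps                    # zero disparity carries no depth
    depth_m[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth_m

# Hypothetical calibration: 1000 px focal length, 0.12 m baseline.
d = np.array([[10.0, 0.0],
              [50.0, 100.0]])
z = disparity_to_depth(d, focal_px=1000.0, baseline_m=0.12)
# Larger disparity means a closer surface: 10 px -> ~12 m, 100 px -> ~1.2 m.
```

Note that disparity error translates into depth error quadratically (error grows as depth^2), which is one reason recovering fine road-profile geometry at range is a hard benchmark task.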
Related papers
- ROAD-Waymo: Action Awareness at Scale for Autonomous Driving
ROAD-Waymo is an extensive dataset for the development and benchmarking of techniques for agent, action, location and event detection in road scenes.
Considerably larger and more challenging than any existing dataset (and encompassing multiple cities), it comes with 198k annotated video frames, 54k agent tubes, 3.9M bounding boxes and a total of 12.4M labels.
arXiv Detail & Related papers (2024-11-03T20:46:50Z)
- RoadBEV: Road Surface Reconstruction in Bird's Eye View
Road surface conditions, especially geometry profiles, enormously affect driving performance of autonomous vehicles. Vision-based online road reconstruction promisingly captures road information in advance.
Bird's-Eye-View (BEV) perception offers immense potential for more reliable and accurate reconstruction.
This paper proposes two simple yet effective models for road elevation reconstruction in BEV, named RoadBEV-mono and RoadBEV-stereo.
arXiv Detail & Related papers (2024-04-09T20:24:29Z)
- RoadRunner -- Learning Traversability Estimation for Autonomous Off-road Driving
We present RoadRunner, a framework capable of predicting terrain traversability and an elevation map directly from camera and LiDAR sensor inputs.
RoadRunner enables reliable autonomous navigation by fusing sensory information, handling uncertainty, and generating contextually informed predictions.
We demonstrate the effectiveness of RoadRunner in enabling safe and reliable off-road navigation at high speeds in multiple real-world driving scenarios through unstructured desert environments.
arXiv Detail & Related papers (2024-02-29T16:47:54Z)
- Deep Perspective Transformation Based Vehicle Localization on Bird's Eye View
Traditional approaches rely on installing multiple sensors to simulate the environment.
We propose an alternative solution by generating a top-down representation of the scene.
We present an architecture that transforms perspective view RGB images into bird's-eye-view maps with segmented surrounding vehicles.
arXiv Detail & Related papers (2023-11-12T10:16:42Z)
- RoMe: Towards Large Scale Road Surface Reconstruction via Mesh Representation
RoMe is a novel framework designed for the robust reconstruction of large-scale road surfaces.
Our evaluations underscore RoMe's superiority in terms of speed, accuracy, and robustness.
RoMe's capability extends beyond mere reconstruction, offering significant value for autolabeling tasks in autonomous driving applications.
arXiv Detail & Related papers (2023-06-20T08:16:25Z)
- Generative AI-empowered Simulation for Autonomous Driving in Vehicular Mixed Reality Metaverses
In the vehicular mixed reality (MR) Metaverse, the distance between physical and virtual entities can be overcome.
Large-scale traffic and driving simulation via realistic data collection and fusion from the physical world is difficult and costly.
We propose an autonomous driving architecture, where generative AI is leveraged to synthesize unlimited conditioned traffic and driving data in simulations.
arXiv Detail & Related papers (2023-02-16T16:54:10Z)
- One Million Scenes for Autonomous Driving: ONCE Dataset
We introduce the ONCE dataset for 3D object detection in the autonomous driving scenario.
The data is selected from 144 driving hours, which is 20x longer than the largest 3D autonomous driving dataset available.
We reproduce and evaluate a variety of self-supervised and semi-supervised methods on the ONCE dataset.
arXiv Detail & Related papers (2021-06-21T12:28:08Z)
- ROAD: The ROad event Awareness Dataset for Autonomous Driving
ROAD is designed to test an autonomous vehicle's ability to detect road events.
It comprises 22 videos, annotated with bounding boxes showing the location in the image plane of each road event.
We also provide, as a baseline, a new incremental algorithm for online road event awareness based on applying RetinaNet along time.
arXiv Detail & Related papers (2021-02-23T09:48:56Z)
- Detecting 32 Pedestrian Attributes for Autonomous Vehicles
In this paper, we address the problem of jointly detecting pedestrians and recognizing 32 pedestrian attributes.
We introduce a Multi-Task Learning (MTL) model relying on a composite field framework, which achieves both goals in an efficient way.
We show competitive detection and attribute recognition results, as well as a more stable MTL training.
arXiv Detail & Related papers (2020-12-04T15:10:12Z)
- DMD: A Large-Scale Multi-Modal Driver Monitoring Dataset for Attention and Alertness Analysis
Vision is the richest and most cost-effective technology for Driver Monitoring Systems (DMS).
The lack of sufficiently large and comprehensive datasets is currently a bottleneck for the progress of DMS development.
In this paper, we introduce the Driver Monitoring dataset (DMD), an extensive dataset which includes real and simulated driving scenarios.
arXiv Detail & Related papers (2020-08-27T12:33:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.