Leveraging GNSS and Onboard Visual Data from Consumer Vehicles for Robust Road Network Estimation
- URL: http://arxiv.org/abs/2408.01640v1
- Date: Sat, 3 Aug 2024 02:57:37 GMT
- Title: Leveraging GNSS and Onboard Visual Data from Consumer Vehicles for Robust Road Network Estimation
- Authors: Balázs Opra, Betty Le Dem, Jeffrey M. Walls, Dimitar Lukarski, Cyrill Stachniss
- Abstract summary: This paper addresses the challenge of road graph construction for autonomous vehicles.
We propose using global navigation satellite system (GNSS) traces and basic image data acquired from these standard sensors in consumer vehicles.
We exploit the spatial information in the data by framing the problem as a road centerline semantic segmentation task using a convolutional neural network.
- Score: 18.236615392921273
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Maps are essential for diverse applications, such as vehicle navigation and autonomous robotics. Both require spatial models for effective route planning and localization. This paper addresses the challenge of road graph construction for autonomous vehicles. Despite recent advances, creating a road graph remains labor-intensive and has yet to achieve full automation. The goal of this paper is to generate such graphs automatically and accurately. Modern cars are equipped with onboard sensors used for today's advanced driver assistance systems like lane keeping. We propose using global navigation satellite system (GNSS) traces and basic image data acquired from these standard sensors in consumer vehicles to estimate road-level maps with minimal effort. We exploit the spatial information in the data by framing the problem as a road centerline semantic segmentation task using a convolutional neural network. We also utilize the data's time series nature to refine the neural network's output by using map matching. We implemented and evaluated our method using a fleet of real consumer vehicles, only using the deployed onboard sensors. Our evaluation demonstrates that our approach not only matches existing methods on simpler road configurations but also significantly outperforms them on more complex road geometries and topologies. This work received the 2023 Woven by Toyota Invention Award.
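The paper's actual pipeline is not reproduced here, but the core preprocessing idea — turning many noisy GNSS traces into a grid image that a segmentation network can consume — can be sketched in a few lines. This is an illustrative simplification with hypothetical names and parameters, not the authors' code: it counts how many distinct traces traverse each grid cell, so that frequently traversed cells approximate road centerlines.

```python
from collections import Counter

def rasterize_traces(traces, cell_size=5.0):
    """Rasterize GNSS traces (lists of (x, y) points in metres) into a
    per-cell hit-count grid. Illustrative sketch, not the paper's code."""
    counts = Counter()
    for trace in traces:
        # Count each trace at most once per cell to reduce bias from
        # vehicles idling in one spot.
        cells = {(int(x // cell_size), int(y // cell_size)) for x, y in trace}
        for c in cells:
            counts[c] += 1
    return counts

def candidate_road_cells(counts, min_traces=2):
    """Cells traversed by at least `min_traces` distinct traces."""
    return {c for c, n in counts.items() if n >= min_traces}

# Hypothetical data: two vehicles driving the same east-west road,
# plus a single stray trace in a parking area.
t1 = [(0.0, 2.0), (6.0, 2.5), (12.0, 2.0)]
t2 = [(1.0, 1.5), (7.0, 2.0), (13.0, 2.5)]
t3 = [(0.0, 40.0)]
grid = rasterize_traces([t1, t2, t3])
roads = candidate_road_cells(grid)
```

In the paper this role is played by a learned CNN operating on rasterized trace and image data, with a map-matching pass afterwards to exploit the time-series structure; the threshold rule above merely stands in for that learned decision.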
Related papers
- Neural Semantic Map-Learning for Autonomous Vehicles [85.8425492858912]
We present a mapping system that fuses local submaps gathered from a fleet of vehicles at a central instance to produce a coherent map of the road environment.
Our method jointly aligns and merges the noisy and incomplete local submaps using a scene-specific Neural Signed Distance Field.
We leverage memory-efficient sparse feature-grids to scale to large areas and introduce a confidence score to model uncertainty in scene reconstruction.
arXiv Detail & Related papers (2024-10-10T10:10:03Z)
- G-MEMP: Gaze-Enhanced Multimodal Ego-Motion Prediction in Driving [71.9040410238973]
We focus on inferring the ego trajectory of a driver's vehicle using their gaze data.
Next, we develop G-MEMP, a novel multimodal ego-trajectory prediction network that combines GPS and video input with gaze data.
The results show that G-MEMP significantly outperforms state-of-the-art methods in both benchmarks.
arXiv Detail & Related papers (2023-12-13T23:06:30Z)
- RSRD: A Road Surface Reconstruction Dataset and Benchmark for Safe and Comfortable Autonomous Driving [67.09546127265034]
Road surface reconstruction helps to enhance the analysis and prediction of vehicle responses for motion planning and control systems.
We introduce the Road Surface Reconstruction dataset, a real-world, high-resolution, and high-precision dataset collected with a specialized platform in diverse driving conditions.
It covers common road types containing approximately 16,000 pairs of stereo images, original point clouds, and ground-truth depth/disparity maps.
arXiv Detail & Related papers (2023-10-03T17:59:32Z)
- Energy-Based Models for Cross-Modal Localization using Convolutional Transformers [52.27061799824835]
We present a novel framework for localizing a ground vehicle mounted with a range sensor against satellite imagery in the absence of GPS.
We propose a method using convolutional transformers that performs accurate metric-level localization in a cross-modal manner.
We train our model end-to-end and demonstrate our approach achieving higher accuracy than the state-of-the-art on KITTI, Pandaset, and a custom dataset.
arXiv Detail & Related papers (2023-06-06T21:27:08Z)
- Haul Road Mapping from GPS Traces [0.0]
This paper investigates the possibility of automatically deriving an accurate representation of the road network using GPS data available from haul trucks operating on site.
Based on shortcomings seen in all tested algorithms, a post-processing step is developed which geometrically analyses the created road map for artefacts typical of free-drive areas on mine sites.
arXiv Detail & Related papers (2022-06-27T04:35:06Z)
- Exploring Map-based Features for Efficient Attention-based Vehicle Motion Prediction [3.222802562733787]
Motion prediction of multiple agents is a crucial task in arbitrarily complex environments.
We show how to achieve competitive performance on the Argoverse 1.0 Benchmark using efficient attention-based models.
arXiv Detail & Related papers (2022-05-25T22:38:11Z)
- Real Time Monocular Vehicle Velocity Estimation using Synthetic Data [78.85123603488664]
We look at the problem of estimating the velocity of road vehicles from a camera mounted on a moving car.
We propose a two-step approach where first an off-the-shelf tracker is used to extract vehicle bounding boxes and then a small neural network is used to regress the vehicle velocity.
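The second step of this two-step approach is a learned regressor, which is not shown here; what such a network implicitly captures, though, is classical pinhole geometry relating bounding-box scale to depth. A hedged baseline sketch (focal length, vehicle height, and box sizes are all hypothetical values):

```python
def depth_from_bbox(f_px, real_height_m, bbox_height_px):
    """Pinhole model: depth Z = f * H / h."""
    return f_px * real_height_m / bbox_height_px

def relative_velocity(f_px, real_height_m, h1_px, h2_px, dt_s):
    """Closing speed of a tracked vehicle from two bounding-box heights.
    Positive means the target is moving away. Classical geometric
    baseline, not the paper's learned regressor."""
    z1 = depth_from_bbox(f_px, real_height_m, h1_px)
    z2 = depth_from_bbox(f_px, real_height_m, h2_px)
    return (z2 - z1) / dt_s

# Assumed values: 1000 px focal length, 1.5 m vehicle height,
# bounding box shrinking from 100 px to 75 px over 0.5 s.
v = relative_velocity(1000, 1.5, 100, 75, 0.5)
```

The neural regressor in the paper replaces this closed-form step, which lets it absorb tracker noise and unknown vehicle dimensions that the geometric model must assume.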
arXiv Detail & Related papers (2021-09-16T13:10:27Z)
- Radar-based Automotive Localization using Landmarks in a Multimodal Sensor Graph-based Approach [0.0]
In this paper, we address the problem of localization with automotive-grade radars.
The system uses landmarks and odometry information as an abstraction layer.
A single, semantic landmark map is used and maintained for all sensors.
arXiv Detail & Related papers (2021-04-29T07:35:20Z)
- Fusion of neural networks, for LIDAR-based evidential road mapping [3.065376455397363]
We introduce RoadSeg, a new convolutional architecture that is optimized for road detection in LIDAR scans.
RoadSeg is used to classify individual LIDAR points as either belonging to the road, or not.
We thus secondly present an evidential road mapping algorithm, that fuses consecutive road detection results.
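Evidential fusion of consecutive detections is typically built on Dempster's rule of combination. The sketch below applies that rule over the frame {road, not-road}, with an explicit ignorance mass; it illustrates the general technique, not RoadSeg's exact formulation, and the mass values are hypothetical:

```python
def fuse_masses(m1, m2):
    """Dempster's rule over the frame {road, not_road}. Each mass dict
    has keys 'road', 'not', and 'unk' (ignorance) summing to 1.
    Illustrative sketch of evidential fusion."""
    # Conflicting mass: one source says road where the other says not.
    conflict = m1['road'] * m2['not'] + m1['not'] * m2['road']
    norm = 1.0 - conflict
    return {
        'road': (m1['road'] * m2['road'] + m1['road'] * m2['unk']
                 + m1['unk'] * m2['road']) / norm,
        'not':  (m1['not'] * m2['not'] + m1['not'] * m2['unk']
                 + m1['unk'] * m2['not']) / norm,
        'unk':  m1['unk'] * m2['unk'] / norm,
    }

# Two consecutive scans, both weakly supporting "road" for one cell.
scan_a = {'road': 0.6, 'not': 0.1, 'unk': 0.3}
scan_b = {'road': 0.5, 'not': 0.2, 'unk': 0.3}
fused = fuse_masses(scan_a, scan_b)
```

Note how agreement strengthens the fused road mass beyond either input while the ignorance mass shrinks, which is exactly the behaviour that makes evidential accumulation attractive for mapping.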
arXiv Detail & Related papers (2021-02-05T18:14:36Z)
- Autonomous Navigation through intersections with Graph Convolutional Networks and Conditional Imitation Learning for Self-driving Cars [10.080958939027363]
In autonomous driving, navigation through unsignaled intersections is a challenging task.
We propose a novel branched network, G-CIL, for navigation policy learning.
Our end-to-end trainable neural network outperforms the baselines with higher success rate and shorter navigation time.
arXiv Detail & Related papers (2021-02-01T07:33:12Z)
- Radar-based Dynamic Occupancy Grid Mapping and Object Detection [55.74894405714851]
In recent years, the classical occupancy grid map approach has been extended to dynamic occupancy grid maps.
This paper presents the further development of a previous approach.
The data of multiple radar sensors are fused, and a grid-based object tracking and mapping method is applied.
arXiv Detail & Related papers (2020-08-09T09:26:30Z)
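The classical occupancy grid update that dynamic grid maps extend is a per-cell Bayesian log-odds accumulation. A minimal sketch of that static-grid step (the sensor-model probability and measurement sequence are hypothetical; the dynamic grids in the paper additionally track per-cell velocity):

```python
import math

def logit(p):
    """Log-odds of a probability."""
    return math.log(p / (1.0 - p))

def update_cell(l_prev, p_meas):
    """Bayesian log-odds update of one grid cell given an inverse
    sensor model probability p_meas for the latest measurement."""
    return l_prev + logit(p_meas)

def occupancy(l):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

# Three consecutive radar hits, each suggesting occupancy 0.7.
l = 0.0  # log-odds of the 0.5 prior
for _ in range(3):
    l = update_cell(l, 0.7)
p = occupancy(l)
```

Working in log-odds makes repeated measurement fusion a simple addition per cell, which is why the formulation scales to full radar grids.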
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and accepts no responsibility for any consequences of its use.