VMA: Divide-and-Conquer Vectorized Map Annotation System for Large-Scale
Driving Scene
- URL: http://arxiv.org/abs/2304.09807v2
- Date: Sun, 27 Aug 2023 13:58:18 GMT
- Title: VMA: Divide-and-Conquer Vectorized Map Annotation System for Large-Scale
Driving Scene
- Authors: Shaoyu Chen, Yunchi Zhang, Bencheng Liao, Jiafeng Xie, Tianheng Cheng,
Wei Sui, Qian Zhang, Chang Huang, Wenyu Liu, Xinggang Wang
- Abstract summary: We build a systematic vectorized map annotation framework (termed VMA) for efficiently generating HD maps of large-scale driving scenes.
VMA is highly efficient, requires negligible human effort, and is flexible in terms of spatial scale and element type.
On average, VMA takes 160 min to annotate a scene spanning hundreds of meters and reduces human cost by 52.3%.
- Score: 41.110429729268496
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: High-definition (HD) maps serve as the essential infrastructure of autonomous
driving. In this work, we build a systematic vectorized map annotation
framework (termed VMA) for efficiently generating HD maps of large-scale driving
scenes. We design a divide-and-conquer annotation scheme to solve the spatial
extensibility problem of HD map generation, and abstract map elements with a
variety of geometric patterns into a unified point-sequence representation, which
can be extended to most map elements in the driving scene. VMA is highly
efficient and extensible, requires negligible human effort, and is flexible in
terms of spatial scale and element type. We quantitatively and qualitatively
validate the annotation performance on real-world urban and highway scenes, as
well as the NYC Planimetric Database. VMA significantly improves map-generation
efficiency while requiring little human effort. On average, VMA takes 160 min to
annotate a scene spanning hundreds of meters and reduces human cost by 52.3%,
showing great application value. Code:
https://github.com/hustvl/VMA.
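The two core ideas in the abstract can be illustrated with a minimal sketch (hypothetical code, not the authors' implementation): map elements of any geometric pattern are stored as ordered point sequences, and a large scene is split into fixed-size tiles that are annotated independently and merged afterwards. All names (`MapElement`, `split_into_tiles`, `annotate_tile`, `merge`) are illustrative assumptions.

```python
# Minimal sketch of (1) point-sequence map-element representation and
# (2) a divide-and-conquer tiling scheme. Names are illustrative, not VMA's API.
from dataclasses import dataclass


@dataclass
class MapElement:
    kind: str                          # e.g. "lane_divider", "crosswalk"
    points: list[tuple[float, float]]  # ordered (x, y) point sequence


def split_into_tiles(x_min, x_max, tile_size):
    """Split a scene's extent into fixed-size tiles (1-D for brevity)."""
    tiles, x = [], x_min
    while x < x_max:
        tiles.append((x, min(x + tile_size, x_max)))
        x += tile_size
    return tiles


def annotate_tile(tile):
    """Placeholder for per-tile annotation (a model plus human check in VMA)."""
    lo, hi = tile
    return [MapElement("lane_divider", [(lo, 0.0), (hi, 0.0)])]


def merge(elements):
    """Concatenate point sequences of the same element kind across tiles."""
    merged = {}
    for e in elements:
        merged.setdefault(e.kind, []).extend(e.points)
    return [MapElement(k, pts) for k, pts in merged.items()]


tiles = split_into_tiles(0.0, 300.0, tile_size=100.0)
per_tile = [e for t in tiles for e in annotate_tile(t)]
scene_map = merge(per_tile)
print(len(tiles), len(scene_map))  # → 3 1
```

Because every element is just a point sequence, the same pipeline extends to new element types by adding a `kind` rather than a new geometric primitive.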
Related papers
- Neural Semantic Map-Learning for Autonomous Vehicles [85.8425492858912]
We present a mapping system that fuses local submaps gathered from a fleet of vehicles at a central instance to produce a coherent map of the road environment.
Our method jointly aligns and merges the noisy and incomplete local submaps using a scene-specific Neural Signed Distance Field.
We leverage memory-efficient sparse feature-grids to scale to large areas and introduce a confidence score to model uncertainty in scene reconstruction.
arXiv Detail & Related papers (2024-10-10T10:10:03Z)
- DeepAerialMapper: Deep Learning-based Semi-automatic HD Map Creation for Highly Automated Vehicles [0.0]
We introduce a semi-automatic method for creating HD maps from high-resolution aerial imagery.
Our method involves training neural networks to semantically segment aerial images into classes relevant to HD maps.
Exporting the map to the Lanelet2 format allows easy extension for different use cases.
arXiv Detail & Related papers (2024-10-01T15:05:05Z)
- Enhancing Online Road Network Perception and Reasoning with Standard Definition Maps [14.535963852751635]
We focus on leveraging lightweight and scalable priors-Standard Definition (SD) maps-in the development of online vectorized HD map representations.
A key finding is that SD map encoders are model agnostic and can be quickly adapted to new architectures that utilize bird's eye view (BEV) encoders.
Our results show that using SD maps as priors for the online mapping task can significantly speed up convergence and boost online centerline perception performance by 30% (mAP).
arXiv Detail & Related papers (2024-08-01T19:39:55Z)
- ScalableMap: Scalable Map Learning for Online Long-Range Vectorized HD Map Construction [42.874195888422584]
We propose a novel end-to-end pipeline for online long-range vectorized high-definition (HD) map construction using on-board camera sensors.
We exploit the properties of map elements to improve the performance of map construction.
arXiv Detail & Related papers (2023-10-20T09:46:24Z)
- Online Map Vectorization for Autonomous Driving: A Rasterization Perspective [58.71769343511168]
We introduce a new rasterization-based evaluation metric, which has superior sensitivity and is better suited to real-world autonomous driving scenarios.
We also propose MapVR (Map Vectorization via Rasterization), a novel framework that applies differentiable rasterization to precise vectorized outputs and then performs geometry-aware supervision on HD maps.
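The rasterization idea above can be illustrated with a minimal, non-differentiable sketch (hypothetical code, not from the MapVR paper): a predicted point sequence and a ground-truth one are both rendered onto a binary grid, where their overlap can be scored directly. The function name `rasterize_polyline` and the dense-sampling scheme are assumptions for illustration.

```python
import numpy as np


def rasterize_polyline(points, grid_size, samples_per_seg=50):
    """Rasterize an (x, y) point sequence onto a binary grid by dense sampling."""
    grid = np.zeros((grid_size, grid_size), dtype=np.uint8)
    for (x0, y0), (x1, y1) in zip(points[:-1], points[1:]):
        for t in np.linspace(0.0, 1.0, samples_per_seg):
            j = int(round(x0 + t * (x1 - x0)))  # column index from x
            i = int(round(y0 + t * (y1 - y0)))  # row index from y
            if 0 <= i < grid_size and 0 <= j < grid_size:
                grid[i, j] = 1
    return grid


# Compare a predicted polyline against a ground-truth one in raster space.
pred = rasterize_polyline([(1.0, 1.0), (8.0, 8.0)], grid_size=10)
gt = rasterize_polyline([(1.0, 2.0), (8.0, 8.0)], grid_size=10)
iou = (pred & gt).sum() / (pred | gt).sum()  # raster-space overlap score
```

In MapVR the rasterization step is differentiable, so a loss computed on the raster (rather than a simple IoU score as here) can propagate gradients back to the vectorized outputs.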
arXiv Detail & Related papers (2023-06-18T08:51:14Z)
- MV-Map: Offboard HD-Map Generation with Multi-view Consistency [29.797769409113105]
Bird's-eye-view (BEV) perception models can be useful for building high-definition maps (HD-Maps) with less human labor.
However, their results are often unreliable and exhibit noticeable inconsistencies in the predicted HD-Maps across different viewpoints.
This paper advocates a more practical 'offboard' HD-Map generation setup that removes the computation constraints.
arXiv Detail & Related papers (2023-05-15T17:59:15Z)
- HDMapGen: A Hierarchical Graph Generative Model of High Definition Maps [81.86923212296863]
HD maps provide precise definitions of road lanes along with rich semantics of traffic rules.
Only a small number of real-world road topologies and geometries are available, which significantly limits our ability to test the self-driving stack.
We propose HDMapGen, a hierarchical graph generation model capable of producing high-quality and diverse HD maps.
arXiv Detail & Related papers (2021-06-28T17:59:30Z)
- MP3: A Unified Model to Map, Perceive, Predict and Plan [84.07678019017644]
MP3 is an end-to-end approach to mapless driving where the input is raw sensor data and a high-level command.
We show that our approach is significantly safer, more comfortable, and can follow commands better than the baselines in challenging long-term closed-loop simulations.
arXiv Detail & Related papers (2021-01-18T00:09:30Z)
- HDNET: Exploiting HD Maps for 3D Object Detection [99.49035895393934]
We show that High-Definition (HD) maps provide strong priors that can boost the performance and robustness of modern 3D object detectors.
We design a single stage detector that extracts geometric and semantic features from the HD maps.
As maps might not be available everywhere, we also propose a map prediction module that estimates the map on the fly from raw LiDAR data.
arXiv Detail & Related papers (2020-12-21T21:59:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.