MapTR: Structured Modeling and Learning for Online Vectorized HD Map Construction
- URL: http://arxiv.org/abs/2208.14437v1
- Date: Tue, 30 Aug 2022 17:55:59 GMT
- Title: MapTR: Structured Modeling and Learning for Online Vectorized HD Map Construction
- Authors: Bencheng Liao, Shaoyu Chen, Xinggang Wang, Tianheng Cheng, Qian Zhang,
Wenyu Liu, Chang Huang
- Abstract summary: MapTR is a structured end-to-end framework for efficient online vectorized HD map construction.
MapTR achieves the best performance and efficiency among existing vectorized map construction approaches.
- Score: 33.30177029735497
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We present MapTR, a structured end-to-end framework for efficient online vectorized HD map construction. We propose a unified permutation-based modeling approach, i.e., modeling each map element as a point set with a group of equivalent permutations, which avoids ambiguity in the definition of map elements and eases learning. We adopt a hierarchical query embedding scheme to flexibly encode structured map information and perform hierarchical bipartite matching for map element learning. MapTR achieves the best performance and efficiency among existing vectorized map construction approaches on the nuScenes dataset. In particular, MapTR-nano runs at real-time inference speed (25.1 FPS) on an RTX 3090, 8x faster than the existing state-of-the-art camera-based method while achieving 3.3 higher mAP. MapTR-tiny significantly outperforms the existing state-of-the-art multi-modality method by 13.5 mAP while being faster. Qualitative results show that MapTR maintains stable and robust map construction quality in complex and varied driving scenes. Abundant demos are available at https://github.com/hustvl/MapTR to demonstrate its effectiveness in real-world scenarios. MapTR is of great practical value for autonomous driving. Code will be released to facilitate further research and application.
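To make the permutation-based modeling and the hierarchical bipartite matching concrete, below is a minimal illustrative sketch (ours, not the authors' released code). It treats a polyline's forward and reverse orderings, or a polygon's cyclic shifts in both directions, as equivalent permutations, scores a prediction with a permutation-invariant point-level cost, and assigns predicted elements to ground-truth elements with Hungarian matching. The function names, the plain L1 point distance, and the omission of the classification term in the matching cost are simplifying assumptions.

# Illustrative sketch only (assumed helper names; not the MapTR reference code).
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian solver

def equivalent_permutations(n_pts, closed):
    # Orderings treated as equivalent: forward/reverse for an open polyline,
    # all cyclic shifts in both directions for a closed polygon.
    base = np.arange(n_pts)
    perms = [np.roll(base, s) for s in range(n_pts)] if closed else [base]
    return perms + [p[::-1] for p in perms]

def point_level_cost(pred_pts, gt_pts, closed):
    # Permutation-invariant position cost: minimum summed L1 distance over
    # all equivalent orderings of the ground-truth point set.
    return min(np.abs(pred_pts - gt_pts[perm]).sum()
               for perm in equivalent_permutations(len(gt_pts), closed))

def instance_level_match(pred_elems, gt_elems, closed_flags):
    # Bipartite (Hungarian) matching between predicted and ground-truth map
    # elements; a full cost would also include a classification term.
    cost = np.array([[point_level_cost(p, g, c)
                      for g, c in zip(gt_elems, closed_flags)]
                     for p in pred_elems])
    rows, cols = linear_sum_assignment(cost)
    return [(int(r), int(c)) for r, c in zip(rows, cols)], float(cost[rows, cols].sum())

# Toy check: a predicted lane divider identical to the ground truth but stored
# in reverse order matches with zero cost, since both orderings are equivalent.
gt = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
pred = gt[::-1].copy()
print(instance_level_match([pred], [gt], [False]))  # ([(0, 0)], 0.0)

Under these assumptions, treating the whole group of equivalent orderings as one target removes the ambiguity of imposing a single fixed point order on each map element, which is the property the abstract attributes to the permutation-based modeling.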
Related papers
- MGMapNet: Multi-Granularity Representation Learning for End-to-End Vectorized HD Map Construction [75.93907511203317]
We propose MGMapNet (Multi-Granularity Map Network) to model map elements with a multi-granularity representation.
The proposed MGMapNet achieves state-of-the-art performance, surpassing MapTRv2 by 5.3 mAP on nuScenes and 4.4 mAP on Argoverse 2.
arXiv Detail & Related papers (2024-10-10T09:05:23Z)
- Leveraging Enhanced Queries of Point Sets for Vectorized Map Construction [15.324464723174533]
This paper introduces MapQR, an end-to-end method with an emphasis on enhancing query capabilities for constructing online vectorized maps.
MapQR utilizes a novel query design, called scatter-and-gather query, which explicitly models separate content and position parts.
The proposed MapQR achieves the best mean average precision (mAP) and maintains good efficiency on both nuScenes and Argoverse 2.
arXiv Detail & Related papers (2024-02-27T11:43:09Z)
- ADMap: Anti-disturbance framework for reconstructing online vectorized HD map [9.218463154577616]
This paper proposes the Anti-disturbance Map reconstruction framework (ADMap).
To mitigate point-order jitter, the framework consists of three modules: Multi-Scale Perception Neck, Instance Interactive Attention (IIA), and Vector Direction Difference Loss (VDDL).
arXiv Detail & Related papers (2024-01-24T01:37:27Z)
- ScalableMap: Scalable Map Learning for Online Long-Range Vectorized HD Map Construction [42.874195888422584]
We propose a novel end-to-end pipeline for online long-range vectorized high-definition (HD) map construction using on-board camera sensors.
We exploit the properties of map elements to improve the performance of map construction.
arXiv Detail & Related papers (2023-10-20T09:46:24Z)
- MapTRv2: An End-to-End Framework for Online Vectorized HD Map Construction [40.07726377230152]
High-definition (HD) maps provide abundant and precise static environmental information about the driving scene.
We present Map TRansformer, an end-to-end framework for online vectorized HD map construction.
arXiv Detail & Related papers (2023-08-10T17:56:53Z) - DETR Doesn't Need Multi-Scale or Locality Design [69.56292005230185]
This paper presents an improved DETR detector that maintains a "plain" nature.
It uses a single-scale feature map and global cross-attention calculations without specific locality constraints.
We show that two simple techniques are surprisingly effective within a plain design to compensate for the lack of multi-scale feature maps and locality constraints.
arXiv Detail & Related papers (2023-08-03T17:59:04Z)
- FastMapSVM: Classifying Complex Objects Using the FastMap Algorithm and Support-Vector Machines [12.728875331529345]
We present FastMapSVM, a novel framework for classifying complex objects.
FastMapSVM combines the strengths of FastMap and Support-Vector Machines.
We show that FastMapSVM's performance is comparable to that of other state-of-the-art methods.
arXiv Detail & Related papers (2022-04-07T18:01:16Z)
- HDMapGen: A Hierarchical Graph Generative Model of High Definition Maps [81.86923212296863]
HD maps precisely define road lanes and carry rich semantics of traffic rules.
Only a small number of real-world road topologies and geometries are available, which significantly limits our ability to test the self-driving stack.
We propose HDMapGen, a hierarchical graph generation model capable of producing high-quality and diverse HD maps.
arXiv Detail & Related papers (2021-06-28T17:59:30Z)
- HDNET: Exploiting HD Maps for 3D Object Detection [99.49035895393934]
We show that High-Definition (HD) maps provide strong priors that can boost the performance and robustness of modern 3D object detectors.
We design a single stage detector that extracts geometric and semantic features from the HD maps.
As maps might not be available everywhere, we also propose a map prediction module that estimates the map on the fly from raw LiDAR data.
arXiv Detail & Related papers (2020-12-21T21:59:54Z)