MapFusion: A General Framework for 3D Object Detection with HDMaps
- URL: http://arxiv.org/abs/2103.05929v1
- Date: Wed, 10 Mar 2021 08:36:59 GMT
- Title: MapFusion: A General Framework for 3D Object Detection with HDMaps
- Authors: Jin Fang, Dingfu Zhou, Xibin Song, Liangjun Zhang
- Abstract summary: We propose MapFusion to integrate the map information into modern 3D object detector pipelines.
By fusing the map information, we can achieve 1.27 to 2.79 point improvements in mean Average Precision (mAP) on three strong 3D object detection baselines.
- Score: 17.482961825285013
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: 3D object detection is a key perception component in autonomous driving. Most
recent approaches are based on Lidar sensors only or fused with cameras. Maps
(e.g., High Definition Maps), a basic infrastructure for intelligent vehicles,
however, have not been well exploited for boosting object detection tasks. In
this paper, we propose a simple but effective framework - MapFusion to
integrate the map information into modern 3D object detector pipelines. In
particular, we design a FeatureAgg module for HD Map feature extraction and
fusion, and a MapSeg module as an auxiliary segmentation head for the detection
backbone. Our proposed MapFusion is detector independent and can be easily
integrated into different detectors. The experimental results of three
different baselines on a large public autonomous driving dataset demonstrate the
superiority of the proposed framework. By fusing the map information, we
achieve 1.27 to 2.79 point improvements in mean Average Precision (mAP) on
three strong 3D object detection baselines.
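The abstract's FeatureAgg idea (extract features from the HD map and fuse them with the detector's bird's-eye-view features) can be illustrated with a minimal sketch. The grid size, the `rasterize_map` helper, and channel-concatenation as the fusion operator are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def rasterize_map(polylines, grid=64):
    """Rasterize HD-map polylines (lists of (x, y) cell coordinates) into a
    binary BEV occupancy mask -- a stand-in for real map rendering."""
    mask = np.zeros((grid, grid), dtype=np.float32)
    for line in polylines:
        for x, y in line:
            mask[y, x] = 1.0
    return mask

def feature_agg(lidar_bev, map_mask):
    """FeatureAgg-style fusion sketch: lift the map mask into a feature
    channel and concatenate it with the LiDAR BEV feature map."""
    map_feat = map_mask[None]              # (1, H, W) map channel
    return np.concatenate([lidar_bev, map_feat], axis=0)

# Toy inputs: an 8-channel LiDAR BEV feature map and one map polyline.
lidar_bev = np.random.rand(8, 64, 64).astype(np.float32)
map_mask = rasterize_map([[(10, 10), (11, 10), (12, 10)]])
fused = feature_agg(lidar_bev, map_mask)
print(fused.shape)  # (9, 64, 64): original channels plus the map channel
```

The same rasterized mask could also serve as the target for a MapSeg-style auxiliary segmentation head, giving the detection backbone extra map supervision.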
Related papers
- SeSame: Simple, Easy 3D Object Detection with Point-Wise Semantics [0.7373617024876725]
In autonomous driving, 3D object detection provides more precise information for downstream tasks, including path planning and motion estimation.
We propose SeSame: a method aimed at enhancing semantic information in existing LiDAR-only based 3D object detection.
Experiments demonstrate the effectiveness of our method with performance improvements on the KITTI object detection benchmark.
arXiv Detail & Related papers (2024-03-11T08:17:56Z)
- InsMapper: Exploring Inner-instance Information for Vectorized HD Mapping [41.59891369655983]
InsMapper harnesses inner-instance information for vectorized high-definition mapping through transformers.
InsMapper surpasses the previous state-of-the-art method, demonstrating its effectiveness and generality.
arXiv Detail & Related papers (2023-08-16T17:58:28Z)
- Fully Sparse 3D Object Detection [57.05834683261658]
We build a fully sparse 3D object detector (FSD) for long-range LiDAR-based object detection.
FSD is built upon the general sparse voxel encoder and a novel sparse instance recognition (SIR) module.
SIR avoids the time-consuming neighbor queries in previous point-based methods by grouping points into instances.
arXiv Detail & Related papers (2022-07-20T17:01:33Z)
- Paint and Distill: Boosting 3D Object Detection with Semantic Passing Network [70.53093934205057]
3D object detection task from lidar or camera sensors is essential for autonomous driving.
We propose a novel semantic passing framework, named SPNet, to boost the performance of existing lidar-based 3D detection models.
arXiv Detail & Related papers (2022-07-12T12:35:34Z)
- DeepFusion: Lidar-Camera Deep Fusion for Multi-Modal 3D Object Detection [83.18142309597984]
Lidars and cameras are critical sensors that provide complementary information for 3D detection in autonomous driving.
We develop a family of generic multi-modal 3D detection models named DeepFusion, which is more accurate than previous methods.
arXiv Detail & Related papers (2022-03-15T18:46:06Z)
- A Versatile Multi-View Framework for LiDAR-based 3D Object Detection with Guidance from Panoptic Segmentation [9.513467995188634]
3D object detection using LiDAR data is an indispensable component for autonomous driving systems.
We propose a novel multi-task framework that jointly performs 3D object detection and panoptic segmentation.
arXiv Detail & Related papers (2022-03-04T04:57:05Z)
- VIN: Voxel-based Implicit Network for Joint 3D Object Detection and Segmentation for Lidars [12.343333815270402]
A unified neural network structure is presented for joint 3D object detection and point cloud segmentation.
We leverage rich supervision from both detection and segmentation labels rather than using just one of them.
arXiv Detail & Related papers (2021-07-07T02:16:20Z)
- M3DeTR: Multi-representation, Multi-scale, Mutual-relation 3D Object Detection with Transformers [78.48081972698888]
We present M3DeTR, which combines different point cloud representations with different feature scales based on multi-scale feature pyramids.
M3DeTR is the first approach that unifies multiple point cloud representations, feature scales, as well as models mutual relationships between point clouds simultaneously using transformers.
arXiv Detail & Related papers (2021-04-24T06:48:23Z)
- HDNET: Exploiting HD Maps for 3D Object Detection [99.49035895393934]
We show that High-Definition (HD) maps provide strong priors that can boost the performance and robustness of modern 3D object detectors.
We design a single stage detector that extracts geometric and semantic features from the HD maps.
As maps might not be available everywhere, we also propose a map prediction module that estimates the map on the fly from raw LiDAR data.
arXiv Detail & Related papers (2020-12-21T21:59:54Z)
- DOPS: Learning to Detect 3D Objects and Predict their 3D Shapes [54.239416488865565]
We propose a fast single-stage 3D object detection method for LIDAR data.
The core novelty of our method is a fast, single-pass architecture that both detects objects in 3D and estimates their shapes.
We find that our proposed method achieves state-of-the-art results on object detection in ScanNet scenes, improving over prior work by 5%, and attains top results on the Open dataset by a 3.4% margin.
arXiv Detail & Related papers (2020-04-02T17:48:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.