Fully Sparse Fusion for 3D Object Detection
- URL: http://arxiv.org/abs/2304.12310v3
- Date: Sun, 28 Apr 2024 05:54:15 GMT
- Title: Fully Sparse Fusion for 3D Object Detection
- Authors: Yingyan Li, Lue Fan, Yang Liu, Zehao Huang, Yuntao Chen, Naiyan Wang, Zhaoxiang Zhang
- Abstract summary: Currently prevalent multimodal 3D detection methods are built upon LiDAR-based detectors that usually use dense Bird's-Eye-View feature maps.
Fully sparse architectures are gaining attention as they are highly efficient in long-range perception.
In this paper, we study how to effectively leverage image modality in the emerging fully sparse architecture.
- Score: 69.32694845027927
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Currently prevalent multimodal 3D detection methods are built upon LiDAR-based detectors that usually use dense Bird's-Eye-View (BEV) feature maps. However, the cost of such BEV feature maps is quadratic in the detection range, making them unsuitable for long-range detection. Fully sparse architectures are gaining attention as they are highly efficient in long-range perception. In this paper, we study how to effectively leverage the image modality in the emerging fully sparse architecture. In particular, utilizing instance queries, our framework integrates the well-studied 2D instance segmentation into the LiDAR side, in parallel with the 3D instance segmentation part of the fully sparse detector. This design achieves a uniform query-based fusion framework on both the 2D and 3D sides while maintaining the fully sparse characteristic. Extensive experiments showcase state-of-the-art results on the widely used nuScenes dataset and the long-range Argoverse 2 dataset. Notably, the inference speed of the proposed method under the long-range LiDAR perception setting is 2.7 $\times$ faster than that of other state-of-the-art multimodal 3D detection methods. Code will be released at \url{https://github.com/BraveGroup/FullySparseFusion}.
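As a rough illustration of the query-based fusion described in the abstract, the sketch below pairs per-instance queries from a 2D instance segmentation branch with per-instance queries from the fully sparse 3D branch and fuses them per instance rather than on a dense BEV map. All function names, feature dimensions, and the cross-attention fusion rule are assumptions made for illustration; the abstract does not specify the actual heads.

```python
import numpy as np

# Hypothetical sizes; the real model dimensions are not given in the abstract.
NUM_2D_QUERIES, NUM_3D_QUERIES, FEAT_DIM = 20, 30, 64
rng = np.random.default_rng(0)

def instance_queries_from_images(images):
    """Stand-in for a 2D instance segmentation head: one query feature per 2D instance."""
    return rng.standard_normal((NUM_2D_QUERIES, FEAT_DIM))

def instance_queries_from_lidar(points):
    """Stand-in for the fully sparse 3D instance segmentation branch: one query per 3D instance."""
    return rng.standard_normal((NUM_3D_QUERIES, FEAT_DIM))

def fuse_queries(q2d, q3d):
    """Toy cross-attention from 3D instance queries to 2D instance queries.

    Each 3D query attends to all 2D queries and absorbs the attended image
    feature, so fusion stays per instance (query-based) instead of going
    through a dense BEV feature map.
    """
    attn = q3d @ q2d.T / np.sqrt(FEAT_DIM)               # (N3d, N2d) similarity
    attn = np.exp(attn - attn.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)               # softmax over 2D queries
    return q3d + attn @ q2d                                # residual per-instance fusion

q2d = instance_queries_from_images(images=None)
q3d = instance_queries_from_lidar(points=None)
fused = fuse_queries(q2d, q3d)                             # (N3d, FEAT_DIM), fed to a box head
print(fused.shape)
```

Because the number of queries scales with the number of instances rather than with the detection range, this kind of fusion keeps the cost independent of how large the BEV area is, which is the motivation stated in the abstract.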
Related papers
- What Matters in Range View 3D Object Detection [15.147558647138629]
LiDAR-based perception pipelines rely on 3D object detection models to interpret complex scenes.
We achieve state-of-the-art performance among range-view 3D object detection models without using multiple techniques proposed in past range-view literature.
arXiv Detail & Related papers (2024-07-23T18:42:37Z)
- OV-Uni3DETR: Towards Unified Open-Vocabulary 3D Object Detection via Cycle-Modality Propagation [67.56268991234371]
OV-Uni3DETR achieves state-of-the-art performance across various scenarios, surpassing existing methods by more than 6% on average.
Code and pre-trained models will be released later.
arXiv Detail & Related papers (2024-03-28T17:05:04Z)
- SparseFusion: Efficient Sparse Multi-Modal Fusion Framework for Long-Range 3D Perception [47.000734648271006]
We introduce SparseFusion, a novel multi-modal fusion framework built upon sparse 3D features to facilitate efficient long-range perception.
The proposed module introduces sparsity from both the semantic and geometric aspects, filling only the grids in which foreground objects potentially reside.
On the long-range Argoverse 2 dataset, SparseFusion reduces the memory footprint and accelerates inference by about a factor of two compared to dense detectors.
arXiv Detail & Related papers (2024-03-15T05:59:10Z)
- Robust 3D Object Detection from LiDAR-Radar Point Clouds via Cross-Modal Feature Augmentation [7.364627166256136]
This paper presents a novel framework for robust 3D object detection from point clouds via cross-modal hallucination.
We introduce multiple alignments on both spatial and feature levels to achieve simultaneous backbone refinement and hallucination generation.
Experiments on the View-of-Delft dataset show that our proposed method outperforms the state-of-the-art (SOTA) methods for both radar and LiDAR object detection.
arXiv Detail & Related papers (2023-09-29T15:46:59Z)
- Far3D: Expanding the Horizon for Surround-view 3D Object Detection [15.045811199986924]
This paper proposes a novel sparse query-based framework, dubbed Far3D.
By utilizing high-quality 2D object priors, we generate 3D adaptive queries that complement the 3D global queries.
We demonstrate SoTA performance on the challenging Argoverse 2 dataset, covering a wide perception range of 150 meters.
arXiv Detail & Related papers (2023-08-18T15:19:17Z)
- Super Sparse 3D Object Detection [48.684300007948906]
LiDAR-based 3D object detection plays an ever-increasing role in long-range perception for autonomous driving.
To enable efficient long-range detection, we first propose a fully sparse object detector termed FSD.
FSD++ generates residual points, which indicate the point changes between consecutive frames.
arXiv Detail & Related papers (2023-01-05T17:03:56Z)
- Fully Sparse 3D Object Detection [57.05834683261658]
We build a fully sparse 3D object detector (FSD) for long-range LiDAR-based object detection.
FSD is built upon the general sparse voxel encoder and a novel sparse instance recognition (SIR) module.
SIR avoids the time-consuming neighbor queries in previous point-based methods by grouping points into instances; a toy sketch of this grouping idea is given after this list.
arXiv Detail & Related papers (2022-07-20T17:01:33Z)
- BirdNet+: End-to-End 3D Object Detection in LiDAR Bird's Eye View [117.44028458220427]
On-board 3D object detection in autonomous vehicles often relies on geometry information captured by LiDAR devices.
We present a fully end-to-end 3D object detection framework that can infer oriented 3D boxes solely from BEV images.
arXiv Detail & Related papers (2020-03-09T15:08:40Z)
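Referring back to the sparse instance recognition (SIR) idea in the Fully Sparse 3D Object Detection entry above, the following is a minimal sketch of grouping points into instances and pooling per instance, so that no radius or kNN neighbor queries are needed. The function name, the mean-pooling rule, and the toy data are assumptions for illustration; the abstract only states that points are grouped into instances.

```python
import numpy as np

def sparse_instance_pooling(point_feats, instance_ids):
    """Pool per-point features into one feature per instance with a single group-by.

    Given an instance id for every foreground point (e.g. from an earlier
    vote-and-cluster step), instance features fall out of a scatter-add plus
    normalization; no per-point neighbor search is performed.
    """
    uniq, inverse = np.unique(instance_ids, return_inverse=True)
    pooled = np.zeros((len(uniq), point_feats.shape[1]))
    np.add.at(pooled, inverse, point_feats)            # sum point features per instance
    counts = np.bincount(inverse, minlength=len(uniq)).astype(float)
    return uniq, pooled / counts[:, None]              # mean-pool: one feature per instance

# Toy data: 6 points with 4 feature channels, belonging to 2 instances.
feats = np.arange(24, dtype=float).reshape(6, 4)
ids = np.array([7, 7, 3, 3, 3, 7])                     # instance id per point
instances, inst_feats = sparse_instance_pooling(feats, ids)
print(instances)         # [3 7]
print(inst_feats.shape)  # (2, 4)
```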
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.