*: Improving the 3D detector by introducing Voxel2Pillar feature encoding and extracting multi-scale features
- URL: http://arxiv.org/abs/2405.09828v4
- Date: Wed, 13 Nov 2024 08:34:48 GMT
- Title: *: Improving the 3D detector by introducing Voxel2Pillar feature encoding and extracting multi-scale features
- Authors: Xusheng Li, Chengliang Wang, Shumao Wang, Zhuo Zeng, Ji Liu
- Abstract summary: Current 3D detectors commonly use feature pyramid networks to obtain large-scale features.
Since pillar-based schemes require much less computation than voxel-based schemes, they are more suitable for constructing real-time 3D detectors.
We propose the Voxel2Pillar feature encoding, which uses a sparse convolution to construct pillars with richer point cloud features.
- Score: 9.15169530632709
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The multi-line LiDAR is widely used in autonomous vehicles, so point cloud-based 3D detectors are essential for autonomous driving. Extracting rich multi-scale features is crucial for point cloud-based 3D detectors in autonomous driving because object sizes differ significantly across categories. However, because of real-time requirements, large-size convolution kernels are rarely used to extract large-scale features in the backbone. Current 3D detectors commonly use feature pyramid networks to obtain large-scale features; however, objects containing fewer points are further lost during down-sampling, resulting in degraded performance. Since pillar-based schemes require much less computation than voxel-based schemes, they are more suitable for constructing real-time 3D detectors. Hence, we propose the *, a pillar-based scheme. We redesign the feature encoding, the backbone, and the neck of the 3D detector. We propose the Voxel2Pillar feature encoding, which uses a sparse convolution constructor to build pillars with richer point cloud features, especially height features. Voxel2Pillar adds more learnable parameters to the feature encoding, giving the initial pillars stronger representational ability. We extract multi-scale and large-scale features in the proposed fully sparse backbone, which does not rely on large-size convolutional kernels; the backbone is built from the proposed multi-scale feature extraction module. The neck consists of the proposed sparse ConvNeXt, whose simple structure significantly improves detection performance. We validate the effectiveness of the proposed * on the Waymo Open Dataset: detection accuracy is improved for vehicles, pedestrians, and cyclists. We also verify the effectiveness of each proposed module through detailed ablation studies.
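The Voxel2Pillar encoding described above collapses a voxel grid into pillars with a learnable step along the height axis, so the resulting pillar features retain height information rather than discarding it as plain pillarization does. The sketch below illustrates that idea only: the paper uses sparse convolutions, while this self-contained stand-in uses a dense 3D convolution, and every concrete choice (the module name Voxel2PillarDense, channel widths, the z_bins height extent) is an assumption for illustration, not the paper's configuration.

```python
# Illustrative sketch of the Voxel2Pillar idea (assumptions noted above):
# aggregate voxel features along the height (z) axis with a learnable kernel,
# producing a BEV pillar map that keeps learned height features.
import torch
import torch.nn as nn

class Voxel2PillarDense(nn.Module):
    """Dense stand-in for the paper's sparse-convolution pillar constructor."""

    def __init__(self, in_ch: int = 16, out_ch: int = 64, z_bins: int = 8):
        super().__init__()
        # One learnable convolution spans the full height extent, so pillar
        # features become a learned mixture of all height slices.
        self.collapse = nn.Conv3d(in_ch, out_ch, kernel_size=(z_bins, 1, 1))
        self.norm = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, voxels: torch.Tensor) -> torch.Tensor:
        # voxels: (B, C, Z, Y, X) dense voxel features
        pillars = self.collapse(voxels)   # (B, out_ch, 1, Y, X)
        pillars = pillars.squeeze(2)      # (B, out_ch, Y, X) BEV pillar map
        return self.act(self.norm(pillars))

if __name__ == "__main__":
    encoder = Voxel2PillarDense(in_ch=16, out_ch=64, z_bins=8)
    bev = encoder(torch.randn(2, 16, 8, 128, 128))
    print(bev.shape)  # torch.Size([2, 64, 128, 128])
```

A production version would swap the dense nn.Conv3d for a sparse convolution (e.g., from the spconv library) so that empty voxels cost nothing, which is what makes such an encoding compatible with a fully sparse backbone.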
Related papers
- GO-N3RDet: Geometry Optimized NeRF-enhanced 3D Object Detector [22.82676897012763]
GO-N3RDet is a scene-geometry optimized multi-view 3D object detector enhanced by neural radiance fields.
We introduce a unique 3D positional information embedded voxel optimization mechanism to fuse multi-view features.
Our unique modules synergistically form an end-to-end neural model that establishes a new state of the art in NeRF-based multi-view 3D detection.
arXiv Detail & Related papers (2025-03-19T13:51:00Z) - SparseVoxFormer: Sparse Voxel-based Transformer for Multi-modal 3D Object Detection [12.941263635455915]
Most previous 3D object detection methods utilize the Bird's Eye View (BEV) space for intermediate feature representation.
This paper focuses on the sparse nature of LiDAR point cloud data.
We introduce a novel sparse voxel-based transformer network for 3D object detection, dubbed SparseVoxFormer.
arXiv Detail & Related papers (2025-03-11T06:52:25Z) - Multi-scale Feature Fusion with Point Pyramid for 3D Object Detection [18.41721888099563]
This paper proposes the Point Pyramid RCNN (POP-RCNN), a feature pyramid-based framework for 3D object detection on point clouds.
The proposed method can be applied to a variety of existing frameworks to increase feature richness, especially for long-distance detection.
arXiv Detail & Related papers (2024-09-06T20:13:14Z) - PVAFN: Point-Voxel Attention Fusion Network with Multi-Pooling Enhancing for 3D Object Detection [59.355022416218624]
The integration of point and voxel representations is becoming more common in LiDAR-based 3D object detection.
We propose a novel two-stage 3D object detector, called the Point-Voxel Attention Fusion Network (PVAFN).
PVAFN uses a multi-pooling strategy to integrate both multi-scale and region-specific information effectively.
arXiv Detail & Related papers (2024-08-26T19:43:01Z) - HEDNet: A Hierarchical Encoder-Decoder Network for 3D Object Detection in Point Clouds [19.1921315424192]
3D object detection in point clouds is important for autonomous driving systems.
A primary challenge in 3D object detection stems from the sparse distribution of points within the 3D scene.
We propose HEDNet, a hierarchical encoder-decoder network for 3D object detection.
arXiv Detail & Related papers (2023-10-31T07:32:08Z) - 3D Small Object Detection with Dynamic Spatial Pruning [62.72638845817799]
We propose an efficient feature pruning strategy for 3D small object detection.
We present a multi-level 3D detector named DSPDet3D which benefits from high spatial resolution.
It takes less than 2 s to directly process a whole building consisting of more than 4500k points while detecting almost all objects.
arXiv Detail & Related papers (2023-05-05T17:57:04Z) - CAGroup3D: Class-Aware Grouping for 3D Object Detection on Point Clouds [55.44204039410225]
We present a novel two-stage fully sparse convolutional 3D object detection framework, named CAGroup3D.
Our proposed method first generates some high-quality 3D proposals by leveraging the class-aware local group strategy on the object surface voxels.
To recover the features of missed voxels due to incorrect voxel-wise segmentation, we build a fully sparse convolutional RoI pooling module.
arXiv Detail & Related papers (2022-10-09T13:38:48Z) - PillarNet: Real-Time and High-Performance Pillar-based 3D Object Detection [4.169126928311421]
Real-time and high-performance 3D object detection is of critical importance for autonomous driving.
Recent top-performing 3D object detectors mainly rely on point-based or 3D voxel-based convolutions.
We develop a real-time and high-performance pillar-based detector, dubbed PillarNet.
arXiv Detail & Related papers (2022-05-16T00:14:50Z) - PiFeNet: Pillar-Feature Network for Real-Time 3D Pedestrian Detection from Point Cloud [64.12626752721766]
We present PiFeNet, an efficient real-time 3D detector for pedestrian detection from point clouds.
We address two challenges that 3D object detection frameworks encounter when detecting pedestrians: the low expressiveness of pillar features and the small occupation areas of pedestrians in point clouds.
Our approach is ranked 1st on the KITTI pedestrian BEV and 3D leaderboards while running at 26 frames per second (FPS), and achieves state-of-the-art performance on the nuScenes detection benchmark.
arXiv Detail & Related papers (2021-12-31T13:41:37Z) - EGFN: Efficient Geometry Feature Network for Fast Stereo 3D Object Detection [51.52496693690059]
Fast stereo-based 3D object detectors lag far behind high-precision-oriented methods in accuracy.
We argue that the main reason is the missing or poor 3D geometry feature representation in fast stereo-based methods.
The proposed EGFN outperforms YOLOStereo3D, the advanced fast method, by 5.16% on mAP$_{3d}$ at the cost of merely 12 ms of additional latency.
arXiv Detail & Related papers (2021-11-28T05:25:36Z) - Improved Pillar with Fine-grained Feature for 3D Object Detection [23.348710029787068]
3D object detection with LiDAR point clouds plays an important role in the autonomous driving perception module.
Existing point-based methods struggle to meet speed requirements because they must process too many raw points.
The 2D grid-based methods, such as PointPillar, can easily achieve a stable and efficient speed based on simple 2D convolution.
arXiv Detail & Related papers (2021-10-12T14:53:14Z) - HVPR: Hybrid Voxel-Point Representation for Single-stage 3D Object Detection [39.64891219500416]
3D object detection methods exploit either voxel-based or point-based features to represent 3D objects in a scene.
We introduce in this paper a novel single-stage 3D detection method that has the merits of both voxel-based and point-based features.
arXiv Detail & Related papers (2021-04-02T06:34:49Z) - D3Feat: Joint Learning of Dense Detection and Description of 3D Local Features [51.04841465193678]
We leverage a 3D fully convolutional network for 3D point clouds.
We propose a novel and practical learning mechanism that densely predicts both a detection score and a description feature for each 3D point.
Our method achieves state-of-the-art results in both indoor and outdoor scenarios.
arXiv Detail & Related papers (2020-03-06T12:51:09Z)