Rethinking Voxelization and Classification for 3D Object Detection
- URL: http://arxiv.org/abs/2301.04058v1
- Date: Tue, 10 Jan 2023 16:22:04 GMT
- Title: Rethinking Voxelization and Classification for 3D Object Detection
- Authors: Youshaa Murhij, Alexander Golodkov, Dmitry Yudin
- Abstract summary: The main challenge in 3D object detection from LiDAR point clouds is achieving real-time performance without affecting the reliability of the network.
We present a solution to improve network inference speed and precision at the same time by implementing a fast dynamic voxelizer.
- In addition, we propose a lightweight detection sub-head model for classifying predicted objects and filtering out falsely detected objects.
- Score: 68.8204255655161
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The main challenge in 3D object detection from LiDAR point clouds is
achieving real-time performance without affecting the reliability of the
network. In other words, the detecting network must be confident enough about
its predictions. In this paper, we present a solution to improve network
inference speed and precision at the same time by implementing a fast dynamic
voxelizer that works on fast pillar-based models in the same way a voxelizer
works on slow voxel-based models. In addition, we propose a lightweight
detection sub-head model that classifies predicted objects and filters out
falsely detected objects, significantly improving model precision at negligible
time and computing cost. The developed code is publicly available at:
https://github.com/YoushaaMurhij/RVCDet.
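The abstract names two techniques: a dynamic voxelizer that groups raw LiDAR points into pillars without a fixed per-pillar point budget, and a lightweight sub-head that re-scores predicted boxes to suppress false positives. The listing does not reproduce the implementation, so below is a minimal PyTorch sketch of dynamic pillar voxelization under common assumptions (a BEV pillar grid with mean-pooled per-pillar features); the function name `dynamic_voxelize`, the point-cloud range, and the pillar size are illustrative and not taken from the RVCDet repository.

```python
# Hedged sketch of dynamic pillar voxelization: every point is assigned to a
# pillar via scatter operations, with no max-points-per-pillar truncation.
import torch


def dynamic_voxelize(points: torch.Tensor,
                     pc_range=(-51.2, -51.2, -5.0, 51.2, 51.2, 3.0),
                     pillar_size=(0.2, 0.2)):
    """points: (N, C) tensor whose first three channels are x, y, z.
    Returns mean-pooled per-pillar features and integer (ix, iy) pillar coords."""
    x_min, y_min, z_min, x_max, y_max, z_max = pc_range
    vx, vy = pillar_size

    # Keep only points inside the detection range.
    mask = ((points[:, 0] >= x_min) & (points[:, 0] < x_max) &
            (points[:, 1] >= y_min) & (points[:, 1] < y_max) &
            (points[:, 2] >= z_min) & (points[:, 2] < z_max))
    pts = points[mask]

    # Integer pillar index for every point, flattened into a single id.
    ix = ((pts[:, 0] - x_min) / vx).long()
    iy = ((pts[:, 1] - y_min) / vy).long()
    grid_w = int(round((x_max - x_min) / vx))
    flat = iy * grid_w + ix

    # Group points sharing a pillar id; `inverse` maps each point to its group.
    uniq, inverse = torch.unique(flat, return_inverse=True)

    # Scatter-mean the raw point features into one feature vector per pillar.
    feats = torch.zeros(uniq.numel(), pts.shape[1],
                        dtype=pts.dtype, device=pts.device)
    feats.index_add_(0, inverse, pts)
    counts = torch.zeros(uniq.numel(), device=pts.device)
    counts.index_add_(0, inverse, torch.ones_like(inverse, dtype=torch.float))
    feats = feats / counts.clamp(min=1).unsqueeze(1)

    coords = torch.stack((uniq % grid_w, uniq // grid_w), dim=1)  # (ix, iy)
    return feats, coords
```

In a pillar-based detector these per-pillar features would then be scattered into a dense BEV pseudo-image for the 2D backbone. For the second idea, the sketch below shows one plausible form of a lightweight classification sub-head that re-scores candidate boxes and drops low-confidence ones; the layer sizes and `keep_threshold` are assumptions, not the paper's values.

```python
# Hedged sketch of a small box-classification sub-head used as a false-positive filter.
import torch
import torch.nn as nn


class BoxFilterHead(nn.Module):
    """Tiny MLP predicting an object/background probability per candidate box."""

    def __init__(self, in_dim: int = 64, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, box_feats: torch.Tensor) -> torch.Tensor:
        return self.net(box_feats).squeeze(-1)   # (num_boxes,) confidence


def filter_boxes(boxes, box_feats, head, keep_threshold=0.3):
    """Keep only boxes whose sub-head confidence exceeds the threshold."""
    keep = head(box_feats) > keep_threshold
    return boxes[keep]
```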
Related papers
- VALO: A Versatile Anytime Framework for LiDAR-based Object Detection Deep Neural Networks [4.953750672237398]
This work addresses the challenge of adapting to dynamic deadline requirements for LiDAR object detection deep neural networks (DNNs).
We introduce VALO (Versatile Anytime algorithm for LiDAR Object detection), a novel data-centric approach that enables anytime computing of 3D LiDAR object detection DNNs.
We implement VALO on state-of-the-art 3D LiDAR object detection networks, namely CenterPoint and VoxelNext, and demonstrate its dynamic adaptability to a wide range of time constraints.
arXiv Detail & Related papers (2024-09-17T20:30:35Z) - Diffusion-based 3D Object Detection with Random Boxes [58.43022365393569]
Existing anchor-based 3D detection methods rely on the empirical setting of anchors, which makes the algorithms lack elegance.
Our proposed Diff3Det migrates the diffusion model to proposal generation for 3D object detection by considering the detection boxes as generative targets.
In the inference stage, the model progressively refines a set of random boxes to the prediction results.
arXiv Detail & Related papers (2023-09-05T08:49:53Z) - VoxelNeXt: Fully Sparse VoxelNet for 3D Object Detection and Tracking [78.25819070166351]
We propose VoxelNeXt for fully sparse 3D object detection.
Our core insight is to predict objects directly based on sparse voxel features, without relying on hand-crafted proxies.
Our strong sparse convolutional network VoxelNeXt detects and tracks 3D objects through voxel features entirely.
arXiv Detail & Related papers (2023-03-20T17:40:44Z) - Anytime-Lidar: Deadline-aware 3D Object Detection [5.491655566898372]
We propose a scheduling algorithm, which intelligently selects a subset of the components to make an effective time-accuracy trade-off on the fly.
We apply our approach to a state-of-the-art 3D object detection network, PointPillars, and evaluate its performance on the Jetson Xavier AGX platform.
arXiv Detail & Related papers (2022-08-25T16:07:10Z) - 3DVerifier: Efficient Robustness Verification for 3D Point Cloud Models [17.487852393066458]
Existing verification methods for point cloud models are time-consuming and computationally unattainable on large networks.
We propose 3DVerifier to tackle both challenges by adopting a linear relaxation function to bound the multiplication layer and combining forward and backward propagation.
Our approach achieves orders-of-magnitude improvements in verification efficiency for large networks, and the obtained certified bounds are also significantly tighter than those of state-of-the-art verifiers.
arXiv Detail & Related papers (2022-07-15T15:31:16Z) - Paint and Distill: Boosting 3D Object Detection with Semantic Passing
Network [70.53093934205057]
The 3D object detection task from lidar or camera sensors is essential for autonomous driving.
We propose a novel semantic passing framework, named SPNet, to boost the performance of existing lidar-based 3D detection models.
arXiv Detail & Related papers (2022-07-12T12:35:34Z) - Lite-FPN for Keypoint-based Monocular 3D Object Detection [18.03406686769539]
Keypoint-based monocular 3D object detection has made tremendous progress and achieved a great speed-accuracy trade-off.
We propose a lightweight feature pyramid network called Lite-FPN to achieve multi-scale feature fusion.
Our proposed method achieves significantly higher accuracy and frame rate at the same time.
arXiv Detail & Related papers (2021-05-01T14:44:31Z) - Voxel R-CNN: Towards High Performance Voxel-based 3D Object Detection [99.16162624992424]
We devise a simple but effective voxel-based framework, named Voxel R-CNN.
By taking full advantage of voxel features in a two-stage approach, our method achieves comparable detection accuracy with state-of-the-art point-based models.
Our results show that Voxel R-CNN delivers a higher detection accuracy while maintaining a real-time frame processing rate, i.e., a speed of 25 FPS on an NVIDIA 2080 Ti GPU.
arXiv Detail & Related papers (2020-12-31T17:02:46Z) - InfoFocus: 3D Object Detection for Autonomous Driving with Dynamic
Information Modeling [65.47126868838836]
We propose a novel 3D object detection framework with dynamic information modeling.
Coarse predictions are generated in the first stage via a voxel-based region proposal network.
Experiments are conducted on the large-scale nuScenes 3D detection benchmark.
arXiv Detail & Related papers (2020-07-16T18:27:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.