From Voxel to Point: IoU-guided 3D Object Detection for Point Cloud with
Voxel-to-Point Decoder
- URL: http://arxiv.org/abs/2108.03648v1
- Date: Sun, 8 Aug 2021 14:30:13 GMT
- Title: From Voxel to Point: IoU-guided 3D Object Detection for Point Cloud with
Voxel-to-Point Decoder
- Authors: Jiale Li and Hang Dai and Ling Shao and Yong Ding
- Abstract summary: We present an Intersection-over-Union (IoU) guided two-stage 3D object detector with a voxel-to-point decoder.
We propose a residual voxel-to-point decoder to extract the point features in addition to the map-view features from the voxel-based Region Proposal Network (RPN).
We propose a simple and efficient method to align the estimated IoUs to the refined proposal boxes as a more relevant localization confidence.
- Score: 79.39041453836793
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present an Intersection-over-Union (IoU) guided two-stage
3D object detector with a voxel-to-point decoder. To preserve the necessary
information from all raw points and maintain the high box recall of the
voxel-based Region Proposal Network (RPN), we propose a residual
voxel-to-point decoder to extract point features in addition to the map-view
features from the voxel-based RPN. We use 3D Region of Interest (RoI)
alignment to crop and align the features with the proposal boxes for accurate
perception of the object position.
The RoI-Aligned features are finally aggregated with the corner geometry
embeddings that can provide the potentially missing corner information in the
box refinement stage. We propose a simple and efficient method to align the
estimated IoUs to the refined proposal boxes as a more relevant localization
confidence. Comprehensive experiments on KITTI and the Waymo Open Dataset
demonstrate that our method, with its novel architecture, achieves
significant improvements over existing methods. The code is available at
https://github.com/jialeli1/From-Voxel-to-Point.
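To make the IoU-guided confidence idea concrete, the following is a minimal,
hypothetical Python/NumPy sketch. It assumes axis-aligned boxes and a simple
geometric-mean fusion of the classification score with the estimated IoU; the
function names and the alpha knob are illustrative assumptions, and the
paper's actual procedure for aligning the estimated IoUs to the refined boxes
is more involved than this.

    # Illustrative sketch only; not the authors' implementation.
    import numpy as np

    def aligned_iou_3d(box_a, box_b):
        """Axis-aligned 3D IoU for boxes given as arrays (x, y, z, dx, dy, dz),
        where (x, y, z) is the center and (dx, dy, dz) are full extents."""
        a_min, a_max = box_a[:3] - box_a[3:] / 2, box_a[:3] + box_a[3:] / 2
        b_min, b_max = box_b[:3] - box_b[3:] / 2, box_b[:3] + box_b[3:] / 2
        edges = np.clip(np.minimum(a_max, b_max) - np.maximum(a_min, b_min), 0, None)
        inter = np.prod(edges)
        union = np.prod(box_a[3:]) + np.prod(box_b[3:]) - inter
        return inter / max(union, 1e-9)

    def fuse_confidence(cls_score, est_iou, alpha=0.5):
        """Blend semantic and localization quality for ranking:
        score = cls**(1 - alpha) * iou**alpha (alpha is a hypothetical knob)."""
        return cls_score ** (1.0 - alpha) * np.clip(est_iou, 0.0, 1.0) ** alpha

Ranking detections by such a fused score, rather than by the raw
classification score alone, is what makes the reported confidence more
indicative of localization quality.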
Related papers
- VoxelKP: A Voxel-based Network Architecture for Human Keypoint
Estimation in LiDAR Data [53.638818890966036]
VoxelKP is a novel fully sparse network architecture tailored for human keypoint estimation in LiDAR data.
We introduce sparse box-attention to focus on learning spatial correlations between keypoints within each human instance.
We incorporate a spatial encoding to leverage absolute 3D coordinates when projecting 3D voxels to a 2D grid encoding a bird's eye view.
arXiv Detail & Related papers (2023-12-11T23:50:14Z)
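The spatial-encoding idea in the VoxelKP entry can be illustrated with a
small, hypothetical NumPy sketch: scatter sparse voxel features onto a
bird's eye view grid while keeping each cell's absolute 3D voxel indices as
extra channels. The function name, the max-saliency tie-breaking rule, and
the coordinate layout are assumptions, not VoxelKP's actual sparse
implementation.

    # Hypothetical sketch; names and the saliency proxy are assumptions.
    import numpy as np

    def voxels_to_bev(coords, feats, grid_hw):
        """coords: (N, 3) integer voxel indices (ix, iy, iz); feats: (N, C)
        voxel features; grid_hw: (H, W) BEV resolution. Returns an
        (H, W, C + 3) map in which overlapping voxels are resolved by a
        max-saliency rule and absolute 3D indices are kept as channels."""
        H, W = grid_hw
        C = feats.shape[1]
        bev = np.zeros((H, W, C + 3), dtype=feats.dtype)
        best = np.full((H, W), -np.inf)
        for (ix, iy, iz), f in zip(coords, feats):
            score = f.max()  # crude per-voxel saliency proxy
            if 0 <= iy < H and 0 <= ix < W and score > best[iy, ix]:
                best[iy, ix] = score
                bev[iy, ix, :C] = f
                bev[iy, ix, C:] = (ix, iy, iz)  # absolute coordinates
        return bev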
- V-DETR: DETR with Vertex Relative Position Encoding for 3D Object Detection [73.37781484123536]
We introduce a highly performant 3D object detector for point clouds using the DETR framework.
To address the difficulty of learning 3D locality with plain attention, we introduce a novel 3D Vertex Relative Position Encoding (3DV-RPE) method.
We show exceptional results on the challenging ScanNetV2 benchmark.
arXiv Detail & Related papers (2023-08-08T17:14:14Z)
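A toy version of the vertex-relative idea: compute each point's offsets to
the eight vertices of a query's 3D box and embed those offsets. The sketch
below assumes an axis-aligned box for simplicity (3DV-RPE also handles
rotated boxes), and both function names are illustrative.

    # Hypothetical sketch; 3DV-RPE proper also handles box rotation.
    import numpy as np

    def box_corners(box):
        """box: array (x, y, z, dx, dy, dz) -> (8, 3) corner coordinates."""
        center, dims = box[:3], box[3:]
        signs = np.array([[sx, sy, sz] for sx in (-1, 1)
                          for sy in (-1, 1) for sz in (-1, 1)], dtype=float)
        return center + 0.5 * signs * dims

    def vertex_relative_encoding(points, box):
        """points: (N, 3). Returns (N, 8, 3) offsets from each point to each
        box vertex; such offsets would then be embedded and used to modulate
        cross-attention."""
        return box_corners(box)[None, :, :] - points[:, None, :]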
- CAGroup3D: Class-Aware Grouping for 3D Object Detection on Point Clouds [55.44204039410225]
We present a novel two-stage fully sparse convolutional 3D object detection framework, named CAGroup3D.
Our method first generates high-quality 3D proposals by leveraging a class-aware local grouping strategy on the object surface voxels.
To recover the features of missed voxels due to incorrect voxel-wise segmentation, we build a fully sparse convolutional RoI pooling module.
arXiv Detail & Related papers (2022-10-09T13:38:48Z)
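One plausible reading of class-aware local grouping is vote-based clustering
restricted to a single semantic class: shift each voxel by a predicted offset
toward its object center, then group voxels whose votes land close together.
The names and the greedy radius clustering below are assumptions; the
paper's actual strategy may differ in detail.

    # Hypothetical sketch: class-aware grouping of surface voxels by vote centers.
    import numpy as np

    def class_aware_group(voxel_xyz, offsets, labels, cls, radius=0.5):
        """Group voxels predicted as class `cls`: shift each voxel center by
        its predicted offset (a vote toward the object center), then collect
        voxels whose votes fall within `radius` of a seed vote.
        Returns a list of index arrays, one per group."""
        idx = np.flatnonzero(labels == cls)
        votes = voxel_xyz[idx] + offsets[idx]
        groups, unassigned = [], set(range(len(idx)))
        while unassigned:
            seed = unassigned.pop()
            near = np.flatnonzero(np.linalg.norm(votes - votes[seed], axis=1) <= radius)
            groups.append(idx[near])
            unassigned -= set(near.tolist())
        return groups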
- 3D Object Detection Combining Semantic and Geometric Features from Point Clouds [19.127930862527666]
We propose a novel end-to-end two-stage 3D object detector named SGNet for point cloud scenes.
The VTPM is a voxel-point-based module that performs the final 3D object detection in point space.
As of September 19, 2021, SGNet ranked 1st on the KITTI benchmark in 3D and BEV detection of cyclists at the easy difficulty level, and 2nd in 3D detection of moderate cyclists.
arXiv Detail & Related papers (2021-10-10T04:43:27Z)
- 3D-SiamRPN: An End-to-End Learning Method for Real-Time 3D Single Object Tracking Using Raw Point Cloud [9.513194898261787]
We propose a 3D tracking method called 3D-SiamRPN to track a single target object using raw 3D point cloud data.
Experimental results on the KITTI dataset show that our method achieves competitive performance in both Success and Precision.
arXiv Detail & Related papers (2021-08-12T09:52:28Z)
- Voxel R-CNN: Towards High Performance Voxel-based 3D Object Detection [99.16162624992424]
We devise a simple but effective voxel-based framework, named Voxel R-CNN.
By taking full advantage of voxel features in a two-stage approach, our method achieves detection accuracy comparable to state-of-the-art point-based models.
Our results show that Voxel R-CNN delivers higher detection accuracy while maintaining a real-time frame processing rate, i.e., 25 FPS on an NVIDIA 2080 Ti GPU.
arXiv Detail & Related papers (2020-12-31T17:02:46Z)
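The core lookup behind pooling voxel features for a proposal can be sketched
as sampling a regular grid of points inside the box and gathering features
from nearby voxels. The nearest-voxel version below is a deliberate
simplification with assumed names; Voxel R-CNN actually aggregates voxel
neighborhoods with learned layers.

    # Hypothetical nearest-voxel simplification of voxel RoI pooling.
    import numpy as np

    def voxel_roi_pool(box, voxel_xyz, voxel_feats, grid=4):
        """box: array (x, y, z, dx, dy, dz), an axis-aligned proposal;
        voxel_xyz: (M, 3) voxel centers; voxel_feats: (M, C).
        Returns (grid**3, C) features sampled on a regular grid in the box."""
        center, dims = box[:3], box[3:]
        lin = (np.arange(grid) + 0.5) / grid - 0.5  # offsets in (-0.5, 0.5)
        gx, gy, gz = np.meshgrid(lin, lin, lin, indexing="ij")
        pts = center + np.stack([gx, gy, gz], -1).reshape(-1, 3) * dims
        # nearest voxel for every grid point (brute force for clarity)
        d2 = ((pts[:, None, :] - voxel_xyz[None, :, :]) ** 2).sum(-1)
        return voxel_feats[np.argmin(d2, axis=1)]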
- Stereo RGB and Deeper LIDAR Based Network for 3D Object Detection [40.34710686994996]
3D object detection is an emerging task in autonomous driving scenarios.
Previous works process 3D point clouds using either projection-based or voxel-based models.
We propose the Stereo RGB and Deeper LIDAR framework, which utilizes semantic and spatial information simultaneously.
arXiv Detail & Related papers (2020-06-09T11:19:24Z)
- PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection [76.30585706811993]
We present a novel and high-performance 3D object detection framework, named PointVoxel-RCNN (PV-RCNN).
Our proposed method deeply integrates both a 3D voxel Convolutional Neural Network (CNN) and PointNet-based set abstraction.
It takes advantage of the efficient learning and high-quality proposals of the 3D voxel CNN and the flexible receptive fields of the PointNet-based networks.
arXiv Detail & Related papers (2019-12-31T06:34:10Z)
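The point-voxel set abstraction above can be caricatured as: for each sampled
keypoint, gather the voxel features within a radius and max-pool them. The
sketch below uses assumed names and omits PV-RCNN's learned MLPs and
multi-scale grouping.

    # Hypothetical sketch of radius grouping + max pooling around keypoints.
    import numpy as np

    def keypoint_set_abstraction(keypoints, voxel_xyz, voxel_feats, radius=1.0):
        """keypoints: (K, 3); voxel_xyz: (M, 3); voxel_feats: (M, C).
        Returns (K, C) max-pooled features; rows stay zero when no voxel
        falls within the radius."""
        K, C = keypoints.shape[0], voxel_feats.shape[1]
        out = np.zeros((K, C), dtype=voxel_feats.dtype)
        dist = np.linalg.norm(keypoints[:, None, :] - voxel_xyz[None, :, :], axis=-1)
        for k in range(K):
            mask = dist[k] <= radius
            if mask.any():
                out[k] = voxel_feats[mask].max(axis=0)
        return out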