PV-RCNN++: Semantical Point-Voxel Feature Interaction for 3D Object
Detection
- URL: http://arxiv.org/abs/2208.13414v1
- Date: Mon, 29 Aug 2022 08:14:00 GMT
- Title: PV-RCNN++: Semantical Point-Voxel Feature Interaction for 3D Object
Detection
- Authors: Peng Wu, Lipeng Gu, Xuefeng Yan, Haoran Xie, Fu Lee Wang, Gary Cheng,
Mingqiang Wei
- Abstract summary: This paper proposes a novel object detection network by semantical point-voxel feature interaction, dubbed PV-RCNN++.
Experiments on the KITTI dataset show that PV-RCNN++ achieves 81.60%, 40.18%, and 68.21% 3D mAP on Car, Pedestrian, and Cyclist, respectively, achieving performance comparable to or even better than the state of the art.
- Score: 22.6659359032306
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A large imbalance often exists between the foreground points (i.e., objects)
and the background points in outdoor LiDAR point clouds. It hinders
cutting-edge detectors from focusing on informative areas to produce accurate
3D object detection results. This paper proposes a novel object detection
network based on semantical point-voxel feature interaction, dubbed PV-RCNN++. Unlike
most existing methods, PV-RCNN++ exploits semantic information to
enhance the quality of object detection. First, a semantic segmentation module
is proposed to retain more discriminative foreground keypoints. This module
guides PV-RCNN++ to integrate more object-related point-wise and
voxel-wise features in the pivotal areas. Then, to make points and voxels
interact efficiently, we utilize a voxel query based on Manhattan distance to
quickly sample voxel-wise features around keypoints. Compared to the ball query,
this voxel query reduces the time complexity from O(N) to O(K).
Further, to avoid being stuck in learning only local features, an
attention-based residual PointNet module is designed to expand the receptive
field and adaptively aggregate the neighboring voxel-wise features into
keypoints. Extensive experiments on the KITTI dataset show that PV-RCNN++
achieves 81.60%, 40.18%, and 68.21% 3D mAP on Car, Pedestrian, and
Cyclist, respectively, achieving performance comparable to or even better than
the state of the art.
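The voxel query described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes the non-empty voxels are stored in a hash map keyed by integer grid coordinates, and all names (`voxel_query`, `max_radius`) are hypothetical. Because only a fixed-size neighborhood of grid cells is probed per keypoint, the cost is O(K) in the number of gathered neighbors rather than O(N) in the total number of voxels, as a ball query would require.

```python
import numpy as np

def voxel_query(keypoint, voxel_feats, voxel_size, max_radius=2, k=16):
    """Gather up to k voxel features around a keypoint, nearest first
    by Manhattan distance on the voxel grid.

    voxel_feats: dict mapping integer voxel coords (i, j, k) -> feature
    vector, holding only the non-empty voxels. Probing a fixed set of
    grid offsets is O(K) per keypoint, versus O(N) for a ball query
    that scans all N voxels.
    """
    center = tuple(int(c) for c in np.floor(keypoint / voxel_size))
    # Enumerate grid offsets sorted by increasing Manhattan distance.
    offsets = sorted(
        ((dx, dy, dz)
         for dx in range(-max_radius, max_radius + 1)
         for dy in range(-max_radius, max_radius + 1)
         for dz in range(-max_radius, max_radius + 1)),
        key=lambda o: abs(o[0]) + abs(o[1]) + abs(o[2]),
    )
    gathered = []
    for dx, dy, dz in offsets:
        key = (center[0] + dx, center[1] + dy, center[2] + dz)
        if key in voxel_feats:  # O(1) hash lookup per probed cell
            gathered.append(voxel_feats[key])
            if len(gathered) == k:
                break
    return np.stack(gathered) if gathered else np.empty((0,))
```

For example, with a 1 m voxel grid, a keypoint at (0.2, 0.2, 0.2) maps to cell (0, 0, 0), and only cells within the Manhattan-distance neighborhood are ever looked up, regardless of how many voxels the scene contains.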
Related papers
- Multi-scale Feature Fusion with Point Pyramid for 3D Object Detection [18.41721888099563]
This paper proposes the Point Pyramid RCNN (POP-RCNN), a feature pyramid-based framework for 3D object detection on point clouds.
The proposed method can be applied to a variety of existing frameworks to increase feature richness, especially for long-distance detection.
arXiv Detail & Related papers (2024-09-06T20:13:14Z)
- PVAFN: Point-Voxel Attention Fusion Network with Multi-Pooling Enhancing for 3D Object Detection [59.355022416218624]
The integration of point and voxel representations is becoming more common in LiDAR-based 3D object detection.
We propose a novel two-stage 3D object detector, called the Point-Voxel Attention Fusion Network (PVAFN).
PVAFN uses a multi-pooling strategy to integrate both multi-scale and region-specific information effectively.
arXiv Detail & Related papers (2024-08-26T19:43:01Z)
- PVT-SSD: Single-Stage 3D Object Detector with Point-Voxel Transformer [75.2251801053839]
We present a novel Point-Voxel Transformer for single-stage 3D detection (PVT-SSD).
We propose a Point-Voxel Transformer (PVT) module that cheaply obtains long-range context from voxels.
The experiments on several autonomous driving benchmarks verify the effectiveness and efficiency of the proposed method.
arXiv Detail & Related papers (2023-05-11T07:37:15Z)
- AGO-Net: Association-Guided 3D Point Cloud Object Detection Network [86.10213302724085]
We propose a novel 3D detection framework that associates intact features for objects via domain adaptation.
We achieve new state-of-the-art performance on the KITTI 3D detection benchmark in both accuracy and speed.
arXiv Detail & Related papers (2022-08-24T16:54:38Z)
- SASA: Semantics-Augmented Set Abstraction for Point-based 3D Object Detection [78.90102636266276]
We propose a novel set abstraction method named Semantics-Augmented Set Abstraction (SASA).
Based on the estimated point-wise foreground scores, we then propose a semantics-guided point sampling algorithm to help retain more important foreground points during down-sampling.
In practice, SASA proves effective in identifying valuable points related to foreground objects and improving feature learning for point-based 3D detection.
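The semantics-guided sampling idea in SASA can be sketched as follows. This is a simplified, hypothetical sketch rather than the paper's exact algorithm (SASA combines foreground scores with distance-based sampling); here, points with higher predicted foreground scores are simply more likely to survive down-sampling, and the scores are assumed to come from an auxiliary per-point segmentation head.

```python
import numpy as np

def semantics_guided_sampling(points, fg_scores, num_samples, seed=None):
    """Down-sample a point set, biasing selection toward foreground points.

    points: (N, D) array of point coordinates/features.
    fg_scores: (N,) array of positive per-point foreground scores.
    Returns num_samples points drawn without replacement, with
    probability proportional to each point's foreground score.
    """
    rng = np.random.default_rng(seed)
    probs = fg_scores / fg_scores.sum()  # normalize scores into a distribution
    idx = rng.choice(len(points), size=num_samples, replace=False, p=probs)
    return points[idx]
```

Compared with plain farthest-point or random sampling, weighting by the foreground score keeps more of the sparse object points that matter for detection.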
arXiv Detail & Related papers (2022-01-06T08:54:47Z)
- 3D Object Detection Combining Semantic and Geometric Features from Point Clouds [19.127930862527666]
We propose a novel end-to-end two-stage 3D object detector named SGNet for point cloud scenes.
The VTPM is a Voxel-Point-Based Module that performs the final 3D object detection in point space.
As of September 19, 2021, for KITTI dataset, SGNet ranked 1st in 3D and BEV detection on cyclists with easy difficulty level, and 2nd in the 3D detection of moderate cyclists.
arXiv Detail & Related papers (2021-10-10T04:43:27Z)
- VIN: Voxel-based Implicit Network for Joint 3D Object Detection and Segmentation for Lidars [12.343333815270402]
A unified neural network structure is presented for joint 3D object detection and point cloud segmentation.
We leverage rich supervision from both detection and segmentation labels rather than using just one of them.
arXiv Detail & Related papers (2021-07-07T02:16:20Z)
- PV-RCNN++: Point-Voxel Feature Set Abstraction With Local Vector Representation for 3D Object Detection [100.60209139039472]
We propose the Point-Voxel Region-based Convolutional Neural Networks (PV-RCNNs) for accurate 3D detection from point clouds.
Our proposed PV-RCNNs significantly outperform previous state-of-the-art 3D detection methods on both the Waymo Open Dataset and the highly competitive KITTI benchmark.
arXiv Detail & Related papers (2021-01-31T14:51:49Z)
- PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection [76.30585706811993]
We present a novel, high-performance 3D object detection framework, named PointVoxel-RCNN (PV-RCNN).
Our proposed method deeply integrates both a 3D voxel Convolutional Neural Network (CNN) and PointNet-based set abstraction.
It takes advantage of the efficient learning and high-quality proposals of the 3D voxel CNN and the flexible receptive fields of the PointNet-based networks.
arXiv Detail & Related papers (2019-12-31T06:34:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.