Back-tracing Representative Points for Voting-based 3D Object Detection
in Point Clouds
- URL: http://arxiv.org/abs/2104.06114v2
- Date: Wed, 14 Apr 2021 06:38:30 GMT
- Title: Back-tracing Representative Points for Voting-based 3D Object Detection
in Point Clouds
- Authors: Bowen Cheng, Lu Sheng, Shaoshuai Shi, Ming Yang, Dong Xu
- Abstract summary: We introduce a new 3D object detection method named Back-tracing Representative Points Network (BRNet).
BRNet generatively back-traces the representative points from the vote centers and also revisits complementary seed points around these generated points.
Our BRNet is simple yet effective, significantly outperforming state-of-the-art methods on two large-scale point cloud datasets.
- Score: 42.24217764222523
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D object detection in point clouds is a challenging vision task that
benefits various applications for understanding the 3D visual world. Much
recent research focuses on exploiting end-to-end trainable Hough voting to
generate object proposals. However, the current voting strategy can only
receive partial votes from the surfaces of potential objects together with
severe outlier votes from the cluttered backgrounds, which hampers full
utilization of the information from the input point clouds. Inspired by the
back-tracing strategy in the conventional Hough voting methods, in this work,
we introduce a new 3D object detection method, named Back-tracing
Representative Points Network (BRNet), which generatively back-traces the
representative points from the vote centers and also revisits complementary
seed points around these generated points, so as to better capture the fine
local structural features surrounding the potential objects from the raw point
clouds. Therefore, this bottom-up and then top-down strategy in our BRNet
enforces mutual consistency between the predicted vote centers and the raw
surface points and thus achieves more reliable and flexible object localization
and class prediction results. Our BRNet is simple yet effective and
significantly outperforms the state-of-the-art methods on two large-scale point
cloud datasets, ScanNet V2 (+7.5% in terms of mAP@0.50) and SUN RGB-D (+4.7% in
terms of mAP@0.50), while it is still lightweight and efficient. Code will be
available at https://github.com/cheng052/BRNet.
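The bottom-up-then-top-down idea in the abstract (seed points vote for centers; representative points are back-traced from each vote center; complementary seed points around them are revisited) can be illustrated with a short sketch. This is a minimal NumPy illustration under loose assumptions, not the official BRNet implementation: the function names, the 3x3x3 offset grid, and the ball-query parameters are all hypothetical.

```python
# Minimal sketch of back-tracing representative points from a vote center
# and revisiting nearby raw (seed) points. Illustrative only; not BRNet code.
import numpy as np

def generate_representative_points(vote_center, offsets):
    """Back-trace representative points by placing a fixed pattern of
    offsets (e.g. a small 3D grid) around a predicted vote center."""
    return vote_center[None, :] + offsets                     # (R, 3)

def gather_seed_points(rep_points, raw_points, radius=0.3, k=16):
    """For each representative point, revisit up to k raw surface points
    inside a ball of the given radius (complementary seed points)."""
    groups = []
    for rp in rep_points:
        d = np.linalg.norm(raw_points - rp, axis=1)
        idx = np.argsort(d)[:k]
        groups.append(raw_points[idx[d[idx] < radius]])
    return groups

# Toy usage: one predicted vote center, a 3x3x3 grid of representative points.
raw_points = np.random.rand(2048, 3) * 4.0                    # raw point cloud
vote_center = np.array([2.0, 2.0, 2.0])                       # predicted center
grid = np.stack(np.meshgrid(*[[-0.4, 0.0, 0.4]] * 3), axis=-1).reshape(-1, 3)
reps = generate_representative_points(vote_center, grid)
local_groups = gather_seed_points(reps, raw_points)
print(len(reps), [g.shape for g in local_groups[:3]])
```

In the actual method, the features pooled from these revisited seed points refine the box prediction, enforcing consistency between the predicted vote centers and the raw surface points.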
Related papers
- 3D Object Detection from Point Cloud via Voting Step Diffusion [52.9966883689137]
Existing voting-based methods often receive votes only from the partial surfaces of individual objects, together with severe noise, leading to sub-optimal detection performance.
We propose a new method that moves random 3D points toward the high-density region of the distribution by estimating the score function of the distribution with a noise-conditioned score network (see the sketch after this list).
Experiments on two large scale indoor 3D scene datasets, SUN RGB-D and ScanNet V2, demonstrate the superiority of our proposed method.
arXiv Detail & Related papers (2024-03-21T05:04:52Z) - Object Detection in 3D Point Clouds via Local Correlation-Aware Point
Embedding [0.0]
We present an improved approach for 3D object detection in point cloud data based on the Frustum PointNet (F-PointNet)
Compared to the original F-PointNet, our newly proposed method considers the point neighborhood when computing point features.
arXiv Detail & Related papers (2023-01-11T18:14:47Z) - CAGroup3D: Class-Aware Grouping for 3D Object Detection on Point Clouds [55.44204039410225]
We present a novel two-stage fully sparse convolutional 3D object detection framework, named CAGroup3D.
Our proposed method first generates some high-quality 3D proposals by leveraging the class-aware local group strategy on the object surface voxels.
To recover the features of missed voxels due to incorrect voxel-wise segmentation, we build a fully sparse convolutional RoI pooling module.
arXiv Detail & Related papers (2022-10-09T13:38:48Z) - RBGNet: Ray-based Grouping for 3D Object Detection [104.98776095895641]
We propose the RBGNet framework, a voting-based 3D detector for accurate 3D object detection from point clouds.
We propose a ray-based feature grouping module, which aggregates the point-wise features on object surfaces using a group of determined rays.
Our model achieves state-of-the-art 3D detection performance on ScanNet V2 and SUN RGB-D with remarkable performance gains.
arXiv Detail & Related papers (2022-04-05T14:42:57Z) - Group-Free 3D Object Detection via Transformers [26.040378025818416]
We present a simple yet effective method for directly detecting 3D objects from the 3D point cloud.
Our method computes the feature of an object from all the points in the point cloud with the help of the attention mechanism in Transformers [Vaswani et al.].
With few bells and whistles, the proposed method achieves state-of-the-art 3D object detection performance on two widely used benchmarks, ScanNet V2 and SUN RGB-D.
arXiv Detail & Related papers (2021-04-01T17:59:36Z) - PC-RGNN: Point Cloud Completion and Graph Neural Network for 3D Object
Detection [57.49788100647103]
LiDAR-based 3D object detection is an important task for autonomous driving.
Current approaches suffer from sparse and partial point clouds of distant and occluded objects.
In this paper, we propose a novel two-stage approach, namely PC-RGNN, dealing with such challenges by two specific solutions.
arXiv Detail & Related papers (2020-12-18T18:06:43Z) - DH3D: Deep Hierarchical 3D Descriptors for Robust Large-Scale 6DoF
Relocalization [56.15308829924527]
We propose a Siamese network that jointly learns 3D local feature detection and description directly from raw 3D points.
For detecting 3D keypoints we predict the discriminativeness of the local descriptors in an unsupervised manner.
Experiments on various benchmarks demonstrate that our method achieves competitive results for both global point cloud retrieval and local point cloud registration.
arXiv Detail & Related papers (2020-07-17T20:21:22Z) - Object as Hotspots: An Anchor-Free 3D Object Detection Approach via
Firing of Hotspots [37.16690737208046]
We argue for an approach opposite to existing methods that use object-level anchors.
Inspired by compositional models, we propose an object as composition of its interior non-empty voxels, termed hotspots.
Based on OHS, we propose an anchor-free detection head with a novel ground truth assignment strategy.
arXiv Detail & Related papers (2019-12-30T03:02:22Z)
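As promised in the voting-step-diffusion entry above, the following is a hedged, generic sketch of moving points toward a high-density region by following an estimated score (the gradient of the log-density). The analytic Gaussian score below stands in for that paper's noise-conditioned score network; all names, step sizes, and iteration counts are illustrative assumptions, not the authors' implementation.

```python
# Generic Langevin-style refinement: x <- x + step * score(x) + noise * N(0, I).
# A toy analytic score replaces the learned noise-conditioned score network.
import numpy as np

def toy_score(points, center, sigma=0.5):
    """Score (grad log p) of an isotropic Gaussian centered on an object."""
    return (center - points) / (sigma ** 2)

def langevin_refine(points, center, step=0.01, noise=0.02, iters=50, rng=None):
    """Iteratively move points toward the high-density region of the toy density."""
    if rng is None:
        rng = np.random.default_rng(0)
    for _ in range(iters):
        points = (points
                  + step * toy_score(points, center)
                  + noise * rng.standard_normal(points.shape))
    return points

center = np.array([1.0, 0.0, 2.0])                 # toy object center
pts = np.random.default_rng(1).uniform(-3, 3, size=(256, 3))
refined = langevin_refine(pts, center)
print(np.linalg.norm(pts - center, axis=1).mean(),
      np.linalg.norm(refined - center, axis=1).mean())
```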