Object as Hotspots: An Anchor-Free 3D Object Detection Approach via
Firing of Hotspots
- URL: http://arxiv.org/abs/1912.12791v3
- Date: Tue, 13 Oct 2020 05:04:42 GMT
- Title: Object as Hotspots: An Anchor-Free 3D Object Detection Approach via
Firing of Hotspots
- Authors: Qi Chen, Lin Sun, Zhixin Wang, Kui Jia, Alan Yuille
- Abstract summary: We argue for an approach opposite to existing methods using object-level anchors.
Inspired by compositional models, we propose to represent an object as a composition of its interior non-empty voxels, termed hotspots.
Based on OHS, we propose an anchor-free detection head with a novel ground truth assignment strategy.
- Score: 37.16690737208046
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate 3D object detection in LiDAR-based point clouds suffers from the
challenges of data sparsity and irregularities. Existing methods strive to
organize the points regularly, e.g. by voxelizing them, pass them through a designed
2D/3D neural network, and then define object-level anchors that predict offsets
of 3D bounding boxes using collective evidence from all the points on the
objects of interest. Contrary to the state-of-the-art anchor-based methods,
based on the very nature of data sparsity, we observe that even points on an
individual object part are informative about semantic information of the
object. We thus argue in this paper for an approach opposite to existing
methods using object-level anchors. Inspired by compositional models, which
represent an object as parts and their spatial relations, we propose to
represent an object as a composition of its interior non-empty voxels, termed
hotspots, and the spatial relations of hotspots. This gives rise to the
representation of Object as Hotspots (OHS). Based on OHS, we further propose an
anchor-free detection head with a novel ground truth assignment strategy that
deals with inter-object point-sparsity imbalance to prevent the network from
being biased towards objects with more points. Experimental results show that our
proposed method works remarkably well on objects with a small number of points.
Notably, our approach ranked 1st on KITTI 3D Detection Benchmark for cyclist
and pedestrian detection, and achieved state-of-the-art performance on NuScenes
3D Detection Benchmark.
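The hotspot representation described in the abstract lends itself to a short sketch. Everything below (the function name, the axis-aligned box, the uniform per-object loss weights) is an illustrative assumption, not the paper's actual implementation:

```python
import numpy as np

def hotspot_targets(points, box_min, box_max, voxel_size):
    """Toy sketch of the Object-as-Hotspots (OHS) idea: supervise an
    object's interior non-empty voxels ("hotspots") individually rather
    than a single object-level anchor. Axis-aligned boxes and all names
    here are simplifications for illustration."""
    # keep only the points inside the object's (axis-aligned) bounding box
    inside = np.all((points >= box_min) & (points < box_max), axis=1)
    obj_points = points[inside]
    # voxelize: map each interior point to an integer voxel index
    voxel_idx = np.floor((obj_points - box_min) / voxel_size).astype(np.int64)
    # the unique non-empty voxels are the object's hotspots
    hotspots = np.unique(voxel_idx, axis=0)
    # balance inter-object point sparsity: give every object the same total
    # weight, so densely observed objects do not dominate the loss
    weights = np.full(len(hotspots), 1.0 / max(len(hotspots), 1))
    return hotspots, weights
```

Because each hotspot is supervised on its own, even an object covered by only a handful of LiDAR points contributes training targets; the uniform per-object weighting is one simple way to realize the sparsity-balanced assignment the abstract mentions.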
Related papers
- SeSame: Simple, Easy 3D Object Detection with Point-Wise Semantics [0.7373617024876725]
In autonomous driving, 3D object detection provides more precise information for downstream tasks, including path planning and motion estimation.
We propose SeSame: a method aimed at enhancing semantic information in existing LiDAR-only based 3D object detection.
Experiments demonstrate the effectiveness of our method with performance improvements on the KITTI object detection benchmark.
arXiv Detail & Related papers (2024-03-11T08:17:56Z)
- 3DRP-Net: 3D Relative Position-aware Network for 3D Visual Grounding [58.924180772480504]
3D visual grounding aims to localize the target object in a 3D point cloud by a free-form language description.
We propose a relation-aware one-stage framework, named 3D Relative Position-aware Network (3DRP-Net).
arXiv Detail & Related papers (2023-07-25T09:33:25Z)
- PSA-Det3D: Pillar Set Abstraction for 3D Object Detection [14.788139868324155]
We propose a pillar set abstraction (PSA) and foreground point compensation (FPC) to improve the detection performance for small objects.
The experiments on the KITTI 3D detection benchmark show that our proposed PSA-Det3D outperforms other algorithms with high accuracy for small object detection.
arXiv Detail & Related papers (2022-10-20T03:05:34Z)
- AGO-Net: Association-Guided 3D Point Cloud Object Detection Network [86.10213302724085]
We propose a novel 3D detection framework that associates intact features for objects via domain adaptation.
We achieve new state-of-the-art performance on the KITTI 3D detection benchmark in both accuracy and speed.
arXiv Detail & Related papers (2022-08-24T16:54:38Z)
- RBGNet: Ray-based Grouping for 3D Object Detection [104.98776095895641]
We propose the RBGNet framework, a voting-based 3D detector for accurate 3D object detection from point clouds.
We propose a ray-based feature grouping module, which aggregates the point-wise features on object surfaces using a group of determined rays.
Our model achieves state-of-the-art 3D detection performance on ScanNet V2 and SUN RGB-D with remarkable performance gains.
arXiv Detail & Related papers (2022-04-05T14:42:57Z)
- ImpDet: Exploring Implicit Fields for 3D Object Detection [74.63774221984725]
We introduce a new perspective that views bounding box regression as an implicit function.
This leads to our proposed framework, termed Implicit Detection or ImpDet.
Our ImpDet assigns specific values to points in different local 3D spaces, so that high-quality boundaries can be generated.
arXiv Detail & Related papers (2022-03-31T17:52:12Z)
- SASA: Semantics-Augmented Set Abstraction for Point-based 3D Object Detection [78.90102636266276]
We propose a novel set abstraction method named Semantics-Augmented Set Abstraction (SASA)
Based on the estimated point-wise foreground scores, we then propose a semantics-guided point sampling algorithm to help retain more important foreground points during down-sampling.
In practice, SASA proves effective in identifying valuable points related to foreground objects and in improving feature learning for point-based 3D detection.
arXiv Detail & Related papers (2022-01-06T08:54:47Z)
- 3D Object Detection Combining Semantic and Geometric Features from Point Clouds [19.127930862527666]
We propose a novel end-to-end two-stage 3D object detector named SGNet for point clouds scenes.
The VTPM is a Voxel-Point-Based Module that finally implements 3D object detection in point space.
As of September 19, 2021, for KITTI dataset, SGNet ranked 1st in 3D and BEV detection on cyclists with easy difficulty level, and 2nd in the 3D detection of moderate cyclists.
arXiv Detail & Related papers (2021-10-10T04:43:27Z)
- Group-Free 3D Object Detection via Transformers [26.040378025818416]
We present a simple yet effective method for directly detecting 3D objects from the 3D point cloud.
Our method computes the feature of an object from all the points in the point cloud with the help of the attention mechanism in Transformers (Vaswani et al.).
With few bells and whistles, the proposed method achieves state-of-the-art 3D object detection performance on two widely used benchmarks, ScanNet V2 and SUN RGB-D.
arXiv Detail & Related papers (2021-04-01T17:59:36Z)
- DOPS: Learning to Detect 3D Objects and Predict their 3D Shapes [54.239416488865565]
We propose a fast single-stage 3D object detection method for LIDAR data.
The core novelty of our method is a fast, single-pass architecture that both detects objects in 3D and estimates their shapes.
We find that our proposed method achieves state-of-the-art results, outperforming prior work by 5% on object detection in ScanNet scenes and by 3.4% on the Open dataset.
arXiv Detail & Related papers (2020-04-02T17:48:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.