Surface-biased Multi-Level Context 3D Object Detection
- URL: http://arxiv.org/abs/2302.06291v1
- Date: Mon, 13 Feb 2023 11:50:04 GMT
- Title: Surface-biased Multi-Level Context 3D Object Detection
- Authors: Sultan Abu Ghazal, Jean Lahoud and Rao Anwer
- Abstract summary: This work addresses the object detection task in 3D point clouds using a highly efficient, surface-biased feature extraction method (wang2022rbgnet) that also captures contextual cues at multiple levels.
We propose a 3D object detector that extracts accurate feature representations of object candidates and applies self-attention to point patches, object candidates, and the global 3D scene.
- Score: 1.9723551683930771
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Object detection in 3D point clouds is a crucial task in a range of computer
vision applications including robotics, autonomous cars, and augmented reality.
This work addresses the object detection task in 3D point clouds using a highly
efficient, surface-biased feature extraction method (wang2022rbgnet) that also
captures contextual cues at multiple levels. We propose a 3D object detector
that extracts accurate feature representations of object candidates and applies
self-attention to point patches, object candidates, and the global 3D scene.
Self-attention has been shown to be effective in encoding correlation
information in 3D point clouds (xie2020mlcvnet). Other 3D detectors focus on
enhancing point cloud feature extraction by selectively obtaining more
meaningful local features (wang2022rbgnet), but overlook contextual
information. To this end, the proposed architecture uses ray-based
surface-biased feature extraction and multi-level context encoding to
outperform state-of-the-art 3D object detectors. In this work, 3D detection
experiments are performed on scenes from the ScanNet dataset, with the
self-attention modules introduced one at a time to isolate the effect of
self-attention at each level.
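As a rough illustration of the mechanism the abstract refers to (not the paper's actual implementation), scaled dot-product self-attention over a set of object-candidate features can be sketched in numpy as follows; the random projection matrices stand in for learned weights:

```python
import numpy as np

def self_attention(x, rng=None):
    """Scaled dot-product self-attention over a set of feature vectors.

    x: (n, d) array, e.g. features of n object candidates.
    The projection matrices are random here purely for illustration;
    in a real detector they would be learned parameters.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n, d = x.shape
    wq, wk, wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(d)                 # (n, n) pairwise correlations
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)       # softmax over the set
    return attn @ v                               # context-enriched features

# e.g. 8 hypothetical object candidates with 16-dim features
feats = np.random.default_rng(1).standard_normal((8, 16))
out = self_attention(feats)
```

Applied at the point-patch, object-candidate, and scene level, the same operation lets each element aggregate context from every other element in its set.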
Related papers
- 3D Small Object Detection with Dynamic Spatial Pruning [62.72638845817799]
We propose an efficient feature pruning strategy for 3D small object detection.
We present a multi-level 3D detector named DSPDet3D which benefits from high spatial resolution.
It takes less than 2 s to directly process a whole building consisting of more than 4500k points while detecting almost all objects.
arXiv Detail & Related papers (2023-05-05T17:57:04Z) - OA-BEV: Bringing Object Awareness to Bird's-Eye-View Representation for
Multi-Camera 3D Object Detection [78.38062015443195]
OA-BEV is a network that can be plugged into the BEV-based 3D object detection framework.
Our method achieves consistent improvements over the BEV-based baselines in terms of both average precision and nuScenes detection score.
arXiv Detail & Related papers (2023-01-13T06:02:31Z) - CMR3D: Contextualized Multi-Stage Refinement for 3D Object Detection [57.44434974289945]
We propose Contextualized Multi-Stage Refinement for 3D Object Detection (CMR3D) framework.
Our framework takes a 3D scene as input and strives to explicitly integrate useful contextual information of the scene.
In addition to 3D object detection, we investigate the effectiveness of our framework for the problem of 3D object counting.
arXiv Detail & Related papers (2022-09-13T05:26:09Z) - AGO-Net: Association-Guided 3D Point Cloud Object Detection Network [86.10213302724085]
We propose a novel 3D detection framework that associates intact features for objects via domain adaptation.
We achieve new state-of-the-art performance on the KITTI 3D detection benchmark in both accuracy and speed.
arXiv Detail & Related papers (2022-08-24T16:54:38Z) - Investigating Attention Mechanism in 3D Point Cloud Object Detection [25.53702053256288]
This work investigates the role of the attention mechanism in 3D point cloud object detection.
It provides insights into the potential of different attention modules.
This paper is expected to serve as a reference source for benefiting attention-embedded 3D point cloud object detection.
arXiv Detail & Related papers (2021-08-02T03:54:39Z) - Group-Free 3D Object Detection via Transformers [26.040378025818416]
We present a simple yet effective method for directly detecting 3D objects from the 3D point cloud.
Our method computes the feature of an object from all the points in the point cloud with the help of an attention mechanism in the Transformers (vaswaniattention).
With few bells and whistles, the proposed method achieves state-of-the-art 3D object detection performance on two widely used benchmarks, ScanNet V2 and SUN RGB-D.
arXiv Detail & Related papers (2021-04-01T17:59:36Z) - RoIFusion: 3D Object Detection from LiDAR and Vision [7.878027048763662]
We propose a novel fusion algorithm by projecting a set of 3D Regions of Interest (RoIs) from the point clouds onto the 2D RoIs of the corresponding images.
Our approach achieves state-of-the-art performance on the challenging KITTI 3D object detection benchmark.
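The 3D-to-2D RoI projection mentioned above can be sketched with a simple pinhole camera model; the intrinsics and the box below are made-up values for illustration, not RoIFusion's actual setup:

```python
import numpy as np

def project_points(pts_3d, K):
    """Project 3D points (camera coordinates, z > 0) to pixel coordinates
    using a 3x3 pinhole intrinsic matrix K."""
    uvw = pts_3d @ K.T                 # (n, 3) homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]    # divide by depth

def roi_3d_to_2d(corners_3d, K):
    """Axis-aligned 2D RoI enclosing the projection of a 3D box's corners."""
    uv = project_points(corners_3d, K)
    x1, y1 = uv.min(axis=0)
    x2, y2 = uv.max(axis=0)
    return x1, y1, x2, y2

# Hypothetical intrinsics and a 1 m cube centered 5 m in front of the camera
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
corners = np.array([[x, y, z]
                    for x in (-0.5, 0.5)
                    for y in (-0.5, 0.5)
                    for z in (4.5, 5.5)])
box_2d = roi_3d_to_2d(corners, K)
```

A fusion detector would then pool image features inside `box_2d` and combine them with the point-cloud features of the same 3D RoI.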
arXiv Detail & Related papers (2020-09-09T20:23:27Z) - Associate-3Ddet: Perceptual-to-Conceptual Association for 3D Point Cloud
Object Detection [64.2159881697615]
Object detection from 3D point clouds remains a challenging task, though recent studies have pushed the envelope with deep learning techniques.
We propose a domain-adaptation-like approach to enhance the robustness of the feature representation.
Our simple yet effective approach fundamentally boosts the performance of 3D point cloud object detection and achieves state-of-the-art results.
arXiv Detail & Related papers (2020-06-08T05:15:06Z) - D3Feat: Joint Learning of Dense Detection and Description of 3D Local
Features [51.04841465193678]
We leverage a 3D fully convolutional network for 3D point clouds.
We propose a novel and practical learning mechanism that densely predicts both a detection score and a description feature for each 3D point.
Our method achieves state-of-the-art results in both indoor and outdoor scenarios.
arXiv Detail & Related papers (2020-03-06T12:51:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.