AGO-Net: Association-Guided 3D Point Cloud Object Detection Network
- URL: http://arxiv.org/abs/2208.11658v1
- Date: Wed, 24 Aug 2022 16:54:38 GMT
- Title: AGO-Net: Association-Guided 3D Point Cloud Object Detection Network
- Authors: Liang Du, Xiaoqing Ye, Xiao Tan, Edward Johns, Bo Chen, Errui Ding,
Xiangyang Xue, Jianfeng Feng
- Abstract summary: We propose a novel 3D detection framework that associates intact features for objects via domain adaptation.
We achieve new state-of-the-art performance on the KITTI 3D detection benchmark in both accuracy and speed.
- Score: 86.10213302724085
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The human brain can effortlessly recognize and localize objects, whereas
current 3D object detection methods based on LiDAR point clouds still report
inferior performance for detecting occluded and distant objects: point cloud
appearance varies greatly under occlusion, and point density varies inherently
with distance to the sensor. Therefore, designing feature
representations robust to such point clouds is critical. Inspired by human
associative recognition, we propose a novel 3D detection framework that
associates intact features for objects via domain adaptation. We bridge the gap
between the perceptual domain, where features are derived from real scenes with
sub-optimal representations, and the conceptual domain, where features are
extracted from augmented scenes that consist of non-occluded objects with rich
detailed information. A feasible method is investigated to construct conceptual
scenes without external datasets. We further introduce an attention-based
re-weighting module that adaptively strengthens the feature adaptation of more
informative regions. The network's feature enhancement ability is exploited
without introducing extra cost during inference, and the approach is
plug-and-play in various 3D detection frameworks. We achieve new
state-of-the-art performance on
the KITTI 3D detection benchmark in both accuracy and speed. Experiments on
nuScenes and Waymo datasets also validate the versatility of our method.
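The abstract describes the training-time mechanism only in prose; the following is a minimal PyTorch-style sketch, under our own assumptions, of attention-re-weighted feature adaptation between a perceptual branch (real scene) and a conceptual branch (augmented scene). All module and parameter names are hypothetical, not the authors' implementation.

```python
import torch
import torch.nn as nn


class AssociationGuidedLoss(nn.Module):
    """Sketch of attention-re-weighted feature adaptation (hypothetical).

    A perceptual branch encodes the real scene; a conceptual branch
    encodes the augmented scene built from non-occluded objects. An
    attention map strengthens the mimicking loss on more informative
    regions. The loss applies only during training, consistent with
    the abstract's claim of no extra inference cost.
    """

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 conv producing a spatial attention map (assumed design)
        self.attn = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feat_perceptual: torch.Tensor,
                feat_conceptual: torch.Tensor) -> torch.Tensor:
        # feat_*: (B, C, H, W) BEV feature maps from the two branches.
        # The conceptual branch serves as a fixed target (detached).
        target = feat_conceptual.detach()
        weight = torch.sigmoid(self.attn(target))   # (B, 1, H, W)
        diff = (feat_perceptual - target) ** 2      # element-wise L2
        return (weight * diff).mean()               # re-weighted mimic loss
```

At training time such a loss would be added to the standard detection loss; at inference only the perceptual branch runs, which is how the plug-and-play, zero-overhead property would be realized.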
Related papers
- PatchContrast: Self-Supervised Pre-training for 3D Object Detection [14.603858163158625]
We introduce PatchContrast, a novel self-supervised point cloud pre-training framework for 3D object detection.
We show that our method outperforms existing state-of-the-art models on three commonly-used 3D detection datasets.
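The summary gives no details of PatchContrast's objective; purely as an illustration of self-supervised contrastive pre-training on point patches, here is a generic InfoNCE loss over two augmented views. This is not the paper's actual formulation; names and the temperature value are assumptions.

```python
import torch
import torch.nn.functional as F


def info_nce(z1: torch.Tensor, z2: torch.Tensor,
             tau: float = 0.07) -> torch.Tensor:
    """Generic InfoNCE loss over patch embeddings (illustrative only).

    z1, z2: (N, D) embeddings of the same N point patches under two
    augmentations. Matching rows are positives; all others negatives.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                      # (N, N) similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)
```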
arXiv Detail & Related papers (2023-08-14T07:45:54Z)
- Surface-biased Multi-Level Context 3D Object Detection [1.9723551683930771]
This work addresses the object detection task in 3D point clouds using a highly efficient, surface-biased feature extraction method (RBGNet, Wang et al., 2022).
We propose a 3D object detector that extracts accurate feature representations of object candidates and leverages self-attention on point patches, on object candidates, and on the global 3D scene.
arXiv Detail & Related papers (2023-02-13T11:50:04Z)
- SASA: Semantics-Augmented Set Abstraction for Point-based 3D Object Detection [78.90102636266276]
We propose a novel set abstraction method named Semantics-Augmented Set Abstraction (SASA).
Based on the estimated point-wise foreground scores, we then propose a semantics-guided point sampling algorithm to help retain more important foreground points during down-sampling.
In practice, SASA proves effective in identifying valuable points related to foreground objects and improving feature learning for point-based 3D detection.
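To make the sampling idea concrete, here is a sketch of semantics-guided farthest point sampling in PyTorch: the usual FPS coverage term is weighted by the predicted point-wise foreground score. The mixing exponent gamma and other details are our assumptions, not necessarily SASA's exact algorithm.

```python
import torch


def semantics_guided_fps(xyz: torch.Tensor, fg_score: torch.Tensor,
                         n_sample: int, gamma: float = 1.0) -> torch.Tensor:
    """Farthest-point sampling biased toward high foreground scores.

    xyz: (N, 3) point coordinates; fg_score: (N,) scores in [0, 1].
    Returns indices of the n_sample selected points. The selection
    criterion mixes geometric coverage (FPS distance) with semantic
    confidence; gamma (hypothetical) trades off the two terms.
    """
    n = xyz.size(0)
    selected = torch.zeros(n_sample, dtype=torch.long, device=xyz.device)
    min_dist = torch.full((n,), float("inf"), device=xyz.device)
    # Start from the most confident foreground point
    selected[0] = torch.argmax(fg_score)
    for i in range(1, n_sample):
        # Update each point's distance to the selected set
        d = torch.norm(xyz - xyz[selected[i - 1]], dim=1)
        min_dist = torch.minimum(min_dist, d)
        # Bias geometric coverage by semantic confidence
        selected[i] = torch.argmax(min_dist * fg_score.pow(gamma))
    return selected
```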
arXiv Detail & Related papers (2022-01-06T08:54:47Z)
- SIENet: Spatial Information Enhancement Network for 3D Object Detection from Point Cloud [20.84329063509459]
LiDAR-based 3D object detection has a major impact on autonomous driving.
Due to the intrinsic properties of LiDAR, fewer points are collected from objects farther away from the sensor.
To address the challenge, we propose a novel two-stage 3D object detection framework, named SIENet.
arXiv Detail & Related papers (2021-03-29T07:45:09Z)
- Improving Point Cloud Semantic Segmentation by Learning 3D Object Detection [102.62963605429508]
Point cloud semantic segmentation plays an essential role in autonomous driving.
Current 3D semantic segmentation networks focus on convolutional architectures that perform well for well-represented classes.
We propose a novel Detection Aware 3D Semantic Segmentation (DASS) framework that explicitly leverages localization features from an auxiliary 3D object detection task.
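A minimal sketch of the auxiliary-task setup just described: a shared point encoder feeds both a segmentation head and an auxiliary detection head, so localization supervision shapes the shared features. All module shapes and names here are hypothetical stand-ins, not the paper's architecture.

```python
import torch.nn as nn


class DetectionAwareSegNet(nn.Module):
    """Sketch of an auxiliary-detection setup (hypothetical modules).

    A shared point encoder feeds a semantic segmentation head and an
    auxiliary 3D box head; detection supervision shapes the shared
    features, in the spirit of the DASS summary above.
    """

    def __init__(self, encoder: nn.Module, feat_dim: int,
                 n_classes: int, box_dim: int = 7):
        super().__init__()
        self.encoder = encoder                        # shared backbone
        self.seg_head = nn.Linear(feat_dim, n_classes)
        self.det_head = nn.Linear(feat_dim, box_dim)  # auxiliary task

    def forward(self, points):
        feat = self.encoder(points)                   # (B, N, feat_dim)
        return self.seg_head(feat), self.det_head(feat)
```

Training would combine the segmentation loss with a weighted detection loss; the auxiliary head can be dropped at inference.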
arXiv Detail & Related papers (2020-09-22T14:17:40Z)
- InfoFocus: 3D Object Detection for Autonomous Driving with Dynamic Information Modeling [65.47126868838836]
We propose a novel 3D object detection framework with dynamic information modeling.
Coarse predictions are generated in the first stage via a voxel-based region proposal network.
Experiments are conducted on the large-scale nuScenes 3D detection benchmark.
arXiv Detail & Related papers (2020-07-16T18:27:08Z)
- Associate-3Ddet: Perceptual-to-Conceptual Association for 3D Point Cloud Object Detection [64.2159881697615]
Object detection from 3D point clouds remains a challenging task, though recent studies have pushed the envelope with deep learning techniques.
We propose a domain-adaptation-like approach to enhance the robustness of the feature representation.
Our simple yet effective approach fundamentally boosts the performance of 3D point cloud object detection and achieves state-of-the-art results.
arXiv Detail & Related papers (2020-06-08T05:15:06Z)
- D3Feat: Joint Learning of Dense Detection and Description of 3D Local Features [51.04841465193678]
We leverage a 3D fully convolutional network for 3D point clouds.
We propose a novel and practical learning mechanism that densely predicts both a detection score and a description feature for each 3D point.
Our method achieves state-of-the-art results in both indoor and outdoor scenarios.
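The dual output described, a per-point descriptor plus a per-point detection score, can be sketched as follows; the backbone is a stand-in for the paper's 3D fully convolutional network, and the head design is assumed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseDetectDescribe(nn.Module):
    """Sketch of D3Feat's dual output (backbone and head are stand-ins).

    For every 3D point the network predicts a description feature and
    a detection score; keypoints are the highest-scoring points.
    """

    def __init__(self, backbone: nn.Module, feat_dim: int):
        super().__init__()
        self.backbone = backbone                      # e.g. a 3D FCN
        self.score_head = nn.Linear(feat_dim, 1)

    def forward(self, points):
        feat = self.backbone(points)                  # (N, feat_dim)
        desc = F.normalize(feat, dim=1)               # per-point descriptor
        score = torch.sigmoid(self.score_head(feat))  # per-point score
        return desc, score.squeeze(-1)
```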
arXiv Detail & Related papers (2020-03-06T12:51:09Z)