Uni3DETR: Unified 3D Detection Transformer
- URL: http://arxiv.org/abs/2310.05699v1
- Date: Mon, 9 Oct 2023 13:20:20 GMT
- Title: Uni3DETR: Unified 3D Detection Transformer
- Authors: Zhenyu Wang, Yali Li, Xi Chen, Hengshuang Zhao, Shengjin Wang
- Abstract summary: We propose a unified 3D detector that addresses indoor and outdoor detection within the same framework.
Specifically, we employ a detection transformer with point-voxel interaction for object prediction.
We then propose the mixture of query points, which exploits global information for dense, small-range indoor scenes and local information for large-range, sparse outdoor ones.
- Score: 75.35012428550135
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing point-cloud-based 3D detectors are designed for a particular scene
type, either indoor or outdoor. Because of the substantial differences in object
distribution and point density across point clouds collected from different
environments, coupled with the intricate nature of 3D metrics, a unified network
architecture that can accommodate diverse scenes is still lacking. In this paper,
we propose Uni3DETR, a unified 3D detector that addresses indoor and outdoor 3D
detection within the same framework. Specifically, we employ a detection
transformer with point-voxel interaction for object prediction, which leverages
voxel features and points for cross-attention and remains robust to discrepancies
across data. We then propose the mixture of query points, which exploits global
information for dense, small-range indoor scenes and local information for
large-range, sparse outdoor ones. Furthermore, our proposed decoupled IoU provides
an easy-to-optimize training target for localization by disentangling the xy and z
space. Extensive experiments validate that Uni3DETR performs consistently well on
both indoor and outdoor 3D detection. In contrast to previous specialized
detectors, which may perform well on particular datasets but degrade substantially
on different scenes, Uni3DETR demonstrates strong generalization ability under
heterogeneous conditions (Fig. 1).
Code is available at https://github.com/zhenyuw16/Uni3DETR.
Related papers
- OV-Uni3DETR: Towards Unified Open-Vocabulary 3D Object Detection via Cycle-Modality Propagation [67.56268991234371] (arXiv, 2024-03-28)
OV-Uni3DETR achieves state-of-the-art performance on various scenarios, surpassing existing methods by more than 6% on average.
Code and pre-trained models will be released later.
- AGO-Net: Association-Guided 3D Point Cloud Object Detection Network [86.10213302724085] (arXiv, 2022-08-24)
We propose a novel 3D detection framework that associates intact features for objects via domain adaptation.
We achieve new state-of-the-art performance on the KITTI 3D detection benchmark in both accuracy and speed.
- Unifying Voxel-based Representation with Transformer for 3D Object Detection [143.91910747605107] (arXiv, 2022-06-01)
We present UVTR, a unified framework for multi-modality 3D object detection.
The proposed method unifies multi-modality representations in the voxel space for accurate and robust single- or cross-modality 3D detection.
UVTR achieves leading performance on the nuScenes test set, with 69.7%, 55.1%, and 71.1% NDS for LiDAR, camera, and multi-modality inputs, respectively.
- Shape Prior Non-Uniform Sampling Guided Real-time Stereo 3D Object Detection [59.765645791588454] (arXiv, 2021-06-18)
The recently introduced RTS3D builds an efficient 4D feature-consistency embedding space as an intermediate object representation without depth supervision.
We propose a shape-prior non-uniform sampling strategy that samples densely in the outer region and sparsely in the inner region.
Our method improves AP3D by 2.57% with almost no extra network parameters.
- Stereo RGB and Deeper LIDAR Based Network for 3D Object Detection [40.34710686994996] (arXiv, 2020-06-09)
3D object detection has become an important task in autonomous driving scenarios.
Previous works process 3D point clouds using either projection-based or voxel-based models.
We propose the Stereo RGB and Deeper LIDAR framework, which can exploit semantic and spatial information simultaneously.
- D3Feat: Joint Learning of Dense Detection and Description of 3D Local Features [51.04841465193678] (arXiv, 2020-03-06)
We leverage a 3D fully convolutional network for 3D point clouds.
We propose a novel and practical learning mechanism that densely predicts both a detection score and a description feature for each 3D point.
Our method achieves state-of-the-art results in both indoor and outdoor scenarios.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.