Det6D: A Ground-Aware Full-Pose 3D Object Detector for Improving Terrain
Robustness
- URL: http://arxiv.org/abs/2207.09412v1
- Date: Tue, 19 Jul 2022 17:12:48 GMT
- Title: Det6D: A Ground-Aware Full-Pose 3D Object Detector for Improving Terrain
Robustness
- Authors: Junyuan Ouyang, Haoyao Chen
- Abstract summary: We propose Det6D, the first full-degree-of-freedom 3D object detector without spatial and postural limitations.
To predict full-degree poses, including pitch and roll, we design a ground-aware orientation branch.
Experiments on various datasets demonstrate the effectiveness and robustness of our method in different terrains.
- Score: 1.4620086904601473
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate 3D object detection with LiDAR is critical for autonomous driving.
Existing research largely relies on the flat-world assumption. However, actual
roads can be complex, with steep sections that break this premise. Current
methods suffer performance degradation in such cases because they have
difficulty correctly detecting objects on sloped terrain. In this work, we
propose Det6D, the first full-degree-of-freedom 3D object detector without
spatial and postural limitations, to improve terrain robustness. We adopt a
point-based framework for its capability of detecting objects across the
entire spatial range. To predict full-degree poses, including pitch and roll,
we design a ground-aware orientation branch that leverages the local ground
constraints. Given the difficulty of long-tail non-flat scene data collection
and 6D pose annotation, we present Slope-Aug, a data augmentation method for
synthesizing non-flat terrain from existing datasets recorded in flat scenes.
Experiments on various datasets demonstrate the effectiveness and robustness of
our method in different terrains. We further conduct an extended experiment to
explore how the network predicts the two additional pose angles. The proposed modules
are plug-and-play for existing point-based frameworks. The code is available at
https://github.com/HITSZ-NRSL/De6D.
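To make the abstract's two mechanisms concrete, the following is a minimal NumPy sketch. It is not taken from the Det6D code; the function names, the coordinate convention (x forward, y left, z up), and the hinge-based tilt are illustrative assumptions. fit_ground_plane and pitch_roll_from_normal capture the intuition behind the ground-aware orientation branch: an object resting on a slope shares the inclination of the locally fitted ground plane, so pitch and roll can be read off the plane normal. slope_aug captures a Slope-Aug-style augmentation: points of a flat scene beyond a hinge line are rotated about that line to synthesize an uphill section.

import numpy as np

def fit_ground_plane(ground_points: np.ndarray) -> np.ndarray:
    # Least-squares fit of z = a*x + b*y + c to the ground points around an
    # object; returns the unit normal of the fitted local ground plane.
    A = np.c_[ground_points[:, 0], ground_points[:, 1], np.ones(len(ground_points))]
    coeffs, *_ = np.linalg.lstsq(A, ground_points[:, 2], rcond=None)  # [a, b, c]
    normal = np.array([-coeffs[0], -coeffs[1], 1.0])
    return normal / np.linalg.norm(normal)

def pitch_roll_from_normal(normal: np.ndarray):
    # Tilt angles that rotate the world +z axis onto the local ground normal.
    # Sign conventions depend on the detector's box definition; these are
    # only illustrative.
    nx, ny, nz = normal
    pitch = float(np.arctan2(nx, nz))   # forward/backward inclination
    roll = float(-np.arctan2(ny, nz))   # sideways inclination
    return pitch, roll

def slope_aug(points: np.ndarray, hinge_x: float, slope_deg: float) -> np.ndarray:
    # Rotate every point beyond x = hinge_x about the hinge line (parallel to
    # the y-axis) so a flat scene acquires an uphill section. In a real
    # pipeline the same rigid transform would also be applied to the box
    # centers and orientations of the affected objects.
    theta = np.deg2rad(slope_deg)
    rot_y = np.array([[np.cos(theta), 0.0, -np.sin(theta)],
                      [0.0,           1.0,  0.0],
                      [np.sin(theta), 0.0,  np.cos(theta)]])
    out = points.copy()
    mask = out[:, 0] > hinge_x
    shifted = out[mask, :3] - np.array([hinge_x, 0.0, 0.0])
    out[mask, :3] = shifted @ rot_y.T + np.array([hinge_x, 0.0, 0.0])
    return out

For example, slope_aug(points, hinge_x=20.0, slope_deg=15.0) tilts everything more than 20 m ahead of the sensor upward by 15 degrees, and pitch_roll_from_normal(fit_ground_plane(nearby_ground)) then yields the two extra angles for an object standing on that synthetic slope.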
Related papers
- PatchContrast: Self-Supervised Pre-training for 3D Object Detection [14.603858163158625]
We introduce PatchContrast, a novel self-supervised point cloud pre-training framework for 3D object detection.
We show that our method outperforms existing state-of-the-art models on three commonly-used 3D detection datasets.
arXiv Detail & Related papers (2023-08-14T07:45:54Z)
- FocalFormer3D: Focusing on Hard Instance for 3D Object Detection [97.56185033488168]
False negatives (FN) in 3D object detection can lead to potentially dangerous situations in autonomous driving.
In this work, we propose Hard Instance Probing (HIP), a general pipeline that identifies FN in a multi-stage manner.
We instantiate this method as FocalFormer3D, a simple yet effective detector that excels at excavating difficult objects.
arXiv Detail & Related papers (2023-08-08T20:06:12Z)
- DetZero: Rethinking Offboard 3D Object Detection with Long-term Sequential Point Clouds [55.755450273390004]
Existing offboard 3D detectors always follow a modular pipeline design to take advantage of unlimited sequential point clouds.
We have found that the full potential of offboard 3D detectors is not explored mainly due to two reasons: (1) the onboard multi-object tracker cannot generate sufficient complete object trajectories, and (2) the motion state of objects poses an inevitable challenge for the object-centric refining stage.
To tackle these problems, we propose a novel paradigm of offboard 3D object detection, named DetZero.
arXiv Detail & Related papers (2023-06-09T16:42:00Z)
- AGO-Net: Association-Guided 3D Point Cloud Object Detection Network [86.10213302724085]
We propose a novel 3D detection framework that associates intact features for objects via domain adaptation.
We achieve new state-of-the-art performance on the KITTI 3D detection benchmark in both accuracy and speed.
arXiv Detail & Related papers (2022-08-24T16:54:38Z)
- Embracing Single Stride 3D Object Detector with Sparse Transformer [63.179720817019096]
In LiDAR-based 3D object detection for autonomous driving, the ratio of the object size to input scene size is significantly smaller compared to 2D detection cases.
Many 3D detectors directly follow the common practice of 2D detectors, which downsample the feature maps even after quantizing the point clouds.
We propose Single-stride Sparse Transformer (SST) to maintain the original resolution from the beginning to the end of the network.
arXiv Detail & Related papers (2021-12-13T02:12:02Z)
- Weakly Supervised Training of Monocular 3D Object Detectors Using Wide Baseline Multi-view Traffic Camera Data [19.63193201107591]
7DoF prediction of vehicles at an intersection is an important task for assessing potential conflicts between road users.
We develop an approach using a weakly supervised method of fine-tuning 3D object detectors for traffic observation cameras.
Our method achieves vehicle 7DoF pose prediction accuracy on our dataset comparable to the top performing monocular 3D object detectors on autonomous vehicle datasets.
arXiv Detail & Related papers (2021-10-21T08:26:48Z)
- Delving into Localization Errors for Monocular 3D Object Detection [85.77319416168362]
Estimating 3D bounding boxes from monocular images is an essential component in autonomous driving.
In this work, we quantify the impact introduced by each sub-task and find that 'localization error' is the vital factor restricting monocular 3D detection.
arXiv Detail & Related papers (2021-03-30T10:38:01Z)
- DOPS: Learning to Detect 3D Objects and Predict their 3D Shapes [54.239416488865565]
We propose a fast single-stage 3D object detection method for LIDAR data.
The core novelty of our method is a fast, single-pass architecture that both detects objects in 3D and estimates their shapes.
We find that our proposed method achieves state-of-the-art results on object detection in ScanNet scenes by a 5% margin, and top results on the Open dataset by a 3.4% margin.
arXiv Detail & Related papers (2020-04-02T17:48:50Z)
- Boundary-Aware Dense Feature Indicator for Single-Stage 3D Object Detection from Point Clouds [32.916690488130506]
We propose a universal module that helps 3D detectors focus on the densest region of the point clouds in a boundary-aware manner.
Experiments on KITTI dataset show that DENFI improves the performance of the baseline single-stage detector remarkably.
arXiv Detail & Related papers (2020-04-01T01:21:23Z)