What You See Is What You Detect: Towards better Object Densification in
3D detection
- URL: http://arxiv.org/abs/2310.17842v2
- Date: Tue, 14 Nov 2023 23:03:35 GMT
- Authors: Tianran Liu, Zeping Zhang, Morteza Mousa Pasandi, Robert Laganiere
- Abstract summary: The widely used full-shape completion approach actually leads to a higher error upper bound, especially for faraway objects and small objects like pedestrians.
We introduce a visible part completion method that requires only 11.3% of the prediction points that previous methods generate.
To recover the dense representation, we propose a mesh-deformation-based method to augment the point set associated with visible foreground objects.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent works have demonstrated the importance of object completion in 3D
perception from Lidar signals. Several methods have been proposed in which
modules densify the point clouds produced by laser scanners,
leading to better recall and more accurate results. Pursuing that direction,
we present in this work a counter-intuitive perspective: the widely used
full-shape completion approach actually leads to a higher error upper bound,
especially for faraway objects and small objects like pedestrians. Based on
this observation, we introduce a visible part completion method that requires
only 11.3\% of the prediction points that previous methods generate. To recover
the dense representation, we propose a mesh-deformation-based method to augment
the point set associated with visible foreground objects. Considering that our
approach focuses only on the visible part of the foreground objects to achieve
accurate 3D detection, we named our method What You See Is What You Detect
(WYSIWYD). Our proposed method is thus a detector-independent model that
consists of two parts: an Intra-Frustum Segmentation Transformer (IFST) and a
Mesh Depth Completion Network (MDCNet) that predicts the foreground depth from
mesh deformation. This way, our model does not require the time-consuming
full-depth completion task used by most pseudo-lidar-based methods. Our
experimental evaluation shows that our approach can provide up to 12.2\%
performance improvement over most of the public baseline models on the KITTI
and NuScenes datasets, bringing the state of the art to a new level. The code
will be available at https://github.com/Orbis36/WYSIWYD
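As a rough illustration of the visible-part completion idea described in the abstract, the toy sketch below densifies a foreground point set by interpolating each point toward its nearest neighbor. This is only a minimal stand-in for intuition: the function name and the interpolation scheme are assumptions of this sketch, not the paper's IFST or MDCNet, which predicts foreground depth from mesh deformation.

```python
import numpy as np

def densify_visible_points(points: np.ndarray, factor: int = 4) -> np.ndarray:
    """Toy stand-in for visible-part completion: upsample a foreground
    point set (N x 3, N >= 2) by placing `factor` interpolated points
    between each point and its nearest neighbor. Illustrative only."""
    # Pairwise distances; mask self-distance so argmin finds a neighbor.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = d.argmin(axis=1)          # index of each point's nearest neighbor
    new_pts = [points]
    # Interior interpolation weights, excluding the endpoints 0 and 1.
    for t in np.linspace(0.0, 1.0, factor + 2)[1:-1]:
        new_pts.append((1.0 - t) * points + t * points[nn])
    return np.concatenate(new_pts, axis=0)

rng = np.random.default_rng(0)
sparse = rng.normal(size=(16, 3))
dense = densify_visible_points(sparse, factor=4)
print(dense.shape)  # (80, 3): 16 originals plus 4 interpolants each
```

Because only visible foreground points are augmented, the output stays far smaller than a full-shape or full-depth completion would be, which is the motivation the abstract gives for the 11.3\% figure.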
Related papers
- FoundPose: Unseen Object Pose Estimation with Foundation Features
FoundPose is a model-based method for 6D pose estimation of unseen objects from a single RGB image.
The method can quickly onboard new objects using their 3D models without requiring any object- or task-specific training.
arXiv Detail & Related papers (2023-11-30T18:52:29Z)
- Weakly Supervised Monocular 3D Object Detection using Multi-View Projection and Direction Consistency
Monocular 3D object detection has become a mainstream approach in autonomous driving due to its ease of deployment.
Most current methods still rely on 3D point cloud data for labeling the ground truths used in the training phase.
We propose a new weakly supervised monocular 3D object detection method, which can train the model with only 2D labels marked on images.
arXiv Detail & Related papers (2023-03-15T15:14:00Z)
- Learning Object-level Point Augmentor for Semi-supervised 3D Object Detection
We propose an object-level point augmentor (OPA) that performs local transformations for semi-supervised 3D object detection.
In this way, the resultant augmentor is derived to emphasize object instances rather than irrelevant backgrounds.
Experiments on the ScanNet and SUN RGB-D datasets show that the proposed OPA performs favorably against the state-of-the-art methods.
arXiv Detail & Related papers (2022-12-19T06:56:14Z)
- ALSO: Automotive Lidar Self-supervision by Occupancy estimation
We propose a new self-supervised method for pre-training the backbone of deep perception models operating on point clouds.
The core idea is to train the model on a pretext task which is the reconstruction of the surface on which the 3D points are sampled.
The intuition is that if the network is able to reconstruct the scene surface, given only sparse input points, then it probably also captures some fragments of semantic information.
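The pretext task described above can be made concrete with a toy labeling rule: a query point along a Lidar ray is free space if it lies in front of the return and occupied at or behind it. The sketch below is a hand-rolled illustration of that intuition under assumed names; ALSO itself trains a backbone network to predict such occupancy from sparse sweeps, which this sketch does not do.

```python
import numpy as np

def lidar_occupancy_labels(scan: np.ndarray, queries: np.ndarray) -> np.ndarray:
    """Toy occupancy labeling in the spirit of a Lidar reconstruction
    pretext task. Sensor assumed at the origin. For each query point,
    find the return whose ray direction is closest (max cosine
    similarity) and label the query 0 (free, in front of the surface)
    or 1 (occupied, at or behind it). Illustrative names only."""
    scan_r = np.linalg.norm(scan, axis=1)
    scan_dir = scan / scan_r[:, None]
    q_r = np.linalg.norm(queries, axis=1)
    q_dir = queries / q_r[:, None]
    nn = (q_dir @ scan_dir.T).argmax(axis=1)  # nearest ray per query
    return (q_r >= scan_r[nn]).astype(np.int64)

scan = np.array([[2.0, 0.0, 0.0],      # return at range 2 along +x
                 [0.0, 3.0, 0.0]])     # return at range 3 along +y
queries = np.array([[1.0, 0.0, 0.0],   # in front of first return
                    [0.0, 4.0, 0.0]])  # behind second return
print(lidar_occupancy_labels(scan, queries))  # [0 1]
```

A network trained to predict these labels from the sparse returns alone must implicitly model the scene surface, which is the source of the "fragments of semantic information" the summary mentions.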
arXiv Detail & Related papers (2022-12-12T13:10:19Z)
- RBGNet: Ray-based Grouping for 3D Object Detection
We propose the RBGNet framework, a voting-based 3D detector for accurate 3D object detection from point clouds.
We propose a ray-based feature grouping module, which aggregates the point-wise features on object surfaces using a group of determined rays.
Our model achieves state-of-the-art 3D detection performance on ScanNet V2 and SUN RGB-D with remarkable performance gains.
arXiv Detail & Related papers (2022-04-05T14:42:57Z)
- FCAF3D: Fully Convolutional Anchor-Free 3D Object Detection
We present FCAF3D - a first-in-class fully convolutional anchor-free indoor 3D object detection method.
It is a simple yet effective method that uses a voxel representation of a point cloud and processes voxels with sparse convolutions.
It can handle large-scale scenes with minimal runtime through a single fully convolutional feed-forward pass.
arXiv Detail & Related papers (2021-12-01T07:28:52Z)
- Objects are Different: Flexible Monocular 3D Object Detection
We propose a flexible framework for monocular 3D object detection which explicitly decouples the truncated objects and adaptively combines multiple approaches for object depth estimation.
Experiments demonstrate that our method outperforms the state-of-the-art method by a relative 27% at the moderate level and 30% at the hard level on the KITTI test benchmark.
arXiv Detail & Related papers (2021-04-06T07:01:28Z)
- Monocular 3D Detection with Geometric Constraints Embedding and Semi-supervised Training
We propose a novel framework for monocular 3D object detection using only RGB images, called KM3D-Net.
We design a fully convolutional model to predict object keypoints, dimensions, and orientation, and then combine these estimations with perspective geometry constraints to compute the position attributes.
arXiv Detail & Related papers (2020-09-02T00:51:51Z)
- Generative Sparse Detection Networks for 3D Single-shot Object Detection
3D object detection has been widely studied due to its potential applicability to many promising areas such as robotics and augmented reality.
Yet, the sparse nature of the 3D data poses unique challenges to this task.
We propose Generative Sparse Detection Network (GSDN), a fully-convolutional single-shot sparse detection network.
arXiv Detail & Related papers (2020-06-22T15:54:24Z)
- DOPS: Learning to Detect 3D Objects and Predict their 3D Shapes
We propose a fast single-stage 3D object detection method for LIDAR data.
The core novelty of our method is a fast, single-pass architecture that both detects objects in 3D and estimates their shapes.
We find that our proposed method achieves state-of-the-art results, improving object detection by 5% in ScanNet scenes and achieving top results by 3.4% on the Open dataset.
arXiv Detail & Related papers (2020-04-02T17:48:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.