Feature-based model selection for object detection from point cloud data
- URL: http://arxiv.org/abs/2209.12419v1
- Date: Mon, 26 Sep 2022 05:03:59 GMT
- Title: Feature-based model selection for object detection from point cloud data
- Authors: Kairi Tokuda, Ryoichi Shinkuma, Takehiro Sato, Eiji Oki
- Abstract summary: In smart monitoring, object detection from point cloud data is implemented for detecting moving objects such as vehicles and pedestrians.
We propose a feature-based model selection framework that creates various deep learning models by using multiple DL methods.
It selects the most suitable DL model for the object detection task in accordance with the features of the point cloud data acquired in the real environment.
- Score: 5.887969742827488
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Smart monitoring using three-dimensional (3D) image sensors has been
attracting attention in the context of smart cities. In smart monitoring,
object detection from point cloud data acquired by 3D image sensors is
implemented for detecting moving objects such as vehicles and pedestrians to
ensure safety on the road. However, the features of point cloud data are
diversified due to the characteristics of the light detection and ranging
(LIDAR) units used as 3D image sensors or the installation position of the 3D
image sensors.
Although a variety of deep learning (DL) models for object detection from point
cloud data have been studied to date, no research has considered how to use
multiple DL models in accordance with the features of the point cloud data. In
this work, we propose a feature-based model selection framework that creates
various DL models by using multiple DL methods and by utilizing training data
with pseudo-incompleteness generated by two artificial techniques: sampling and
noise addition. It selects the most suitable DL model for the object detection
task in accordance with the features of the point cloud data acquired in the
real environment. To demonstrate the effectiveness of the proposed framework,
we compare the performance of multiple DL models using benchmark datasets
created from the KITTI dataset and present example results of object detection
obtained through a real outdoor experiment. Depending on the situation, the
detection accuracy varies by up to 32% between DL models, which confirms the
importance of selecting an appropriate DL model according to the situation.
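
The abstract names two concrete mechanisms: generating pseudo-incomplete training data by sampling and noise addition, and then selecting the DL model that best matches the features of the acquired point cloud. The sketch below is only an illustration of that idea, not the authors' implementation; the keep ratio, noise scale, point-count threshold, and model names are assumptions made for the example.

```python
import numpy as np

def sample_points(points: np.ndarray, keep_ratio: float = 0.5) -> np.ndarray:
    """Pseudo-incompleteness technique 1: randomly drop points to mimic a
    sparser scan (e.g., a LIDAR with fewer beams or a less favorable mounting)."""
    n_keep = max(1, int(len(points) * keep_ratio))
    idx = np.random.choice(len(points), size=n_keep, replace=False)
    return points[idx]

def add_noise(points: np.ndarray, sigma_m: float = 0.05) -> np.ndarray:
    """Pseudo-incompleteness technique 2: jitter the xyz coordinates with
    Gaussian noise to mimic range-measurement error."""
    noisy = points.copy()
    noisy[:, :3] += np.random.normal(0.0, sigma_m, size=(len(points), 3))
    return noisy

def select_model(points: np.ndarray, models: dict):
    """Feature-based selection: pick the model trained under conditions that
    best match the acquired scan. Here the feature is simply the point count;
    the actual framework extracts richer features from the point cloud."""
    if len(points) < 20_000:      # sparse scan -> model trained on sampled data
        return models["sampled"]
    return models["clean"]        # dense scan -> model trained on clean data
```

In the full framework, each DL method would be trained on the clean data and on every pseudo-incomplete variant, and the selector would map the observed features of a scan to the model expected to detect best under those conditions.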
Related papers
- VFMM3D: Releasing the Potential of Image by Vision Foundation Model for Monocular 3D Object Detection [80.62052650370416]
Monocular 3D object detection holds significant importance across various applications, including autonomous driving and robotics.
In this paper, we present VFMM3D, an innovative framework that leverages the capabilities of Vision Foundation Models (VFMs) to accurately transform single-view images into LiDAR point cloud representations.
arXiv Detail & Related papers (2024-04-15T03:12:12Z) - 3DiffTection: 3D Object Detection with Geometry-Aware Diffusion Features [70.50665869806188]
3DiffTection is a state-of-the-art method for 3D object detection from single images.
We fine-tune a diffusion model to perform novel view synthesis conditioned on a single image.
We further train the model on target data with detection supervision.
arXiv Detail & Related papers (2023-11-07T23:46:41Z) - Reviewing 3D Object Detectors in the Context of High-Resolution 3+1D
Radar [0.7279730418361995]
High-resolution imaging 4D (3+1D) radar sensors have driven deep learning-based radar perception research.
We investigate deep learning-based models operating on radar point clouds for 3D object detection.
arXiv Detail & Related papers (2023-08-10T10:10:43Z) - AGO-Net: Association-Guided 3D Point Cloud Object Detection Network [86.10213302724085]
We propose a novel 3D detection framework that associates intact features for objects via domain adaptation.
We achieve new state-of-the-art performance on the KITTI 3D detection benchmark in both accuracy and speed.
arXiv Detail & Related papers (2022-08-24T16:54:38Z) - 3D-VField: Learning to Adversarially Deform Point Clouds for Robust 3D
Object Detection [111.32054128362427]
In safety-critical settings, robustness on out-of-distribution and long-tail samples is fundamental to circumvent dangerous issues.
We substantially improve the generalization of 3D object detectors to out-of-domain data by taking into account deformed point clouds during training.
We propose and share CrashD, an open-source synthetic dataset of realistic damaged and rare cars.
arXiv Detail & Related papers (2021-12-09T08:50:54Z) - Aug3D-RPN: Improving Monocular 3D Object Detection by Synthetic Images
with Virtual Depth [64.29043589521308]
We propose a rendering module to augment the training data by synthesizing images with virtual-depths.
The rendering module takes as input the RGB image and its corresponding sparse depth image, and outputs a variety of photo-realistic synthetic images.
Besides, we introduce an auxiliary module to improve the detection model by jointly optimizing it through a depth estimation task.
arXiv Detail & Related papers (2021-07-28T11:00:47Z) - DOPS: Learning to Detect 3D Objects and Predict their 3D Shapes [54.239416488865565]
We propose a fast single-stage 3D object detection method for LIDAR data.
The core novelty of our method is a fast, single-pass architecture that both detects objects in 3D and estimates their shapes.
We find that our proposed method outperforms the state of the art by 5% on object detection in ScanNet scenes and achieves top results by 3.4% on the Open dataset.
arXiv Detail & Related papers (2020-04-02T17:48:50Z) - Boundary-Aware Dense Feature Indicator for Single-Stage 3D Object
Detection from Point Clouds [32.916690488130506]
We propose a universal module that helps 3D detectors focus on the densest region of the point clouds in a boundary-aware manner.
Experiments on the KITTI dataset show that DENFI remarkably improves the performance of the baseline single-stage detector.
arXiv Detail & Related papers (2020-04-01T01:21:23Z) - 3D Object Detection From LiDAR Data Using Distance Dependent Feature
Extraction [7.04185696830272]
This work proposes an improvement for 3D object detectors by taking into account the properties of LiDAR point clouds over distance.
Results show that training separate networks for close-range and long-range objects boosts performance for all KITTI benchmark difficulties.
arXiv Detail & Related papers (2020-03-02T13:16:35Z)