3D Harmonic Loss: Towards Task-consistent and Time-friendly 3D Object
Detection on Edge for Intelligent Transportation System
- URL: http://arxiv.org/abs/2211.03407v1
- Date: Mon, 7 Nov 2022 10:11:48 GMT
- Authors: Haolin Zhang, M S Mekala, Zulkar Nain, Ju H. Park, Ho-Youl Jung
- Abstract summary: We propose a 3D harmonic loss function to relieve inconsistent point-cloud-based predictions.
Our proposed method considerably improves performance over benchmark models.
Our code is open-source and publicly available.
- Score: 28.55894241049706
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Edge computing-based 3D perception has received attention in intelligent
transportation systems (ITS) because real-time monitoring of traffic candidates
potentially strengthens Vehicle-to-Everything (V2X) orchestration. Thanks to
LiDAR's capability of precisely measuring depth information about the
surroundings, a growing number of studies focus on LiDAR-based 3D detection,
which has significantly advanced 3D perception. However, few methods meet the
real-time requirements of edge deployment because of their computation-intensive
operations. Moreover, an inconsistency problem in object detection remains
unexplored in the point-cloud domain, owing to its large sparsity. This paper
thoroughly analyses this problem, motivated by recent works identifying
inconsistency problems in the image domain. Accordingly, we propose a 3D
harmonic loss function to relieve inconsistent point-cloud-based predictions.
Moreover, the feasibility of the 3D harmonic loss is demonstrated from a
mathematical optimization perspective. Simulations on the KITTI and DAIR-V2X-I
datasets show that our proposed method considerably improves performance over
benchmark models. Further, simulated deployment on an edge device (Jetson
Xavier TX) validates our proposed model's efficiency. Our code is open-source
and publicly available.
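The abstract does not give the paper's actual 3D harmonic loss formulation. As a purely illustrative sketch of the consistency idea it targets, the toy loss below couples a box's classification confidence with its localization quality (axis-aligned 3D IoU) through a harmonic mean, so the loss is low only when both signals agree. The function names and the formulation are assumptions for illustration, not the authors' method.

```python
import numpy as np

def aabb_iou_3d(box_a, box_b):
    """Axis-aligned 3D IoU. Boxes are (x_min, y_min, z_min, x_max, y_max, z_max)."""
    lo = np.maximum(box_a[:3], box_b[:3])
    hi = np.minimum(box_a[3:], box_b[3:])
    inter = np.prod(np.clip(hi - lo, 0.0, None))   # overlap volume (0 if disjoint)
    vol_a = np.prod(box_a[3:] - box_a[:3])
    vol_b = np.prod(box_b[3:] - box_b[:3])
    return inter / (vol_a + vol_b - inter)

def consistency_loss(cls_score, pred_box, gt_box, eps=1e-6):
    """Penalize disagreement between classification confidence and
    localization quality via their harmonic mean (illustrative only)."""
    iou = aabb_iou_3d(pred_box, gt_box)
    harmonic = 2.0 * cls_score * iou / (cls_score + iou + eps)
    return 1.0 - harmonic  # low loss only when BOTH score and IoU are high
```

The harmonic mean is small whenever either factor is small, so a confidently classified but poorly localized box (or vice versa) is penalized; this is one common way to express prediction consistency, not necessarily the paper's.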
Related papers
- DM3D: Distortion-Minimized Weight Pruning for Lossless 3D Object Detection [42.07920565812081]
We propose a novel post-training weight pruning scheme for 3D object detection.
It determines redundant parameters in the pretrained model that lead to minimal distortion in both locality and confidence.
This framework aims to minimize detection distortion of network output to maximally maintain detection precision.
arXiv Detail & Related papers (2024-07-02T09:33:32Z) - Empowering Urban Traffic Management: Elevated 3D LiDAR for Data Collection and Advanced Object Detection Analysis [4.831084635928491]
This paper presents a novel framework that transforms the detection and analysis of 3D objects in traffic scenarios by utilizing the power of elevated LiDAR sensors.
Due to the difficulty of obtaining real-world traffic datasets, we utilize a simulator to generate 3D point clouds for specific scenarios.
arXiv Detail & Related papers (2024-05-21T21:12:09Z) - BSH-Det3D: Improving 3D Object Detection with BEV Shape Heatmap [10.060577111347152]
We propose a novel LiDAR-based 3D object detection model named BSH-Det3D.
It applies an effective way to enhance spatial features by estimating complete shapes from a bird's eye view.
Experiments on the KITTI benchmark show state-of-the-art (SOTA) performance in terms of both accuracy and speed.
arXiv Detail & Related papers (2023-03-03T15:13:11Z) - Normal Transformer: Extracting Surface Geometry from LiDAR Points
Enhanced by Visual Semantics [6.516912796655748]
This paper presents a technique for estimating surface normals from 3D point clouds and 2D colour images.
We have developed a transformer neural network that learns to utilise the hybrid information of visual semantic and 3D geometric data.
arXiv Detail & Related papers (2022-11-19T03:55:09Z) - AGO-Net: Association-Guided 3D Point Cloud Object Detection Network [86.10213302724085]
We propose a novel 3D detection framework that associates intact features for objects via domain adaptation.
We achieve new state-of-the-art performance on the KITTI 3D detection benchmark in both accuracy and speed.
arXiv Detail & Related papers (2022-08-24T16:54:38Z) - Homography Loss for Monocular 3D Object Detection [54.04870007473932]
A differentiable loss function, termed as Homography Loss, is proposed to achieve the goal, which exploits both 2D and 3D information.
Our method outperforms other state-of-the-art methods by a large margin on the KITTI 3D datasets.
arXiv Detail & Related papers (2022-04-02T03:48:03Z) - Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR-based
Perception [122.53774221136193]
State-of-the-art methods for driving-scene LiDAR-based perception often project the point clouds to 2D space and then process them via 2D convolution.
A natural remedy is to utilize the 3D voxelization and 3D convolution network.
We propose a new framework for the outdoor LiDAR segmentation, where cylindrical partition and asymmetrical 3D convolution networks are designed to explore the 3D geometric pattern.
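The cylindrical partition described above rests on a standard Cartesian-to-cylindrical coordinate change, under which radially sparse LiDAR returns fall into more evenly populated bins. A minimal sketch of that conversion (the function name is assumed, not from the paper):

```python
import numpy as np

def cartesian_to_cylindrical(points):
    """Convert (N, 3) xyz LiDAR points to (rho, phi, z) cylindrical
    coordinates, so voxel bins follow the sensor's radial density pattern."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rho = np.hypot(x, y)    # radial distance from the sensor axis
    phi = np.arctan2(y, x)  # azimuth angle in (-pi, pi]
    return np.stack([rho, phi, z], axis=1)
```

Voxelizing over (rho, phi, z) instead of (x, y, z) is what makes the partition "cylindrical"; the paper's asymmetrical convolutions then operate on those voxels.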
arXiv Detail & Related papers (2021-09-12T06:25:11Z) - Semi-supervised 3D Object Detection via Adaptive Pseudo-Labeling [18.209409027211404]
3D object detection is an important task in computer vision.
Most existing methods require a large number of high-quality 3D annotations, which are expensive to collect.
We propose a novel semi-supervised framework based on pseudo-labeling for outdoor 3D object detection tasks.
arXiv Detail & Related papers (2021-08-15T02:58:43Z) - PLUME: Efficient 3D Object Detection from Stereo Images [95.31278688164646]
Existing methods tackle the problem in two steps: first, depth estimation is performed; a pseudo-LiDAR point cloud representation is then computed from the depth estimates; finally, object detection is performed in 3D space.
We propose a model that unifies these two tasks in the same metric space.
Our approach achieves state-of-the-art performance on the challenging KITTI benchmark, with significantly reduced inference time compared with existing methods.
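The two-step pseudo-LiDAR pipeline that PLUME unifies away relies on a standard pinhole back-projection from a depth map to 3D points. A hedged sketch of that conversion step (the function name and signature are assumptions for illustration):

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth map into an (H*W, 3) point cloud in the
    camera frame using pinhole intrinsics: fx, fy are focal lengths in
    pixels; cx, cy is the principal point."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth.ravel()
    x = (u.ravel() - cx) * z / fx
    y = (v.ravel() - cy) * z / fy
    return np.stack([x, y, z], axis=1)
```

PLUME's contribution is precisely to avoid materializing this intermediate point cloud by performing both tasks in one metric space.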
arXiv Detail & Related papers (2021-01-17T05:11:38Z) - InfoFocus: 3D Object Detection for Autonomous Driving with Dynamic
Information Modeling [65.47126868838836]
We propose a novel 3D object detection framework with dynamic information modeling.
Coarse predictions are generated in the first stage via a voxel-based region proposal network.
Experiments are conducted on the large-scale nuScenes 3D detection benchmark.
arXiv Detail & Related papers (2020-07-16T18:27:08Z) - SESS: Self-Ensembling Semi-Supervised 3D Object Detection [138.80825169240302]
We propose SESS, a self-ensembling semi-supervised 3D object detection framework. Specifically, we design a thorough perturbation scheme to enhance generalization of the network on unlabeled and new unseen data.
Our SESS achieves competitive performance compared to the state-of-the-art fully-supervised method by using only 50% labeled data.
arXiv Detail & Related papers (2019-12-26T08:48:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.