Roadside Lidar Vehicle Detection and Tracking Using Range and Intensity Background Subtraction
- URL: http://arxiv.org/abs/2201.04756v1
- Date: Thu, 13 Jan 2022 00:54:43 GMT
- Title: Roadside Lidar Vehicle Detection and Tracking Using Range and Intensity Background Subtraction
- Authors: Tianya Zhang and Peter J. Jin
- Abstract summary: We present a solution for roadside LiDAR object detection using a combination of two unsupervised learning algorithms.
The method was validated against a commercial traffic data collection platform.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we present a solution for roadside LiDAR object detection
using a combination of two unsupervised learning algorithms. The 3D point
clouds are first converted into spherical coordinates and filled into an
azimuth grid matrix using a hash function. The raw LiDAR data are then
rearranged into spatial-temporal data structures that store range, azimuth,
and intensity information. The Dynamic Mode Decomposition method decomposes
the point cloud data into low-rank backgrounds and sparse foregrounds based on
intensity-channel pattern recognition, while the Triangle Algorithm
automatically finds the dividing value that separates moving targets from the
static background according to range information. After intensity and range
background subtraction, the foreground moving objects are detected using a
density-based detector and encoded into a state-space model for tracking. The
output of the proposed model includes vehicle trajectories that can enable
many mobility and safety applications. The method was validated against a
commercial traffic data collection platform and demonstrated to be an
efficient and reliable solution for infrastructure LiDAR object detection. In
contrast to previous methods that operate directly on the scattered, discrete
point clouds, the proposed method establishes a simpler linear relationship
over the 3D measurement data, capturing the spatial-temporal structure that we
often desire.
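The three core steps the abstract names (spherical conversion, DMD-based low-rank/sparse separation, and triangle thresholding) can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: the DMD rank, the near-zero frequency tolerance, and the histogram bin count are illustrative assumptions.

```python
import numpy as np

def to_spherical(points):
    """Convert an (N, 3) array of Cartesian points to (range, azimuth, elevation)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.sqrt(x**2 + y**2 + z**2)
    azimuth = np.arctan2(y, x)
    elevation = np.arcsin(z / np.maximum(rng, 1e-12))
    return rng, azimuth, elevation

def dmd_background(X, rank=4, tol=1e-2):
    """Estimate the low-rank static background of a (cells x frames) matrix X
    via exact Dynamic Mode Decomposition: modes whose continuous-time
    frequency is near zero are treated as background; X minus the result
    is the sparse foreground."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, S, Vh = np.linalg.svd(X1, full_matrices=False)
    r = min(rank, int(np.sum(S > 1e-10 * S[0])))    # truncate tiny singular values
    U, S, Vh = U[:, :r], S[:r], Vh[:r, :]
    Atilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / S)
    evals, W = np.linalg.eig(Atilde)
    Phi = X2 @ Vh.conj().T @ np.diag(1.0 / S) @ W   # DMD modes
    omega = np.log(evals.astype(complex))           # frequencies (dt = 1)
    bg = np.abs(omega) < tol                        # near-zero -> static mode
    amp = np.linalg.pinv(Phi) @ X[:, 0].astype(complex)
    t = np.arange(X.shape[1])
    dynamics = amp[bg, None] * np.exp(omega[bg, None] * t[None, :])
    return (Phi[:, bg] @ dynamics).real

def triangle_threshold(values, bins=256):
    """Triangle-algorithm threshold: pick the bin whose histogram point lies
    farthest from the straight line joining the histogram peak to its far tail."""
    hist, edges = np.histogram(values, bins=bins)
    peak = int(np.argmax(hist))
    nz = np.nonzero(hist)[0]
    tail = int(nz[-1] if (peak - nz[0]) < (nz[-1] - peak) else nz[0])
    x0, y0, x1, y1 = peak, hist[peak], tail, hist[tail]
    lo, hi = (x0, x1) if x0 < x1 else (x1, x0)
    xs = np.arange(lo, hi + 1)
    # perpendicular distance (up to a constant factor) to the peak-tail line
    d = np.abs((y1 - y0) * xs - (x1 - x0) * hist[lo:hi + 1] + x1 * y0 - y1 * x0)
    return edges[xs[int(np.argmax(d))]]
```

In the full pipeline described by the abstract, the thresholded foreground cells would then be clustered by a density-based detector (e.g. DBSCAN) and fed to a state-space tracker such as a Kalman filter; those stages are omitted here.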
Related papers
- DSLO: Deep Sequence LiDAR Odometry Based on Inconsistent Spatio-temporal Propagation [66.8732965660931]
The paper introduces DSLO, a 3D point cloud sequence learning model for LiDAR odometry based on inconsistent spatio-temporal propagation.
It consists of a pyramid structure with a sequential pose module, a hierarchical pose refinement module, and a temporal feature propagation module.
arXiv Detail & Related papers (2024-09-01T15:12:48Z) - Robust 3D Object Detection from LiDAR-Radar Point Clouds via Cross-Modal Feature Augmentation [7.364627166256136]
This paper presents a novel framework for robust 3D object detection from point clouds via cross-modal hallucination.
We introduce multiple alignments on both spatial and feature levels to achieve simultaneous backbone refinement and hallucination generation.
Experiments on the View-of-Delft dataset show that our proposed method outperforms the state-of-the-art (SOTA) methods for both radar and LiDAR object detection.
arXiv Detail & Related papers (2023-09-29T15:46:59Z) - Improving Online Lane Graph Extraction by Object-Lane Clustering [106.71926896061686]
We propose an architecture and loss formulation to improve the accuracy of local lane graph estimates.
The proposed method learns to assign the objects to centerlines by considering the centerlines as cluster centers.
We show that our method can achieve significant performance improvements by using the outputs of existing 3D object detection methods.
arXiv Detail & Related papers (2023-07-20T15:21:28Z) - 3DMODT: Attention-Guided Affinities for Joint Detection & Tracking in 3D Point Clouds [95.54285993019843]
We propose a method for joint detection and tracking of multiple objects in 3D point clouds.
Our model exploits temporal information employing multiple frames to detect objects and track them in a single network.
arXiv Detail & Related papers (2022-11-01T20:59:38Z) - Efficient Spatial-Temporal Information Fusion for LiDAR-Based 3D Moving Object Segmentation [23.666607237164186]
We propose a novel deep neural network exploiting both spatial-temporal information and different representation modalities of LiDAR scans to improve LiDAR-MOS performance.
Specifically, we first use a range image-based dual-branch structure to separately deal with spatial and temporal information.
We also use a point refinement module via 3D sparse convolution to fuse the information from both LiDAR range image and point cloud representations.
arXiv Detail & Related papers (2022-07-05T17:59:17Z) - Boosting 3D Object Detection by Simulating Multimodality on Point Clouds [51.87740119160152]
This paper presents a new approach to boost a single-modality (LiDAR) 3D object detector by teaching it to simulate features and responses that follow a multi-modality (LiDAR-image) detector.
The approach needs LiDAR-image data only when training the single-modality detector, and once well-trained, it only needs LiDAR data at inference.
Experimental results on the nuScenes dataset show that our approach outperforms all SOTA LiDAR-only 3D detectors.
arXiv Detail & Related papers (2022-06-30T01:44:30Z) - Weighted Bayesian Gaussian Mixture Model for Roadside LiDAR Object Detection [0.5156484100374059]
Background modeling is widely used for intelligent surveillance systems to detect moving targets by subtracting the static background components.
Most roadside LiDAR object detection methods filter out foreground points by comparing new data points to pre-trained background references.
In this paper, we transform the raw LiDAR data into a structured representation based on the elevation and azimuth value of each LiDAR point.
The proposed method was compared against two state-of-the-art roadside LiDAR background models, a computer vision benchmark, and deep learning baselines, evaluated at the point, object, and path levels under heavy traffic and challenging weather.
arXiv Detail & Related papers (2022-04-20T22:48:05Z) - A two-stage data association approach for 3D Multi-object Tracking [0.0]
We adapt a two-stage data association method that was successful in image-based tracking to the 3D setting.
Our method outperforms the baseline, which uses one-stage bipartite matching for data association, achieving 0.587 AMOTA on the nuScenes validation set.
arXiv Detail & Related papers (2021-01-21T15:50:17Z) - InfoFocus: 3D Object Detection for Autonomous Driving with Dynamic Information Modeling [65.47126868838836]
We propose a novel 3D object detection framework with dynamic information modeling.
Coarse predictions are generated in the first stage via a voxel-based region proposal network.
Experiments are conducted on the large-scale nuScenes 3D detection benchmark.
arXiv Detail & Related papers (2020-07-16T18:27:08Z) - Range Conditioned Dilated Convolutions for Scale Invariant 3D Object Detection [41.59388513615775]
This paper presents a novel 3D object detection framework that processes LiDAR data directly on its native representation: range images.
Benefiting from the compactness of range images, 2D convolutions can efficiently process dense LiDAR data of a scene.
arXiv Detail & Related papers (2020-05-20T09:24:43Z) - DOPS: Learning to Detect 3D Objects and Predict their 3D Shapes [54.239416488865565]
We propose a fast single-stage 3D object detection method for LIDAR data.
The core novelty of our method is a fast, single-pass architecture that both detects objects in 3D and estimates their shapes.
We find that our proposed method achieves state-of-the-art results on object detection in ScanNet scenes by a 5% margin, and top results on the Open dataset by a 3.4% margin.
arXiv Detail & Related papers (2020-04-02T17:48:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.