Weighted Bayesian Gaussian Mixture Model for Roadside LiDAR Object
Detection
- URL: http://arxiv.org/abs/2204.09804v4
- Date: Fri, 24 Mar 2023 06:22:03 GMT
- Title: Weighted Bayesian Gaussian Mixture Model for Roadside LiDAR Object
Detection
- Authors: Tianya Zhang, Yi Ge, Peter J. Jin
- Abstract summary: Background modeling is widely used for intelligent surveillance systems to detect moving targets by subtracting the static background components.
Most roadside LiDAR object detection methods filter out foreground points by comparing new data points to pre-trained background references.
In this paper, we transform the raw LiDAR data into a structured representation based on the elevation and azimuth values of each LiDAR point.
The proposed method was compared against two state-of-the-art roadside LiDAR background models, a computer vision benchmark, and deep learning baselines, evaluated at the point, object, and path levels under heavy traffic and challenging weather.
- Score: 0.5156484100374059
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Background modeling is widely used for intelligent surveillance systems to
detect moving targets by subtracting the static background components. Most
roadside LiDAR object detection methods filter out foreground points by
comparing new data points to pre-trained background references based on
descriptive statistics over many frames (e.g., voxel density, number of
neighbors, maximum distance). However, these solutions are inefficient under
heavy traffic, and parameter values are hard to transfer from one scenario to
another. In early studies, the probabilistic background modeling methods widely
used for video-based systems were considered unsuitable for roadside LiDAR
surveillance systems due to the sparse and unstructured point cloud data. In
this paper, the raw LiDAR data were transformed into a structured
representation based on the elevation and azimuth values of each LiDAR point.
With this high-order tensor representation, we break the barrier to allow
efficient high-dimensional multivariate analysis for roadside LiDAR background
modeling. The Bayesian Nonparametric (BNP) approach integrates the intensity
values with the 3D measurements so that both sources of information are
exploited in full. The proposed method was compared against two
state-of-the-art roadside LiDAR background models, a computer vision
benchmark, and deep learning baselines, evaluated at the point, object, and
path levels under heavy traffic and challenging weather. This multimodal
Weighted Bayesian Gaussian Mixture Model (GMM) can handle dynamic backgrounds
with noisy measurements and substantially enhances infrastructure-based LiDAR
object detection, enabling a range of 3D modeling applications for smart
cities.
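For concreteness, below is a minimal Python sketch of the two ideas in the abstract: binning raw returns into an elevation-azimuth grid, and modeling each grid cell's background with a Dirichlet-process Bayesian Gaussian mixture over range and intensity. It is not the authors' implementation; scikit-learn's BayesianGaussianMixture stands in for the paper's weighted formulation, and the grid resolution, per-cell feature choice, and likelihood threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# --- Step 1: organize raw LiDAR returns into an (elevation, azimuth) grid ---
# Each return is (x, y, z, intensity). Binning by elevation and azimuth angle
# turns the sparse, unordered point cloud into a fixed-size structured tensor
# whose cells can be modeled independently across frames.
N_ELEV, N_AZIM = 32, 1800  # hypothetical channel / angular resolution

def to_grid_indices(points):
    """points: (N, 4) array of [x, y, z, intensity]; returns per-point
    elevation index, azimuth index, and range."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x ** 2 + y ** 2 + z ** 2) + 1e-9
    azim = np.arctan2(y, x)                   # [-pi, pi)
    elev = np.arcsin(z / r)                   # [-pi/2, pi/2]
    a_idx = ((azim + np.pi) / (2 * np.pi) * N_AZIM).astype(int) % N_AZIM
    e_idx = np.clip(((elev + np.pi / 2) / np.pi * N_ELEV).astype(int),
                    0, N_ELEV - 1)
    return e_idx, a_idx, r

# --- Step 2: per-cell Bayesian GMM over (range, intensity) ------------------
# A Dirichlet-process prior lets the mixture prune unused components, so the
# number of background modes in each cell need not be fixed in advance.
def fit_cell_background(samples, max_components=5):
    """samples: (n_frames, 2) array of [range, intensity] observed in one cell."""
    gmm = BayesianGaussianMixture(
        n_components=max_components,
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="full",
        max_iter=200,
        random_state=0,
    )
    return gmm.fit(samples)

def is_foreground(gmm, sample, log_density_threshold=-10.0):
    """Flag a new [range, intensity] observation as foreground when its
    log-likelihood under the background mixture is below the threshold."""
    return gmm.score_samples(sample.reshape(1, -1))[0] < log_density_threshold

# Synthetic demo for one cell: a stable road-surface return near 30 m, then a
# passing vehicle at 12 m with a brighter intensity.
rs = np.random.default_rng(0)
background = np.column_stack([rs.normal(30.0, 0.2, 500), rs.normal(8.0, 1.0, 500)])
model = fit_cell_background(background)
print(is_foreground(model, np.array([12.0, 40.0])))   # True  -> vehicle
print(is_foreground(model, np.array([30.1, 8.5])))    # False -> background
```

In practice the background mixture would be fit per grid cell over many frames and the threshold tuned per sensor and scene; the Dirichlet-process prior is what makes the effective number of background modes data-driven rather than hand-picked.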
Related papers
- Approaching Outside: Scaling Unsupervised 3D Object Detection from 2D Scene [22.297964850282177]
We propose LiDAR-2D Self-paced Learning (LiSe) for unsupervised 3D detection.
RGB images serve as a valuable complement to LiDAR data, offering precise 2D localization cues.
Our framework devises a self-paced learning pipeline that incorporates adaptive sampling and weak model aggregation strategies.
arXiv Detail & Related papers (2024-07-11T14:58:49Z)
- VFMM3D: Releasing the Potential of Image by Vision Foundation Model for Monocular 3D Object Detection [80.62052650370416]
Monocular 3D object detection holds significant importance across various applications, including autonomous driving and robotics.
In this paper, we present VFMM3D, an innovative framework that leverages the capabilities of Vision Foundation Models (VFMs) to accurately transform single-view images into LiDAR point cloud representations.
arXiv Detail & Related papers (2024-04-15T03:12:12Z)
- OccNeRF: Advancing 3D Occupancy Prediction in LiDAR-Free Environments [77.0399450848749]
We propose an OccNeRF method for training occupancy networks without 3D supervision.
We parameterize the reconstructed occupancy fields and reorganize the sampling strategy to align with the cameras' infinite perceptive range.
For semantic occupancy prediction, we design several strategies to polish the prompts and filter the outputs of a pretrained open-vocabulary 2D segmentation model.
arXiv Detail & Related papers (2023-12-14T18:58:52Z)
- LiDAR Data Synthesis with Denoising Diffusion Probabilistic Models [1.1965844936801797]
Generative modeling of 3D LiDAR data is an emerging task with promising applications for autonomous mobile robots.
We present R2DM, a novel generative model for LiDAR data that can generate diverse and high-fidelity 3D scene point clouds.
Our method is built upon denoising diffusion probabilistic models (DDPMs), which have shown impressive results among generative model frameworks.
arXiv Detail & Related papers (2023-09-17T12:26:57Z)
- Optimal Transport for Change Detection on LiDAR Point Clouds [16.552050876277242]
Unsupervised change detection between airborne LiDAR data points is difficult due to mismatched spatial support and noise from the acquisition system.
We propose an unsupervised approach based on the computation of the transport of 3D LiDAR points over two temporal supports.
Our method allows for unsupervised multi-class classification and outperforms the previous state-of-the-art unsupervised approaches by a significant margin.
arXiv Detail & Related papers (2023-02-14T13:08:07Z)
- Boosting 3D Object Detection by Simulating Multimodality on Point Clouds [51.87740119160152]
This paper presents a new approach to boost a single-modality (LiDAR) 3D object detector by teaching it to simulate features and responses that follow a multi-modality (LiDAR-image) detector.
The approach needs LiDAR-image data only when training the single-modality detector, and once well-trained, it only needs LiDAR data at inference.
Experimental results on the nuScenes dataset show that our approach outperforms all SOTA LiDAR-only 3D detectors.
arXiv Detail & Related papers (2022-06-30T01:44:30Z)
- Dense Voxel Fusion for 3D Object Detection [10.717415797194896]
Dense Voxel Fusion (DVF) is a sequential fusion method that generates multi-scale dense voxel feature representations.
We train directly with ground truth 2D bounding box labels, avoiding noisy, detector-specific, 2D predictions.
We show that our proposed multi-modal training strategy results in better generalization compared to training using erroneous 2D predictions.
arXiv Detail & Related papers (2022-03-02T04:51:31Z)
- MonoDistill: Learning Spatial Features for Monocular 3D Object Detection [80.74622486604886]
We propose a simple and effective scheme to introduce the spatial information from LiDAR signals to the monocular 3D detectors.
We use the resulting data to train a 3D detector with the same architecture as the baseline model.
Experimental results show that the proposed method can significantly boost the performance of the baseline model.
arXiv Detail & Related papers (2022-01-26T09:21:41Z)
- Roadside Lidar Vehicle Detection and Tracking Using Range And Intensity Background Subtraction [0.0]
We present the solution of roadside LiDAR object detection using a combination of two unsupervised learning algorithms.
The method was validated against a commercial traffic data collection platform.
arXiv Detail & Related papers (2022-01-13T00:54:43Z)
- Depth-conditioned Dynamic Message Propagation for Monocular 3D Object Detection [86.25022248968908]
We learn context- and depth-aware feature representation to solve the problem of monocular 3D object detection.
We show state-of-the-art results among the monocular-based approaches on the KITTI benchmark dataset.
arXiv Detail & Related papers (2021-03-30T16:20:24Z)
- SelfVoxeLO: Self-supervised LiDAR Odometry with Voxel-based Deep Neural Networks [81.64530401885476]
We propose a self-supervised LiDAR odometry method, dubbed SelfVoxeLO, to tackle these two difficulties.
Specifically, we propose a 3D convolution network to process the raw LiDAR data directly, which extracts features that better encode the 3D geometric patterns.
We evaluate our method's performances on two large-scale datasets, i.e., KITTI and Apollo-SouthBay.
arXiv Detail & Related papers (2020-10-19T09:23:39Z)