A Fully Interpretable Statistical Approach for Roadside LiDAR Background Subtraction
- URL: http://arxiv.org/abs/2510.22390v1
- Date: Sat, 25 Oct 2025 18:18:10 GMT
- Title: A Fully Interpretable Statistical Approach for Roadside LiDAR Background Subtraction
- Authors: Aitor Iglesias, Nerea Aranjuelo, Patricia Javierre, Ainhoa Menendez, Ignacio Arganda-Carreras, Marcos Nieto
- Abstract summary: We present a fully interpretable and flexible statistical method for background subtraction in roadside LiDAR data. The method supports diverse LiDAR types, including multiline 360 degree and micro-electro-mechanical systems (MEMS) sensors. It outperforms state-of-the-art techniques in accuracy and flexibility, even with minimal background data.
- Score: 3.9354551232038
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a fully interpretable and flexible statistical method for background subtraction in roadside LiDAR data, aimed at enhancing infrastructure-based perception in automated driving. Our approach introduces both a Gaussian distribution grid (GDG), which models the spatial statistics of the background using background-only scans, and a filtering algorithm that uses this representation to classify LiDAR points as foreground or background. The method supports diverse LiDAR types, including multiline 360 degree and micro-electro-mechanical systems (MEMS) sensors, and adapts to various configurations. Evaluated on the publicly available RCooper dataset, it outperforms state-of-the-art techniques in accuracy and flexibility, even with minimal background data. Its efficient implementation ensures reliable performance on low-resource hardware, enabling scalable real-world deployment.
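The abstract describes two components: a Gaussian distribution grid (GDG) fitted from background-only scans, and a filter that flags points deviating from the per-cell statistics. A minimal sketch of that idea follows. This is not the authors' implementation: the grid resolution (`az_bins`, `el_bins`), the deviation threshold `k`, the range-based statistic, and all function names are illustrative assumptions.

```python
import math
from collections import defaultdict

def cell_key(x, y, z, az_bins=360, el_bins=32):
    """Map a Cartesian point to an (azimuth, elevation) grid cell."""
    r = math.sqrt(x * x + y * y + z * z)
    az = math.atan2(y, x)                      # [-pi, pi]
    el = math.asin(z / r) if r > 0 else 0.0    # [-pi/2, pi/2]
    ai = int((az + math.pi) / (2 * math.pi) * az_bins) % az_bins
    ei = min(int((el + math.pi / 2) / math.pi * el_bins), el_bins - 1)
    return ai, ei

def fit_background(scans, az_bins=360, el_bins=32):
    """Fit a per-cell Gaussian of measured range over background-only scans."""
    ranges = defaultdict(list)
    for scan in scans:
        for x, y, z in scan:
            key = cell_key(x, y, z, az_bins, el_bins)
            ranges[key].append(math.sqrt(x * x + y * y + z * z))
    model = {}
    for key, rs in ranges.items():
        mu = sum(rs) / len(rs)
        var = sum((r - mu) ** 2 for r in rs) / len(rs)
        model[key] = (mu, math.sqrt(var))
    return model

def is_foreground(model, point, k=3.0, min_sigma=0.05):
    """Flag a point whose range deviates > k sigma from its cell's mean."""
    x, y, z = point
    key = cell_key(x, y, z)
    if key not in model:
        return True  # unseen cell: conservatively treat as foreground
    mu, sigma = model[key]
    r = math.sqrt(x * x + y * y + z * z)
    return abs(r - mu) > k * max(sigma, min_sigma)
```

The `min_sigma` floor prevents near-noiseless cells from flagging every return; whether the actual GDG uses such a floor, or range as its statistic, is an assumption here.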
Related papers
- Which LiDAR scanning pattern is better for roadside perception: Repetitive or Non-repetitive? [14.082785631325928]
"InfraLiDARs' Benchmark" is a novel dataset meticulously collected in the CARLA simulation environment using concurrently operating infrastructure-based LiDARs. Our findings reveal that the non-repetitive scanning LiDAR and the 128-line repetitive LiDAR exhibit comparable detection performance across various scenarios.
arXiv Detail & Related papers (2025-10-28T20:50:56Z)
- Scaling Up Occupancy-centric Driving Scene Generation: Dataset and Method [54.461213497603154]
Occupancy-centric methods have recently achieved state-of-the-art results by offering consistent conditioning across frames and modalities. Nuplan-Occ is the largest occupancy dataset to date, constructed from the widely used Nuplan benchmark. We develop a unified framework that jointly synthesizes high-quality occupancy, multi-view videos, and LiDAR point clouds.
arXiv Detail & Related papers (2025-10-27T03:52:45Z)
- End-to-End Crop Row Navigation via LiDAR-Based Deep Reinforcement Learning [0.4588028371034407]
We present an end-to-end learning-based navigation system that maps raw 3D LiDAR data directly to control commands using a deep reinforcement learning policy trained entirely in simulation. Our method includes a voxel-based downsampling strategy that reduces LiDAR input size by 95.83%, enabling efficient policy learning without relying on labeled datasets or manually designed control interfaces.
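Voxel-based downsampling of the kind this summary mentions can be sketched as follows; the voxel size and centroid aggregation are generic assumptions, not the configuration behind the paper's reported 95.83% reduction.

```python
def voxel_downsample(points, voxel=0.5):
    """Replace all points in each occupied voxel of side `voxel` metres
    with their centroid, shrinking the cloud while keeping its shape."""
    buckets = {}
    for x, y, z in points:
        key = (int(x // voxel), int(y // voxel), int(z // voxel))
        sx, sy, sz, n = buckets.get(key, (0.0, 0.0, 0.0, 0))
        buckets[key] = (sx + x, sy + y, sz + z, n + 1)
    return [(sx / n, sy / n, sz / n) for sx, sy, sz, n in buckets.values()]
```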
arXiv Detail & Related papers (2025-09-23T03:56:10Z)
- LIR-LIVO: A Lightweight, Robust LiDAR/Vision/Inertial Odometry with Illumination-Resilient Deep Features [8.095827028713684]
The proposed method leverages deep learning-based illumination-resilient features and LiDAR-Inertial-Visual Odometry (LIVO). LIR-LIVO achieves state-of-the-art (SOTA) accuracy and robustness with low computational cost.
arXiv Detail & Related papers (2025-02-12T05:28:10Z)
- LiDAR-GS: Real-time LiDAR Re-Simulation using Gaussian Splatting [53.58528891081709]
We present LiDAR-GS, a real-time, high-fidelity re-simulation of LiDAR scans in public urban road scenes. The method achieves state-of-the-art results in both rendering frame rate and quality on publicly available large scene datasets.
arXiv Detail & Related papers (2024-10-07T15:07:56Z)
- Multi-Modal Data-Efficient 3D Scene Understanding for Autonomous Driving [58.16024314532443]
We introduce LaserMix++, a framework that integrates laser beam manipulations from disparate LiDAR scans and incorporates LiDAR-camera correspondences to assist data-efficient learning. Results demonstrate that LaserMix++ outperforms fully supervised alternatives, achieving comparable accuracy with five times fewer annotations. This substantial advancement underscores the potential of semi-supervised approaches in reducing the reliance on extensive labeled data in LiDAR-based 3D scene understanding systems.
arXiv Detail & Related papers (2024-05-08T17:59:53Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- MV-JAR: Masked Voxel Jigsaw and Reconstruction for LiDAR-Based Self-Supervised Pre-Training [58.07391711548269]
Masked Voxel Jigsaw and Reconstruction (MV-JAR) is a method for LiDAR-based self-supervised pre-training.
arXiv Detail & Related papers (2023-03-23T17:59:02Z)
- Gait Recognition in Large-scale Free Environment via Single LiDAR [35.684257181154905]
LiDAR's ability to capture depth makes it pivotal for robotic perception and holds promise for real-world gait recognition.
We present the Hierarchical Multi-representation Feature Interaction Network (HMRNet) for robust gait recognition.
To facilitate LiDAR-based gait recognition research, we introduce FreeGait, a comprehensive gait dataset from large-scale, unconstrained settings.
arXiv Detail & Related papers (2022-11-22T16:05:58Z)
- Weighted Bayesian Gaussian Mixture Model for Roadside LiDAR Object Detection [0.5156484100374059]
Background modeling is widely used for intelligent surveillance systems to detect moving targets by subtracting the static background components.
Most roadside LiDAR object detection methods filter out foreground points by comparing new data points to pre-trained background references.
In this paper, we transform the raw LiDAR data into a structured representation based on the elevation and azimuth value of each LiDAR point.
The proposed method was compared against two state-of-the-art roadside LiDAR background models, a computer vision benchmark, and deep learning baselines, and was evaluated at the point, object, and path levels under heavy traffic and challenging weather.
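The elevation/azimuth structuring this summary describes resembles a standard range-image projection of a LiDAR scan. A minimal sketch follows; the bin counts, elevation field-of-view limits, and the nearest-return rule are illustrative assumptions, not the paper's actual parameters.

```python
import math

def to_range_image(points, az_bins=180, el_bins=16,
                   el_min=-math.pi / 8, el_max=math.pi / 8):
    """Project Cartesian points into an (elevation x azimuth) grid of ranges,
    keeping the nearest return per cell; empty cells stay None."""
    grid = [[None] * az_bins for _ in range(el_bins)]
    for x, y, z in points:
        r = math.sqrt(x * x + y * y + z * z)
        if r == 0:
            continue
        az = math.atan2(y, x)
        el = math.asin(z / r)
        if not (el_min <= el < el_max):
            continue  # outside the assumed vertical field of view
        ai = int((az + math.pi) / (2 * math.pi) * az_bins) % az_bins
        ei = int((el - el_min) / (el_max - el_min) * el_bins)
        if grid[ei][ai] is None or r < grid[ei][ai]:
            grid[ei][ai] = r
    return grid
```

Structuring the cloud this way lets per-cell background references be compared against new scans by simple array indexing rather than nearest-neighbour search.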
arXiv Detail & Related papers (2022-04-20T22:48:05Z)
- Aerial Images Meet Crowdsourced Trajectories: A New Approach to Robust Road Extraction [110.61383502442598]
We introduce a novel neural network framework termed the Cross-Modal Message Propagation Network (CMMPNet).
CMMPNet is composed of two deep Auto-Encoders for modality-specific representation learning and a tailor-designed Dual Enhancement Module for cross-modal representation refinement.
Experiments on three real-world benchmarks demonstrate the effectiveness of our CMMPNet for robust road extraction.
arXiv Detail & Related papers (2021-11-30T04:30:10Z)
- Unsupervised Domain Adaptation for LiDAR Panoptic Segmentation [5.745037250837124]
Unsupervised Domain Adaptation (UDA) techniques are essential to fill the domain gap between training and deployment data.
We propose AdaptLPS, a novel UDA approach for LiDAR panoptic segmentation.
We show that AdaptLPS outperforms existing UDA approaches by up to 6.41 pp in terms of the PQ score.
arXiv Detail & Related papers (2021-09-30T17:30:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.