Efficient LiDAR Reflectance Compression via Scanning Serialization
- URL: http://arxiv.org/abs/2505.09433v2
- Date: Tue, 27 May 2025 05:54:18 GMT
- Title: Efficient LiDAR Reflectance Compression via Scanning Serialization
- Authors: Jiahao Zhu, Kang You, Dandan Ding, Zhan Ma
- Abstract summary: SerLiC is a serialization-based neural compression framework for reflectance analysis. It attains over 2x volume reduction against the original reflectance data. A lightweight version of SerLiC achieves > 10 fps (frames per second) with just 111K parameters.
- Score: 19.257711579886006
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reflectance attributes in LiDAR point clouds provide essential information for downstream tasks but remain underexplored in neural compression methods. To address this, we introduce SerLiC, a serialization-based neural compression framework to fully exploit the intrinsic characteristics of LiDAR reflectance. SerLiC first transforms 3D LiDAR point clouds into 1D sequences via scan-order serialization, offering a device-centric perspective for reflectance analysis. Each point is then tokenized into a contextual representation comprising its sensor scanning index, radial distance, and prior reflectance, for effective dependency exploration. For efficient sequential modeling, Mamba is incorporated with a dual parallelization scheme, enabling simultaneous autoregressive dependency capture and fast processing. Extensive experiments demonstrate that SerLiC attains over 2x volume reduction against the original reflectance data, outperforming the state-of-the-art method by up to 22% reduction of compressed bits while using only 2% of its parameters. Moreover, a lightweight version of SerLiC achieves > 10 fps (frames per second) with just 111K parameters, which is attractive for real-world applications.
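The abstract's preprocessing pipeline (scan-order serialization, then per-point context tokens of scanning index, radial distance, and prior reflectance) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the field names, the ring-then-azimuth ordering, and the zero-initialized prior reflectance are all assumptions.

```python
import numpy as np

def serialize_and_tokenize(ring, azimuth, radius, reflectance):
    """Sketch of SerLiC-style preprocessing (ordering and fields assumed):
    1) serialize points into a 1D scan order (laser ring, then azimuth),
       approximating the sensor's sweep;
    2) build one context token per point from (scanning index, radial
       distance, prior reflectance) for an autoregressive sequence model."""
    # lexsort: last key (ring) is primary, azimuth breaks ties within a ring
    order = np.lexsort((azimuth, ring))
    refl = reflectance[order]
    # causal context: each token sees only the previous point's reflectance
    prior = np.concatenate(([0.0], refl[:-1]))
    tokens = np.stack([ring[order].astype(float),  # sensor scanning index
                       radius[order],              # radial distance
                       prior], axis=1)             # prior reflectance
    return tokens, refl  # tokens condition the model; refl is the target
```

In a compression setting, the tokens would condition an entropy model over the reflectance sequence, so decoding can reproduce the same context stream point by point.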
Related papers
- Adaptive LiDAR Scanning: Harnessing Temporal Cues for Efficient 3D Object Detection via Multi-Modal Fusion [11.351728925952193]
Conventional LiDAR sensors perform dense, stateless scans, ignoring the strong temporal continuity in real-world scenes. We propose a predictive, history-aware adaptive scanning framework that anticipates informative regions of interest based on past observations. Our method significantly reduces unnecessary data acquisition by concentrating dense LiDAR scanning only within these ROIs and sparsely sampling elsewhere.
arXiv Detail & Related papers (2025-08-03T03:20:36Z) - RE-TRIP : Reflectivity Instance Augmented Triangle Descriptor for 3D Place Recognition [14.095215136905553]
We propose a novel descriptor for 3D Place Recognition, named RE-TRIP. This new descriptor leverages both geometric measurements and reflectivity to enhance robustness. We conduct a series of experiments to demonstrate the effectiveness of RE-TRIP.
arXiv Detail & Related papers (2025-05-22T03:11:30Z) - Targetless 6DoF Calibration of LiDAR and 2D Scanning Radar Based on Cylindrical Occupancy [8.895838973148452]
LiRaCo is a targetless approach for the extrinsic 6DoF calibration of LiDAR and Radar sensors. LiRaCo leverages spatial occupancy consistency between LiDAR point clouds and Radar scans in a common cylindrical representation. A cost function involving the extrinsic calibration parameters is formulated based on the spatial overlap of 3D grids and LiDAR points.
arXiv Detail & Related papers (2025-03-21T10:09:04Z) - LiDAR-RT: Gaussian-based Ray Tracing for Dynamic LiDAR Re-simulation [31.79143254487969]
LiDAR-RT is a novel framework that supports real-time, physically accurate LiDAR re-simulation for driving scenes. Our primary contribution is the development of an efficient and effective rendering pipeline. Our framework supports realistic rendering with flexible scene editing operations and various sensor configurations.
arXiv Detail & Related papers (2024-12-19T18:58:36Z) - LiDAR-GS: Real-time LiDAR Re-Simulation using Gaussian Splatting [50.808933338389686]
We present LiDAR-GS, a real-time, high-fidelity re-simulation of LiDAR scans in public urban road scenes. The method achieves state-of-the-art results in both rendering frame rate and quality on publicly available large scene datasets.
arXiv Detail & Related papers (2024-10-07T15:07:56Z) - Neural LiDAR Fields for Novel View Synthesis [80.45307792404685]
We present Neural Fields for LiDAR (NFL), a method to optimise a neural field scene representation from LiDAR measurements.
NFL combines the rendering power of neural fields with a detailed, physically motivated model of the LiDAR sensing process.
We show that the improved realism of the synthesized views narrows the domain gap to real scans and translates to better registration and semantic segmentation performance.
arXiv Detail & Related papers (2023-05-02T17:55:38Z) - Gait Recognition in Large-scale Free Environment via Single LiDAR [35.684257181154905]
LiDAR's ability to capture depth makes it pivotal for robotic perception and holds promise for real-world gait recognition.
We present the Hierarchical Multi-representation Feature Interaction Network (HMRNet) for robust gait recognition.
To facilitate LiDAR-based gait recognition research, we introduce FreeGait, a comprehensive gait dataset from large-scale, unconstrained settings.
arXiv Detail & Related papers (2022-11-22T16:05:58Z) - Learning to Simulate Realistic LiDARs [66.7519667383175]
We introduce a pipeline for data-driven simulation of a realistic LiDAR sensor.
We show that our model can learn to encode realistic effects such as dropped points on transparent surfaces.
We use our technique to learn models of two distinct LiDAR sensors and use them to improve simulated LiDAR data accordingly.
arXiv Detail & Related papers (2022-09-22T13:12:54Z) - Boosting 3D Object Detection by Simulating Multimodality on Point Clouds [51.87740119160152]
This paper presents a new approach to boost a single-modality (LiDAR) 3D object detector by teaching it to simulate features and responses that follow a multi-modality (LiDAR-image) detector.
The approach needs LiDAR-image data only when training the single-modality detector, and once well-trained, it only needs LiDAR data at inference.
Experimental results on the nuScenes dataset show that our approach outperforms all SOTA LiDAR-only 3D detectors.
arXiv Detail & Related papers (2022-06-30T01:44:30Z) - Neural BRDF Representation and Importance Sampling [79.84316447473873]
We present a compact neural network-based representation of reflectance BRDF data.
We encode BRDFs as lightweight networks, and propose a training scheme with adaptive angular sampling.
We evaluate encoding results on isotropic and anisotropic BRDFs from multiple real-world datasets.
arXiv Detail & Related papers (2021-02-11T12:00:24Z) - Two-Stage Single Image Reflection Removal with Reflection-Aware Guidance [78.34235841168031]
We present a novel two-stage network with reflection-aware guidance (RAGNet) for single image reflection removal (SIRR).
RAG can be used (i) to mitigate the effect of reflection from the observation, and (ii) to generate a mask in partial convolution for mitigating the effect of deviating from the linear combination hypothesis.
Experiments on five commonly used datasets demonstrate the quantitative and qualitative superiority of our RAGNet in comparison to the state-of-the-art SIRR methods.
arXiv Detail & Related papers (2020-12-02T03:14:57Z) - Scan-based Semantic Segmentation of LiDAR Point Clouds: An Experimental Study [2.6205925938720833]
State of the art methods use deep neural networks to predict semantic classes for each point in a LiDAR scan.
A powerful and efficient way to process LiDAR measurements is to use two-dimensional, image-like projections.
We demonstrate various techniques to boost the performance and to improve runtime as well as memory constraints.
arXiv Detail & Related papers (2020-04-06T11:08:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.