A Comprehensive Survey on Deep Learning-Based LiDAR Super-Resolution for Autonomous Driving
- URL: http://arxiv.org/abs/2602.15904v1
- Date: Sun, 15 Feb 2026 22:34:28 GMT
- Title: A Comprehensive Survey on Deep Learning-Based LiDAR Super-Resolution for Autonomous Driving
- Authors: June Moh Goo, Zichao Zeng, Jan Boehm,
- Abstract summary: This paper presents the first comprehensive survey of LiDAR super-resolution methods for autonomous driving. We organize existing approaches into four categories: CNN-based architectures, model-based deep unrolling, implicit representation methods, and Transformer and Mamba-based approaches. Current trends include the adoption of range image representation for efficient processing, extreme model compression and the development of resolution-flexible architectures.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: LiDAR sensors are often considered essential for autonomous driving, but high-resolution sensors remain expensive while affordable low-resolution sensors produce sparse point clouds that miss critical details. LiDAR super-resolution addresses this challenge by using deep learning to enhance sparse point clouds, bridging the gap between different sensor types and enabling cross-sensor compatibility in real-world deployments. This paper presents the first comprehensive survey of LiDAR super-resolution methods for autonomous driving. Despite the importance of practical deployment, no systematic review has been conducted until now. We organize existing approaches into four categories: CNN-based architectures, model-based deep unrolling, implicit representation methods, and Transformer and Mamba-based approaches. We establish fundamental concepts including data representations, problem formulation, benchmark datasets and evaluation metrics. Current trends include the adoption of range image representation for efficient processing, extreme model compression and the development of resolution-flexible architectures. Recent research prioritizes real-time inference and cross-sensor generalization for practical deployment. We conclude by identifying open challenges and future research directions for advancing LiDAR super-resolution technology.
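The abstract names range image representation as a key trend for efficient processing: the 3D point cloud is projected onto a 2D grid indexed by azimuth and elevation, so image-style networks can process it. As an illustration only (not taken from the surveyed methods), here is a minimal numpy sketch of such a spherical projection, assuming a hypothetical 16-beam sensor with a symmetric ±15° vertical field of view; all parameter names and defaults are assumptions for the example.

```python
import numpy as np

def points_to_range_image(points, h=16, w=1024, fov_up=15.0, fov_down=-15.0):
    """Project an (N, 3) LiDAR point cloud onto an h x w range image.

    Each pixel stores the range (distance) of the point that lands in it;
    empty pixels stay 0. h corresponds to the number of laser beams;
    the field-of-view values are illustrative, not from any specific sensor.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)

    yaw = np.arctan2(y, x)  # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))

    fov_up_rad = np.deg2rad(fov_up)
    fov_total = np.deg2rad(fov_up - fov_down)

    # Map angles to pixel coordinates: columns span azimuth, rows span elevation.
    u = 0.5 * (1.0 - yaw / np.pi) * w
    v = (fov_up_rad - pitch) / fov_total * h

    u = np.clip(np.floor(u), 0, w - 1).astype(np.int64)
    v = np.clip(np.floor(v), 0, h - 1).astype(np.int64)

    img = np.zeros((h, w), dtype=np.float32)
    # If several points fall into one pixel, keep the closest return:
    # sort far-to-near so nearer points are written last and win.
    order = np.argsort(-r)
    img[v[order], u[order]] = r[order]
    return img
```

Super-resolution methods then upsample this 2D grid (e.g. 16 rows to 64 rows) with image networks, which is far cheaper than operating on raw 3D points, before reprojecting back to a point cloud.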
Related papers
- Forging Spatial Intelligence: A Roadmap of Multi-Modal Data Pre-Training for Autonomous Systems [75.78934957242403]
Self-driving vehicles and drones require true Spatial Intelligence from multi-modal onboard sensor data. This paper presents a framework for multi-modal pre-training, identifying the core set of techniques driving progress toward this goal.
arXiv Detail & Related papers (2025-12-30T17:58:01Z)
- Can Foundation Models Revolutionize Mobile AR Sparse Sensing? [2.984076446975729]
We investigate whether foundation models can change the landscape of mobile sparse sensing. Using real-world mobile AR data, our evaluations demonstrate that foundation models offer significant improvements in geometry-aware image warping. Our study demonstrates the scalability of foundation model-based sparse sensing and shows its leading performance in 3D scene reconstruction.
arXiv Detail & Related papers (2025-11-04T03:06:51Z)
- Rethinking Evaluation of Infrared Small Target Detection [105.59753496831739]
This paper introduces a hybrid-level metric incorporating pixel- and target-level performance, proposing a systematic error analysis method, and emphasizing the importance of cross-dataset evaluation. An open-source toolkit has been released to facilitate standardized benchmarking.
arXiv Detail & Related papers (2025-09-21T02:45:07Z)
- Towards Depth Foundation Model: Recent Trends in Vision-Based Depth Estimation [96.1872246747684]
Depth estimation is a fundamental task in 3D computer vision, crucial for applications such as 3D reconstruction, free-viewpoint rendering, robotics, autonomous driving, and AR/VR technologies. Traditional methods relying on hardware sensors like LiDAR are often limited by high costs, low resolution, and environmental sensitivity, restricting their applicability in real-world scenarios. Recent advances in vision-based methods offer a promising alternative, yet they face challenges in generalization and stability due to either low-capacity model architectures or reliance on domain-specific, small-scale datasets.
arXiv Detail & Related papers (2025-07-15T17:59:59Z)
- Real Time Semantic Segmentation of High Resolution Automotive LiDAR Scans [6.113534791361164]
This study introduces a novel semantic segmentation framework tailored for modern high-resolution LiDAR sensors. We present a novel LiDAR dataset collected by a cutting-edge automotive 128-layer LiDAR in urban traffic scenes. Our approach bridges the gap between cutting-edge research and practical automotive applications.
arXiv Detail & Related papers (2025-04-30T13:00:50Z)
- Small Object Detection: A Comprehensive Survey on Challenges, Techniques and Real-World Applications [0.15705429611931052]
Small object detection (SOD) is a critical yet challenging task in computer vision. Recent advancements in deep learning have introduced innovative solutions. Emerging trends such as lightweight neural networks, knowledge distillation (KD), and self-supervised learning offer promising directions for improving detection efficiency.
arXiv Detail & Related papers (2025-03-26T12:58:13Z)
- What Really Matters for Learning-based LiDAR-Camera Calibration [50.2608502974106]
This paper revisits the development of learning-based LiDAR-Camera calibration. We identify the critical limitations of regression-based methods with the widely used data generation pipeline. We also investigate how the input data format and preprocessing operations impact network performance.
arXiv Detail & Related papers (2025-01-28T14:12:32Z)
- Efficient and Robust LiDAR-Based End-to-End Navigation [132.52661670308606]
We present an efficient and robust LiDAR-based end-to-end navigation framework.
We propose Fast-LiDARNet, which is based on sparse convolution kernel optimization and hardware-aware model design.
We then propose Hybrid Evidential Fusion that directly estimates the uncertainty of the prediction from only a single forward pass.
arXiv Detail & Related papers (2021-05-20T17:52:37Z)
- Learning Selective Sensor Fusion for States Estimation [47.76590539558037]
We propose SelectFusion, an end-to-end selective sensor fusion module.
During prediction, the network is able to assess the reliability of the latent features from different sensor modalities.
We extensively evaluate all fusion strategies in both public datasets and on progressively degraded datasets.
arXiv Detail & Related papers (2019-12-30T20:25:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences arising from its use.