Guided Model-based LiDAR Super-Resolution for Resource-Efficient Automotive Scene Segmentation
- URL: http://arxiv.org/abs/2509.01317v1
- Date: Mon, 01 Sep 2025 10:01:40 GMT
- Title: Guided Model-based LiDAR Super-Resolution for Resource-Efficient Automotive Scene Segmentation
- Authors: Alexandros Gkillas, Nikos Piperigkos, Aris S. Lalos
- Abstract summary: High-resolution LiDAR data plays a critical role in 3D semantic segmentation for autonomous driving. In contrast, low-cost sensors such as 16-channel LiDAR produce sparse point clouds that degrade segmentation accuracy. We introduce the first end-to-end framework that jointly addresses LiDAR super-resolution (SR) and semantic segmentation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: High-resolution LiDAR data plays a critical role in 3D semantic segmentation for autonomous driving, but the high cost of advanced sensors limits large-scale deployment. In contrast, low-cost sensors such as 16-channel LiDAR produce sparse point clouds that degrade segmentation accuracy. To overcome this, we introduce the first end-to-end framework that jointly addresses LiDAR super-resolution (SR) and semantic segmentation. The framework employs joint optimization during training, allowing the SR module to incorporate semantic cues and preserve fine details, particularly for smaller object classes. A new SR loss function further directs the network to focus on regions of interest. The proposed lightweight, model-based SR architecture uses significantly fewer parameters than existing LiDAR SR approaches, while remaining easily compatible with segmentation networks. Experiments show that our method achieves segmentation performance comparable to models operating on high-resolution and costly 64-channel LiDAR data.
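The abstract describes two mechanisms: an SR loss that focuses on regions of interest, and joint optimization of the SR and segmentation objectives. A minimal sketch of how such a scheme could look is given below; the function names, the ROI up-weighting, and the mixing coefficient `alpha` are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def region_weighted_sr_loss(sr_pred, hr_target, roi_mask, roi_weight=4.0):
    """Hypothetical SR loss that up-weights squared error inside regions
    of interest (e.g. pixels belonging to small object classes)."""
    weights = np.where(roi_mask, roi_weight, 1.0)
    sq_err = (sr_pred - hr_target) ** 2
    return float((weights * sq_err).sum() / weights.sum())

def joint_loss(sr_loss, seg_loss, alpha=0.5):
    """Weighted sum for joint SR + segmentation optimization;
    alpha balances the two objectives (its value here is an assumption)."""
    return alpha * sr_loss + (1.0 - alpha) * seg_loss

# Toy 2x2 range images: the single erroneous pixel falls inside the ROI.
sr = np.array([[0.0, 1.0], [2.0, 3.0]])
hr = np.array([[0.0, 0.0], [2.0, 3.0]])
roi = np.array([[False, True], [False, False]])

loss_roi = region_weighted_sr_loss(sr, hr, roi)
loss_plain = region_weighted_sr_loss(sr, hr, np.zeros_like(roi))
print(loss_roi, loss_plain)  # the ROI-weighted loss penalizes the error more
```

With the toy inputs above, the ROI-weighted loss (4/7) exceeds the unweighted one (0.25), illustrating how such a term would steer the SR network toward preserving fine detail on small-object regions.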
Related papers
- Real Time Semantic Segmentation of High Resolution Automotive LiDAR Scans [1.6093159644587223]
This study introduces a novel semantic segmentation framework tailored for modern high-resolution LiDAR sensors. We propose a novel LiDAR dataset collected by a cutting-edge automotive 128-layer LiDAR in urban traffic scenes. Our approach bridges the gap between cutting-edge research and practical automotive applications.
arXiv Detail & Related papers (2025-04-30T13:00:50Z)
- LiDAR-GS: Real-time LiDAR Re-Simulation using Gaussian Splatting [53.58528891081709]
We present LiDAR-GS, a real-time, high-fidelity re-simulation of LiDAR scans in public urban road scenes. The method achieves state-of-the-art results in both rendering frame rate and quality on publicly available large scene datasets.
arXiv Detail & Related papers (2024-10-07T15:07:56Z)
- From One to the Power of Many: Invariance to Multi-LiDAR Perception from Single-Sensor Datasets [12.712896458348515]
We introduce a new metric for feature-level invariance which can serve as a proxy to measure cross-domain generalization without requiring labeled data. We propose two application-specific data augmentations, which facilitate better transfer to multi-sensor LiDAR setups when trained on single-sensor datasets.
arXiv Detail & Related papers (2024-09-27T09:51:45Z)
- Multi-Modal Data-Efficient 3D Scene Understanding for Autonomous Driving [58.16024314532443]
We introduce LaserMix++, a framework that integrates laser beam manipulations from disparate LiDAR scans and incorporates LiDAR-camera correspondences to assist data-efficient learning. Results demonstrate that LaserMix++ outperforms fully supervised alternatives, achieving comparable accuracy with five times fewer annotations. This substantial advancement underscores the potential of semi-supervised approaches in reducing the reliance on extensive labeled data in LiDAR-based 3D scene understanding systems.
arXiv Detail & Related papers (2024-05-08T17:59:53Z)
- Low-Rank Representations Meets Deep Unfolding: A Generalized and Interpretable Network for Hyperspectral Anomaly Detection [41.50904949744355]
Current hyperspectral anomaly detection (HAD) benchmark datasets suffer from low resolution, simple background, and small size of the detection data.
These factors also limit the performance of the well-known low-rank representation (LRR) models in terms of robustness.
We build a new set of HAD benchmark datasets, AIR-HAD for short, to improve the robustness of HAD algorithms in complex scenarios.
arXiv Detail & Related papers (2024-02-23T14:15:58Z)
- Benchmarking the Robustness of LiDAR Semantic Segmentation Models [78.6597530416523]
In this paper, we aim to comprehensively analyze the robustness of LiDAR semantic segmentation models under various corruptions.
We propose a new benchmark called SemanticKITTI-C, which features 16 out-of-domain LiDAR corruptions in three groups, namely adverse weather, measurement noise and cross-device discrepancy.
We design a robust LiDAR segmentation model (RLSeg) which greatly boosts the robustness with simple but effective modifications.
arXiv Detail & Related papers (2023-01-03T06:47:31Z)
- EfficientLPS: Efficient LiDAR Panoptic Segmentation [30.249379810530165]
We present the novel Efficient LiDAR Panoptic architecture that addresses multiple challenges in segmenting LiDAR point clouds.
EfficientLPS comprises a novel shared backbone that encodes with strengthened geometric transformation modeling capacity.
We benchmark our proposed model on two large-scale LiDAR datasets.
arXiv Detail & Related papers (2021-02-16T08:14:52Z)
- LiDAR-based Panoptic Segmentation via Dynamic Shifting Network [56.71765153629892]
LiDAR-based panoptic segmentation aims to parse both objects and scenes in a unified manner.
We propose the Dynamic Shifting Network (DS-Net), which serves as an effective panoptic segmentation framework in the point cloud realm.
Our proposed DS-Net achieves superior accuracies over current state-of-the-art methods.
arXiv Detail & Related papers (2020-11-24T08:44:46Z)
- ePointDA: An End-to-End Simulation-to-Real Domain Adaptation Framework for LiDAR Point Cloud Segmentation [111.56730703473411]
Training deep neural networks (DNNs) on LiDAR data requires large-scale point-wise annotations.
Simulation-to-real domain adaptation (SRDA) trains a DNN using unlimited synthetic data with automatically generated labels.
ePointDA consists of three modules: self-supervised dropout noise rendering, statistics-invariant and spatially-adaptive feature alignment, and transferable segmentation learning.
arXiv Detail & Related papers (2020-09-07T23:46:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.