LiDAR Remote Sensing Meets Weak Supervision: Concepts, Methods, and Perspectives
- URL: http://arxiv.org/abs/2503.18384v1
- Date: Mon, 24 Mar 2025 06:51:38 GMT
- Title: LiDAR Remote Sensing Meets Weak Supervision: Concepts, Methods, and Perspectives
- Authors: Yuan Gao, Shaobo Xia, Pu Wang, Xiaohuan Xi, Sheng Nie, Cheng Wang,
- Abstract summary: This review adopts a unified weakly supervised learning perspective to examine research on LiDAR interpretation and inversion. We summarize the latest advancements, provide a comprehensive review of the development and application of weakly supervised techniques in LiDAR remote sensing, and discuss potential future research directions in this field.
- Score: 16.213116971476083
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: LiDAR (Light Detection and Ranging) enables rapid and accurate acquisition of three-dimensional spatial data, widely applied in remote sensing areas such as surface mapping, environmental monitoring, urban modeling, and forestry inventory. LiDAR remote sensing primarily includes data interpretation and LiDAR-based inversion. However, LiDAR interpretation typically relies on dense and precise annotations, which are costly and time-consuming. Similarly, LiDAR inversion depends on scarce supervisory signals and expensive field surveys for annotations. To address this challenge, weakly supervised learning has gained significant attention in recent years, with many methods emerging to tackle LiDAR remote sensing tasks using incomplete, inaccurate, and inexact annotations, as well as annotations from other domains. Existing review articles treat LiDAR interpretation and inversion as separate tasks. This review, for the first time, adopts a unified weakly supervised learning perspective to systematically examine research on both LiDAR interpretation and inversion. We summarize the latest advancements, provide a comprehensive review of the development and application of weakly supervised techniques in LiDAR remote sensing, and discuss potential future research directions in this field.
Related papers
- Leveraging Sparse LiDAR for RAFT-Stereo: A Depth Pre-Fill Perspective [23.15129268391347]
We investigate LiDAR guidance within the RAFT-Stereo framework. We aim to improve stereo matching accuracy by injecting precise LiDAR depth into the initial disparity map. We find that the effectiveness of LiDAR guidance drastically degrades when the LiDAR points become sparse.
arXiv Detail & Related papers (2025-07-26T02:03:02Z)
- LiDAR-GS: Real-time LiDAR Re-Simulation using Gaussian Splatting [50.808933338389686]
We present LiDAR-GS, a real-time, high-fidelity re-simulation of LiDAR scans in public urban road scenes. The method achieves state-of-the-art results in both rendering frame rate and quality on publicly available large-scene datasets.
arXiv Detail & Related papers (2024-10-07T15:07:56Z)
- Multi-Modal Data-Efficient 3D Scene Understanding for Autonomous Driving [58.16024314532443]
We introduce LaserMix++, a framework that integrates laser beam manipulations from disparate LiDAR scans and incorporates LiDAR-camera correspondences to assist data-efficient learning. Results demonstrate that LaserMix++ outperforms fully supervised alternatives, achieving comparable accuracy with five times fewer annotations. This substantial advancement underscores the potential of semi-supervised approaches in reducing the reliance on extensive labeled data in LiDAR-based 3D scene understanding systems.
arXiv Detail & Related papers (2024-05-08T17:59:53Z)
- Deep Learning for Trajectory Data Management and Mining: A Survey and Beyond [58.63558696061679]
Trajectory computing is crucial in various practical applications such as location services, urban traffic, and public safety.
We present a review of the development and recent advances in deep learning for trajectory computing (DL4Traj).
Notably, we encapsulate recent advancements in Large Language Models (LLMs) that hold potential to augment trajectory computing.
arXiv Detail & Related papers (2024-03-21T05:57:27Z)
- Sparse Beats Dense: Rethinking Supervision in Radar-Camera Depth Completion [18.0877558432168]
We present a new method with sparse LiDAR supervision that outperforms previous dense LiDAR supervision methods in both accuracy and speed.
We find that depth completion models usually output depth maps containing significant stripe-like artifacts when trained by sparse LiDAR supervision.
Our framework with sparse supervision outperforms state-of-the-art dense supervision methods, with an 11.6% improvement in Mean Absolute Error (MAE) and a 1.6x speedup in Frames Per Second (FPS).
arXiv Detail & Related papers (2023-12-01T06:04:49Z)
- Traj-LO: In Defense of LiDAR-Only Odometry Using an Effective Continuous-Time Trajectory [20.452961476175812]
This letter explores the capability of LiDAR-only odometry through a continuous-time perspective.
Our proposed Traj-LO approach recovers the spatially and temporally consistent movement of the LiDAR.
Our implementation is open-sourced on GitHub.
arXiv Detail & Related papers (2023-09-25T03:05:06Z)
- SPOT: Scalable 3D Pre-training via Occupancy Prediction for Learning Transferable 3D Representations [76.45009891152178]
The pretraining-finetuning approach can alleviate the labeling burden by fine-tuning a pre-trained backbone across various downstream datasets and tasks.
We show, for the first time, that general representation learning can be achieved through the task of occupancy prediction.
Our findings will facilitate the understanding of LiDAR points and pave the way for future advancements in LiDAR pre-training.
arXiv Detail & Related papers (2023-09-19T11:13:01Z)
- Detecting the Anomalies in LiDAR Pointcloud [8.827947115933942]
Adverse weather conditions may cause the LiDAR to produce point clouds with abnormal patterns such as scattered noise points and uncommon intensity values.
We propose a novel approach to detect whether a LiDAR is generating anomalous point clouds by analyzing point cloud characteristics.
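The abstract does not specify the detection criteria, but the two symptoms it names (scattered noise points and uncommon intensity values) can be checked with simple statistics. The sketch below is a rough illustration only, not the paper's method; the function name, thresholds, and reference statistics are all hypothetical:

```python
import numpy as np

def is_anomalous_scan(points, intensities, ref_mean, ref_std,
                      z_thresh=3.0, noise_frac_thresh=0.1, radius=0.5):
    """Flag a LiDAR scan whose intensity statistics or spatial scatter
    deviate from clear-weather reference values (illustrative only)."""
    # 1) Uncommon intensity values: z-score of the scan's mean intensity
    #    against reference statistics collected in clear weather.
    z = abs(intensities.mean() - ref_mean) / ref_std
    # 2) Scattered noise points: fraction of points with no neighbor
    #    within `radius` meters (O(n^2) pairwise distances; fine for a sketch).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    isolated_frac = (d.min(axis=1) > radius).mean()
    return bool(z > z_thresh or isolated_frac > noise_frac_thresh)
```

A real detector would likely learn these thresholds per sensor and operate on local neighborhoods rather than whole-scan statistics.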
arXiv Detail & Related papers (2023-07-31T22:53:42Z)
- LiDAR Distillation: Bridging the Beam-Induced Domain Gap for 3D Object Detection [96.63947479020631]
In many real-world applications, the LiDAR points used by mass-produced robots and vehicles usually have fewer beams than those in large-scale public datasets.
We propose LiDAR Distillation to bridge the domain gap induced by different LiDAR beam counts for 3D object detection.
arXiv Detail & Related papers (2022-03-28T17:59:02Z)
- Learning Moving-Object Tracking with FMCW LiDAR [53.05551269151209]
We propose a learning-based moving-object tracking method utilizing our newly developed LiDAR sensor, Frequency Modulated Continuous Wave (FMCW) LiDAR.
Given the labels, we propose a contrastive learning framework, which pulls together the features from the same instance in embedding space and pushes apart the features from different instances to improve the tracking quality.
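The pull-together/push-apart objective described above is commonly instantiated as an InfoNCE-style loss over instance embeddings. The sketch below is a generic NumPy illustration under that assumption, not the paper's exact formulation; `contrastive_loss` and its arguments are hypothetical names:

```python
import numpy as np

def contrastive_loss(features, instance_ids, temperature=0.1):
    """InfoNCE-style loss: features from the same instance are pulled
    together in embedding space; features from different instances are
    pushed apart. Lower loss = better-separated instances."""
    # L2-normalize so dot products are cosine similarities
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / temperature          # pairwise similarity logits
    n = len(instance_ids)
    losses = []
    for i in range(n):
        pos = [j for j in range(n) if j != i and instance_ids[j] == instance_ids[i]]
        if not pos:
            continue                     # no positive pair for this anchor
        logits = np.delete(sim[i], i)    # drop self-similarity
        log_denom = np.log(np.exp(logits).sum())
        idx = [j if j < i else j - 1 for j in pos]  # re-index after deletion
        # mean negative log-softmax over the anchor's positives
        losses.append(np.mean(log_denom - logits[idx]))
    return float(np.mean(losses))
```

In practice such a loss is computed on learned per-point or per-instance network features and backpropagated with a framework like PyTorch; the NumPy version only shows the objective itself.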
arXiv Detail & Related papers (2022-03-02T09:11:36Z)
- LEAD: LiDAR Extender for Autonomous Driving [48.233424487002445]
MEMS LiDAR is emerging as an irresistible trend due to its lower cost, greater robustness, and compliance with mass-production standards.
However, it suffers from a small field of view (FoV), which slows its adoption.
We propose LEAD, i.e., LiDAR Extender for Autonomous Driving, which extends MEMS LiDAR with a coupled image in terms of both FoV and range.
arXiv Detail & Related papers (2021-02-16T07:35:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.