Multispectral LiDAR data for extracting tree points in urban and suburban areas
- URL: http://arxiv.org/abs/2508.19881v1
- Date: Wed, 27 Aug 2025 13:39:13 GMT
- Title: Multispectral LiDAR data for extracting tree points in urban and suburban areas
- Authors: Narges Takhtkeshha, Gabriele Mazzacca, Fabio Remondino, Juha Hyyppä, Gottfried Mandlburger
- Abstract summary: Multispectral (MS) light detection and ranging (LiDAR) improves tree monitoring by capturing both 3D spatial and spectral data. This study explores tree point extraction using MS-LiDAR and deep learning (DL) models.
- Score: 3.2554829321615717
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Monitoring urban tree dynamics is vital for supporting greening policies and reducing risks to electrical infrastructure. Airborne laser scanning has advanced large-scale tree management, but challenges remain due to complex urban environments and tree variability. Multispectral (MS) light detection and ranging (LiDAR) improves this by capturing both 3D spatial and spectral data, enabling detailed mapping. This study explores tree point extraction using MS-LiDAR and deep learning (DL) models. Three state-of-the-art models are evaluated: Superpoint Transformer (SPT), Point Transformer V3 (PTv3), and Point Transformer V1 (PTv1). Results show the notable time efficiency and accuracy of SPT, with a mean intersection over union (mIoU) of 85.28%. The highest detection accuracy is achieved by incorporating pseudo normalized difference vegetation index (pNDVI) with spatial data, reducing error rate by 10.61 percentage points (pp) compared to using spatial information alone. These findings highlight the potential of MS-LiDAR and DL to improve tree extraction and further tree inventories.
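As a rough illustration of the feature and metric named in the abstract, the sketch below shows how a per-point pNDVI value and an mIoU score could be computed. The channel pairing (NIR and green intensities), function names, and array shapes are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def pseudo_ndvi(nir: np.ndarray, green: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Per-point pseudo-NDVI from two MS-LiDAR intensity channels.
    Assumes the common (NIR - green) / (NIR + green) formulation;
    the paper's exact channel pairing may differ."""
    nir = nir.astype(np.float64)
    green = green.astype(np.float64)
    return (nir - green) / (nir + green + eps)

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean intersection-over-union over semantic classes, computed per point."""
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (target == c))
        union = np.sum((pred == c) | (target == c))
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0

# Hypothetical usage: stack pNDVI with XYZ as the per-point input features.
# xyz: (N, 3) coordinates; nir, green: (N,) calibrated channel intensities
# features = np.column_stack([xyz, pseudo_ndvi(nir, green)])
```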
Related papers
- TransBridge: Boost 3D Object Detection by Scene-Level Completion with Transformer Decoder [66.22997415145467]
This paper presents a joint completion and detection framework that improves detection features in sparse areas. Specifically, we propose TransBridge, a novel transformer-based up-sampling block that fuses the features from the detection and completion networks. The results show that our framework consistently improves end-to-end 3D object detection, with mean average precision (mAP) improvements ranging from 0.7 to 1.5 across multiple methods.
arXiv Detail & Related papers (2025-12-12T00:08:03Z) - 3D forest semantic segmentation using multispectral LiDAR and 3D deep learning [2.7821416427202816]
This study aims to investigate the potential of MS-LiDAR data captured by the HeliALS system. It provides high-density multispectral point clouds to segment forests into six components: ground, low vegetation, trunks, branches, foliage, and woody debris. Three point-wise 3D deep learning models and one machine learning model, including kernel point convolution, superpoint transformer, point transformer V3, and random forest, are implemented.
arXiv Detail & Related papers (2025-07-08T19:18:21Z) - Unsupervised deep learning for semantic segmentation of multispectral LiDAR forest point clouds [1.6633665061166945]
This study proposes a fully unsupervised deep learning method for leaf-wood separation of high-density laser scanning point clouds. GrowSP-ForMS achieved a mean accuracy of 84.3% and a mean intersection over union (mIoU) of 69.6% on our MS test set.
arXiv Detail & Related papers (2025-02-10T07:58:49Z) - DepthSplat: Connecting Gaussian Splatting and Depth [90.06180236292866]
We present DepthSplat to connect Gaussian splatting and depth estimation. We show that Gaussian splatting can serve as an unsupervised pre-training objective for learning powerful depth models. Our DepthSplat achieves state-of-the-art performance on the ScanNet, RealEstate10K and DL3DV datasets.
arXiv Detail & Related papers (2024-10-17T17:59:58Z) - Improving Gaussian Splatting with Localized Points Management [52.009874685460694]
Localized Point Management (LPM) is capable of identifying the error-contributing zones most in need of both point addition and geometry calibration. LPM applies point densification in the identified zones and then resets the opacity of the points in front of these regions, creating a new opportunity to correct poorly conditioned points. Notably, LPM improves both static 3DGS and dynamic SpaceTimeGS to achieve state-of-the-art rendering quality while retaining real-time speeds.
arXiv Detail & Related papers (2024-06-06T16:55:07Z) - LiDAR data acquisition and processing for ecology applications [0.0]
Terrestrial laser scanners (TLS) have been used in ecology to reconstruct the 3D structure of vegetation.
The orientation of LiDAR was modified to make observations in the vertical plane and a motor was integrated for its rotation.
From the data generated, histograms of point density variation along the vegetation height were created.
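A minimal sketch of how such a vertical point-density histogram could be derived from the scanned points; the input format (heights above ground in metres) and the bin width are illustrative assumptions, not values from the paper.

```python
import numpy as np

def height_density_histogram(z: np.ndarray, bin_size: float = 0.1):
    """Counts of LiDAR points per vertical bin along the vegetation height.
    z is assumed to be (N,) heights above ground in metres; bin_size is an
    illustrative choice."""
    z = np.asarray(z, dtype=np.float64)
    num_bins = max(int(np.ceil((z.max() - z.min()) / bin_size)), 1)
    counts, edges = np.histogram(z, bins=num_bins)
    return counts, edges
```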
arXiv Detail & Related papers (2024-01-11T13:03:27Z) - Estimating optical vegetation indices and biophysical variables for temperate forests with Sentinel-1 SAR data using machine learning techniques: A case study for Czechia [32.19783248549554]
Current optical vegetation indices (VIs) for monitoring forest ecosystems are well established and widely used in various applications.
In contrast, synthetic aperture radar (SAR) data can offer insightful and systematic forest monitoring with complete time series (TS) due to signal penetration through clouds and day and night image acquisitions.
This study aims to address the limitations of optical satellite data by using SAR data as an alternative for estimating optical VIs for forests through machine learning (ML).
In general, up to 240 measurements per year and a spatial resolution of 20 m can be achieved using estimated SAR-based VIs with high accuracy.
arXiv Detail & Related papers (2023-11-13T18:23:46Z) - UncLe-SLAM: Uncertainty Learning for Dense Neural SLAM [60.575435353047304]
We present an uncertainty learning framework for dense neural simultaneous localization and mapping (SLAM).
We propose an online framework for sensor uncertainty estimation that can be trained in a self-supervised manner from only 2D input data.
arXiv Detail & Related papers (2023-06-19T16:26:25Z) - Vision Transformers, a new approach for high-resolution and large-scale mapping of canopy heights [50.52704854147297]
We present a new vision transformer (ViT) model optimized with a classification (discrete) and a continuous loss function.
This model achieves better accuracy than previously used convolutional based approaches (ConvNets) optimized with only a continuous loss function.
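A hedged sketch of such a combined discrete-plus-continuous objective, assuming hypothetical height bins, an L1 regression term, and an illustrative weighting; none of these specifics are taken from the paper.

```python
import torch
import torch.nn.functional as F

def combined_height_loss(logits: torch.Tensor, pred_height: torch.Tensor,
                         true_height: torch.Tensor, bin_edges: torch.Tensor,
                         alpha: float = 0.5) -> torch.Tensor:
    """Discrete (height-bin classification) + continuous (L1 regression) loss.

    logits      : (N, num_bins) class scores, num_bins = len(bin_edges) + 1
    pred_height : (N,) regressed canopy height in metres
    true_height : (N,) reference canopy height in metres
    bin_edges   : increasing 1-D tensor of bin boundaries (illustrative)
    alpha       : weighting between the two terms (illustrative)
    """
    target_bin = torch.bucketize(true_height, bin_edges)  # discrete target
    ce = F.cross_entropy(logits, target_bin)              # classification term
    l1 = F.l1_loss(pred_height, true_height)              # continuous term
    return alpha * ce + (1.0 - alpha) * l1
```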
arXiv Detail & Related papers (2023-04-22T22:39:03Z) - Joint Learning of Salient Object Detection, Depth Estimation and Contour Extraction [91.43066633305662]
We propose a novel multi-task and multi-modal filtered transformer (MMFT) network for RGB-D salient object detection (SOD).
Specifically, we unify three complementary tasks: depth estimation, salient object detection and contour estimation. The multi-task mechanism encourages the model to learn task-aware features from the auxiliary tasks.
Experiments show that it not only significantly surpasses the depth-based RGB-D SOD methods on multiple datasets, but also precisely predicts a high-quality depth map and salient contour at the same time.
arXiv Detail & Related papers (2022-03-09T17:20:18Z) - OmniSLAM: Omnidirectional Localization and Dense Mapping for Wide-baseline Multi-camera Systems [88.41004332322788]
We present an omnidirectional localization and dense mapping system for a wide-baseline multiview stereo setup with ultra-wide field-of-view (FOV) fisheye cameras.
For more practical and accurate reconstruction, we first introduce improved and lightweight deep neural networks for omnidirectional depth estimation.
We integrate our omnidirectional depth estimates into the visual odometry (VO) and add a loop closing module for global consistency.
arXiv Detail & Related papers (2020-03-18T05:52:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.