Unsupervised deep learning for semantic segmentation of multispectral LiDAR forest point clouds
- URL: http://arxiv.org/abs/2502.06227v1
- Date: Mon, 10 Feb 2025 07:58:49 GMT
- Title: Unsupervised deep learning for semantic segmentation of multispectral LiDAR forest point clouds
- Authors: Lassi Ruoppa, Oona Oinonen, Josef Taher, Matti Lehtomäki, Narges Takhtkeshha, Antero Kukko, Harri Kaartinen, Juha Hyyppä
- Abstract summary: This study proposes a fully unsupervised deep learning method for leaf-wood separation of high-density laser scanning point clouds. GrowSP-ForMS achieved a mean accuracy of 84.3% and a mean intersection over union (mIoU) of 69.6% on our MS test set.
- Score: 1.6633665061166945
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Point clouds captured with laser scanning systems from forest environments can be utilized in a wide variety of applications within forestry and plant ecology, such as the estimation of tree stem attributes, leaf angle distribution, and above-ground biomass. However, effectively utilizing the data in such tasks requires the semantic segmentation of the data into wood and foliage points, also known as leaf-wood separation. The traditional approach to leaf-wood separation has been geometry- and radiometry-based unsupervised algorithms, which tend to perform poorly on data captured with airborne laser scanning (ALS) systems, even with a high point density. While recent machine and deep learning approaches achieve great results even on sparse point clouds, they require manually labeled training data, which is often extremely laborious to produce. Multispectral (MS) information has been demonstrated to have potential for improving the accuracy of leaf-wood separation, but quantitative assessment of its effects has been lacking. This study proposes a fully unsupervised deep learning method, GrowSP-ForMS, which is specifically designed for leaf-wood separation of high-density MS ALS point clouds and based on the GrowSP architecture. GrowSP-ForMS achieved a mean accuracy of 84.3% and a mean intersection over union (mIoU) of 69.6% on our MS test set, outperforming the unsupervised reference methods by a significant margin. When compared to supervised deep learning methods, our model performed similarly to the slightly older PointNet architecture but was outclassed by more recent approaches. Finally, two ablation studies were conducted, which demonstrated that our proposed changes increased the test set mIoU of GrowSP-ForMS by 29.4 percentage points (pp) in comparison to the original GrowSP model and that utilizing MS data improved the mIoU by 5.6 pp from the monospectral case.
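The scores reported above are the standard two-class segmentation metrics over foliage and wood points. The sketch below is an illustrative, minimal way to compute mean per-class accuracy and mIoU for binary leaf/wood point labels; the class encoding, function name, and toy labels are assumptions for demonstration and are not taken from the paper's code.

```python
# Minimal sketch (assumed conventions, not the authors' implementation):
# mean per-class accuracy and mean IoU for binary leaf/wood point labels.
import numpy as np

FOLIAGE, WOOD = 0, 1  # assumed class encoding


def mean_accuracy_and_miou(y_true: np.ndarray, y_pred: np.ndarray) -> tuple[float, float]:
    """Return (mean per-class accuracy, mean IoU) over the two classes."""
    accs, ious = [], []
    for c in (FOLIAGE, WOOD):
        gt = y_true == c
        pr = y_pred == c
        tp = np.logical_and(gt, pr).sum()
        accs.append(tp / max(gt.sum(), 1))                    # per-class accuracy (recall)
        ious.append(tp / max(np.logical_or(gt, pr).sum(), 1))  # per-class IoU
    return float(np.mean(accs)), float(np.mean(ious))


if __name__ == "__main__":
    # Toy labels: flip ~15% of ground-truth points to simulate prediction errors.
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=10_000)
    flip = rng.random(10_000) < 0.15
    y_pred = np.where(flip, 1 - y_true, y_true)
    macc, miou = mean_accuracy_and_miou(y_true, y_pred)
    print(f"mean accuracy: {macc:.1%}, mIoU: {miou:.1%}")
```

Here mean accuracy is taken as the average of per-class recalls, which is one common convention; the paper may define it slightly differently.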
Related papers
- PointsToWood: A deep learning framework for complete canopy leaf-wood segmentation of TLS data across diverse European forests [0.0]
We present a new framework that uses a deep learning architecture newly developed from PointNet for processing 3D point clouds.
We evaluate its performance across open datasets from boreal, temperate, Mediterranean and tropical regions.
Results show that it consistently outperforms the most widely used PointNet-based approach for leaf/wood segmentation.
arXiv Detail & Related papers (2025-03-06T13:23:03Z) - Multi-modal classification of forest biodiversity potential from 2D orthophotos and 3D airborne laser scanning point clouds [47.679877727066206]
This study investigates whether deep learning-based fusion of close-range sensing data from 2D orthophotos and 3D airborne laser scanning (ALS) point clouds can enhance biodiversity assessment. We introduce the BioVista dataset, comprising 44,378 paired samples of orthophotos and ALS point clouds from temperate forests in Denmark. Using deep neural networks (ResNet for orthophotos and PointResNet for ALS point clouds), we investigate each data modality's ability to assess forest biodiversity potential, achieving mean accuracies of 69.4% and 72.8%, respectively.
arXiv Detail & Related papers (2025-01-03T09:42:25Z) - An Enhanced Classification Method Based on Adaptive Multi-Scale Fusion for Long-tailed Multispectral Point Clouds [67.96583737413296]
We propose an enhanced classification method based on adaptive multi-scale fusion for MPCs with long-tailed distributions. In the training set generation stage, a grid-balanced sampling strategy is designed to reliably generate training samples from sparse labeled datasets. In the feature learning stage, a multi-scale feature fusion module is proposed to fuse shallow features of land-covers at different scales.
arXiv Detail & Related papers (2024-12-16T03:21:20Z) - Tree Species Classification using Machine Learning and 3D Tomographic SAR -- a case study in Northern Europe [0.0]
Tree species classification plays an important role in nature conservation, forest inventories, forest management, and the protection of endangered species.
In this study, we employed TomoSense, a 3D tomographic dataset, which utilizes a stack of single-look complex (SLC) images.
arXiv Detail & Related papers (2024-11-19T22:25:26Z) - SOOD++: Leveraging Unlabeled Data to Boost Oriented Object Detection [59.868772767818975]
We propose a simple yet effective Semi-supervised Oriented Object Detection method termed SOOD++.
Specifically, we observe that objects in aerial images usually have arbitrary orientations, small scales, and a tendency to aggregate.
Extensive experiments conducted on various multi-oriented object datasets under various labeled settings demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2024-07-01T07:03:51Z) - SegmentAnyTree: A sensor and platform agnostic deep learning model for tree segmentation using laser scanning data [15.438892555484616]
This research advances individual tree crown (ITC) segmentation in lidar data, using a deep learning model applicable to various laser scanning types.
It addresses the challenge of transferability across different data characteristics in 3D forest scene analysis.
The model, based on PointGroup architecture, is a 3D CNN with separate heads for semantic and instance segmentation.
arXiv Detail & Related papers (2024-01-28T19:47:17Z) - Minimally Supervised Learning using Topological Projections in Self-Organizing Maps [55.31182147885694]
We introduce a semi-supervised learning approach based on topological projections in self-organizing maps (SOMs).
Our proposed method first trains SOMs on unlabeled data; a minimal number of available labeled data points are then assigned to key best matching units (BMUs).
Our results indicate that the proposed minimally supervised model significantly outperforms traditional regression techniques.
arXiv Detail & Related papers (2024-01-12T22:51:48Z) - Automated forest inventory: analysis of high-density airborne LiDAR point clouds with 3D deep learning [16.071397465972893]
ForAINet is able to perform segmentation across diverse forest types and geographic regions.
The system has been tested on FOR-Instance, a dataset of point clouds acquired in five different countries using surveying drones.
arXiv Detail & Related papers (2023-12-22T21:54:35Z) - TreeLearn: A deep learning method for segmenting individual trees from ground-based LiDAR forest point clouds [40.46280139210502]
TreeLearn is a deep learning approach for tree instance segmentation of forest point clouds. TreeLearn is trained on already segmented point clouds in a data-driven manner. We trained TreeLearn on forest point clouds of 6665 trees, labeled using the Lidar360 software.
arXiv Detail & Related papers (2023-09-15T15:20:16Z) - Reinforcement Learning Based Multi-modal Feature Fusion Network for Novel Class Discovery [47.28191501836041]
In this paper, we employ a Reinforcement Learning framework to simulate the cognitive processes of humans.
We also deploy a Member-to-Leader Multi-Agent framework to extract and fuse features from multi-modal information.
We demonstrate the performance of our approach in both the 3D and 2D domains by employing the OS-MN40, OS-MN40-Miss, and Cifar10 datasets.
arXiv Detail & Related papers (2023-08-26T07:55:32Z) - SSL-SoilNet: A Hybrid Transformer-based Framework with Self-Supervised Learning for Large-scale Soil Organic Carbon Prediction [2.554658234030785]
This study introduces a novel approach that aims to learn the geographical link between multimodal features via self-supervised contrastive learning.
The proposed approach has undergone rigorous testing on two distinct large-scale datasets.
arXiv Detail & Related papers (2023-08-07T13:44:44Z) - Spatial and spectral deep attention fusion for multi-channel speech separation using deep embedding features [60.20150317299749]
Multi-channel deep clustering (MDC) has achieved good performance for speech separation.
We propose a deep attention fusion method to dynamically control the weights of the spectral and spatial features and combine them deeply.
Experimental results show that the proposed method outperforms the MDC baseline and even surpasses the ideal binary mask (IBM).
arXiv Detail & Related papers (2020-02-05T03:49:39Z)