Multi-modal classification of forest biodiversity potential from 2D orthophotos and 3D airborne laser scanning point clouds
- URL: http://arxiv.org/abs/2501.01728v1
- Date: Fri, 03 Jan 2025 09:42:25 GMT
- Title: Multi-modal classification of forest biodiversity potential from 2D orthophotos and 3D airborne laser scanning point clouds
- Authors: Simon B. Jensen, Stefan Oehmcke, Andreas Møgelmose, Meysam Madadi, Christian Igel, Sergio Escalera, Thomas B. Moeslund,
- Abstract summary: This study investigates whether deep learning-based fusion of close-range sensing data from 2D orthophotos and 3D airborne laser scanning (ALS) point clouds can enhance biodiversity assessment.
We introduce the BioVista dataset, comprising 44,378 paired samples of orthophotos and ALS point clouds from temperate forests in Denmark.
Using deep neural networks (ResNet for orthophotos and PointVector for ALS point clouds), we investigate each data modality's ability to assess forest biodiversity potential, achieving mean accuracies of 69.4% and 72.8%, respectively.
- Abstract: Accurate assessment of forest biodiversity is crucial for ecosystem management and conservation. While traditional field surveys provide high-quality assessments, they are labor-intensive and spatially limited. This study investigates whether deep learning-based fusion of close-range sensing data from 2D orthophotos (12.5 cm resolution) and 3D airborne laser scanning (ALS) point clouds (8 points/m^2) can enhance biodiversity assessment. We introduce the BioVista dataset, comprising 44,378 paired samples of orthophotos and ALS point clouds from temperate forests in Denmark, designed to explore multi-modal fusion approaches for biodiversity potential classification. Using deep neural networks (ResNet for orthophotos and PointVector for ALS point clouds), we investigate each data modality's ability to assess forest biodiversity potential, achieving mean accuracies of 69.4% and 72.8%, respectively. We explore two fusion approaches: a confidence-based ensemble method and a feature-level concatenation strategy, with the latter achieving a mean accuracy of 75.5%. Our results demonstrate that spectral information from orthophotos and structural information from ALS point clouds effectively complement each other in forest biodiversity assessment.
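The two fusion approaches named in the abstract can be sketched as follows. This is a minimal illustration with hypothetical per-sample class probabilities and randomly generated feature vectors, not the authors' implementation; the feature dimensions and the linear classification head are assumptions for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical softmax outputs for 4 samples over two classes
# (high vs. low biodiversity potential), one array per modality.
p_ortho = np.array([[0.8, 0.2], [0.4, 0.6], [0.55, 0.45], [0.3, 0.7]])
p_als   = np.array([[0.6, 0.4], [0.1, 0.9], [0.7, 0.3], [0.45, 0.55]])

# Confidence-based ensemble: per sample, keep the prediction of the
# modality whose maximum class probability is higher.
conf_ortho = p_ortho.max(axis=1)
conf_als = p_als.max(axis=1)
ensemble_pred = np.where(conf_ortho >= conf_als,
                         p_ortho.argmax(axis=1),
                         p_als.argmax(axis=1))

# Feature-level concatenation: join the penultimate-layer embeddings of
# both networks (e.g. ResNet and PointVector) and feed the fused vector
# to a shared classification head, stubbed here by random weights.
f_ortho = rng.standard_normal((4, 512))  # assumed 512-d image features
f_als = rng.standard_normal((4, 512))    # assumed 512-d point-cloud features
fused = np.concatenate([f_ortho, f_als], axis=1)  # shape (4, 1024)
W = rng.standard_normal((1024, 2))
fusion_pred = (fused @ W).argmax(axis=1)
```

In a trained system the classification head would be learned jointly with (or on top of) the two backbones; the point of the sketch is only the data flow of the two fusion schemes.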
Related papers
- Unsupervised deep learning for semantic segmentation of multispectral LiDAR forest point clouds
This study proposes a fully unsupervised deep learning method for leaf-wood separation of high-density laser scanning point clouds.
GrowSP-ForMS achieved a mean accuracy of 84.3% and a mean intersection over union (mIoU) of 69.6% on our MS test set.
arXiv Detail & Related papers (2025-02-10T07:58:49Z)
- Advanced wood species identification based on multiple anatomical sections and using deep feature transfer and fusion
Methods like DNA analysis, Near Infrared (NIR) spectroscopy, and Direct Analysis in Real Time (DART) mass spectrometry complement the long-established wood anatomical assessment of cell and tissue morphology.
Most of these methods have some limitations such as high costs, the need for skilled experts for data interpretation, and the lack of good datasets for professional reference.
In this paper, we apply two transfer learning techniques with Convolutional Neural Networks to a multi-view Congolese wood species dataset.
Our results indicate superior accuracy on diverse datasets and anatomical sections, surpassing the results of other methods.
arXiv Detail & Related papers (2024-04-12T16:30:15Z)
- PointHPS: Cascaded 3D Human Pose and Shape Estimation from Point Clouds
We propose a principled framework, PointHPS, for accurate 3D HPS from point clouds captured in real-world settings.
PointHPS iteratively refines point features through a cascaded architecture.
Extensive experiments demonstrate that PointHPS, with its powerful point feature extraction and processing scheme, outperforms State-of-the-Art methods.
arXiv Detail & Related papers (2023-08-28T11:10:14Z)
- Breast Ultrasound Tumor Classification Using a Hybrid Multitask CNN-Transformer Network
Capturing global contextual information plays a critical role in breast ultrasound (BUS) image classification.
Vision Transformers have an improved capability of capturing global contextual information but may distort the local image patterns due to the tokenization operations.
In this study, we proposed a hybrid multitask deep neural network called Hybrid-MT-ESTAN, designed to perform BUS tumor classification and segmentation.
arXiv Detail & Related papers (2023-08-04T01:19:32Z)
- Evaluation of the potential of Near Infrared Hyperspectral Imaging for monitoring the invasive brown marmorated stink bug
The brown marmorated stink bug (BMSB), Halyomorpha halys, is an invasive insect pest of global importance that damages several crops.
The present study consists in a preliminary evaluation at the laboratory level of Near Infrared Hyperspectral Imaging (NIR-HSI) as a possible technology to detect BMSB specimens.
arXiv Detail & Related papers (2023-01-19T11:37:20Z)
- Classification of Single Tree Decay Stages from Combined Airborne LiDAR Data and CIR Imagery
This study is the first to automatically categorize individual trees (Norway spruce) into five decay stages.
Three different machine learning methods were compared: 3D point cloud-based deep learning (KPConv), a Convolutional Neural Network (CNN), and Random Forest (RF).
All models achieved promising results, reaching overall accuracy (OA) of up to 88.8%, 88.4% and 85.9% for KPConv, CNN and RF, respectively.
arXiv Detail & Related papers (2023-01-04T22:20:16Z)
- Information fusion approach for biomass estimation in a plateau mountainous forest using a synergistic system comprising UAS-based digital camera and LiDAR
The objective of this study was to quantify the aboveground biomass (AGB) of a plateau mountainous forest reserve.
We utilized digital aerial photogrammetry (DAP), which has the unique advantages of speed, high spatial resolution, and low cost.
Based on the CHM and spectral attributes obtained from multispectral images, we estimated and mapped the AGB of the region of interest with considerable cost efficiency.
arXiv Detail & Related papers (2022-04-14T04:04:59Z)
- Deep Learning Based 3D Point Cloud Regression for Estimating Forest Biomass
Knowledge of forest biomass stocks and their development is important for implementing effective climate change mitigation measures.
Remote sensing using airborne LiDAR can be used to measure vegetation biomass at large scale.
We present deep learning systems for predicting wood volume, above-ground biomass (AGB), and subsequently carbon directly from 3D LiDAR point cloud data.
arXiv Detail & Related papers (2021-12-21T16:26:13Z)
- Country-wide Retrieval of Forest Structure From Optical and SAR Satellite Imagery With Bayesian Deep Learning
We propose a Bayesian deep learning approach to densely estimate forest structure variables at country-scale with 10-meter resolution.
Our method jointly transforms Sentinel-2 optical images and Sentinel-1 synthetic aperture radar images into maps of five different forest structure variables.
We train and test our model on reference data from 41 airborne laser scanning missions across Norway.
arXiv Detail & Related papers (2021-11-25T16:21:28Z) - AdaZoom: Adaptive Zoom Network for Multi-Scale Object Detection in Large
Scenes [57.969186815591186]
Detection in large-scale scenes is a challenging problem due to small objects and extreme scale variation.
We propose a novel Adaptive Zoom (AdaZoom) network as a selective magnifier with flexible shape and focal length to adaptively zoom the focus regions for object detection.
arXiv Detail & Related papers (2021-06-19T03:30:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.