Forest canopy height estimation from satellite RGB imagery using large-scale airborne LiDAR-derived training data and monocular depth estimation
- URL: http://arxiv.org/abs/2602.06503v2
- Date: Mon, 09 Feb 2026 05:24:29 GMT
- Title: Forest canopy height estimation from satellite RGB imagery using large-scale airborne LiDAR-derived training data and monocular depth estimation
- Authors: Yongkang Lai, Xihan Mu, Dasheng Fan, Donghui Xie, Shanxin Guo, Wenli Huang, Tianjie Zhao, Guangjian Yan
- Abstract summary: Large-scale, high-resolution forest canopy height mapping plays a crucial role in understanding regional and global carbon and water cycles. Near-surface LiDAR platforms offer much finer measurements of forest canopy structure. The state-of-the-art monocular depth estimation model Depth Anything V2 was trained using 16,000 km² of canopy height models.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large-scale, high-resolution forest canopy height mapping plays a crucial role in understanding regional and global carbon and water cycles. Spaceborne LiDAR missions, including the Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2) and the Global Ecosystem Dynamics Investigation (GEDI), provide global observations of forest structure but are spatially sparse and subject to inherent uncertainties. In contrast, near-surface LiDAR platforms, such as airborne and unmanned aerial vehicle (UAV) LiDAR systems, offer much finer measurements of forest canopy structure, and a growing number of countries have made these datasets openly available. In this study, a state-of-the-art monocular depth estimation model, Depth Anything V2, was trained using approximately 16,000 km² of canopy height models (CHMs) derived from publicly available airborne LiDAR point clouds and related products across multiple countries, together with 3 m resolution PlanetScope and airborne RGB imagery. The trained model, referred to as Depth2CHM, enables the estimation of spatially continuous CHMs directly from PlanetScope RGB imagery. Independent validation was conducted at sites in China (approximately 1 km²) and the United States (approximately 116 km²). The results showed that Depth2CHM could accurately estimate canopy height, with biases of 0.59 m and 0.41 m and root mean square errors (RMSEs) of 2.54 m and 5.75 m for these two sites, respectively. Compared with an existing global meter-resolution CHM product, the mean absolute error was reduced by approximately 1.5 m and the RMSE by approximately 2 m. These results demonstrated that monocular depth estimation networks trained with large-scale airborne LiDAR-derived canopy height data provide a promising and scalable pathway for high-resolution, spatially continuous forest canopy height estimation from satellite RGB imagery.
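The bias and RMSE figures reported above come from comparing a predicted CHM against a LiDAR-derived reference on a pixel-by-pixel basis. The following is an illustrative NumPy sketch of that comparison, not code from the paper; it assumes the two rasters are already co-registered at the same resolution and that NaN marks no-data pixels.

```python
import numpy as np

def chm_error_metrics(predicted, reference):
    """Bias and RMSE between a predicted CHM and a reference CHM.

    Both inputs are 2-D height rasters in metres, assumed co-registered;
    NaN marks no-data pixels, which are excluded from the statistics.
    """
    pred = np.asarray(predicted, dtype=float)
    ref = np.asarray(reference, dtype=float)
    valid = ~np.isnan(pred) & ~np.isnan(ref)
    diff = pred[valid] - ref[valid]
    bias = diff.mean()                     # mean signed error
    rmse = np.sqrt(np.mean(diff ** 2))     # root mean square error
    return bias, rmse
```

In practice the validation sites would first be resampled to a common grid (e.g. the 3 m PlanetScope grid) before applying such a comparison.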
Related papers
- VibrantSR: Sub-Meter Canopy Height Models from Sentinel-2 Using Generative Flow Matching [0.0]
VibrantSR is a framework for estimating 0.5 m canopy height models from 10 m Sentinel-2 imagery. It is evaluated across 22 EPA Level III ecoregions in the western United States, where it achieves a mean absolute error of 4.39 m for canopy heights >= 2 m, outperforming the Meta (4.83 m), LANDFIRE (5.96 m), and ETH (7.05 m) satellite-based benchmarks.
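Benchmarks like the one above typically restrict the mean absolute error to pixels whose reference height meets a minimum threshold (here 2 m), so that bare ground does not dominate the score. A minimal sketch of such a masked MAE, written for illustration rather than taken from the paper, might look like this:

```python
import numpy as np

def masked_mae(predicted, reference, min_height=2.0):
    """MAE restricted to pixels whose reference canopy height >= min_height.

    NaN pixels in either raster are excluded from the comparison.
    """
    pred = np.asarray(predicted, dtype=float)
    ref = np.asarray(reference, dtype=float)
    mask = ~np.isnan(pred) & ~np.isnan(ref) & (ref >= min_height)
    return np.abs(pred[mask] - ref[mask]).mean()
```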
arXiv Detail & Related papers (2026-01-14T20:56:35Z) - OccuFly: A 3D Vision Benchmark for Semantic Scene Completion from the Aerial Perspective [44.84496929237721]
OccuFly is the first real-world, camera-based aerial semantic scene completion benchmark, captured at altitudes of 50 m, 40 m, and 30 m. We propose a LiDAR-free data generation framework based on the camera modality, which is ubiquitous on modern UAVs. We benchmark the state of the art on OccuFly and highlight challenges specific to elevated viewpoints.
arXiv Detail & Related papers (2025-12-23T21:14:55Z) - Super-Resolved Canopy Height Mapping from Sentinel-2 Time Series Using LiDAR HD Reference Data across Metropolitan France [0.9351726364879229]
We introduce THREASURE-Net, a novel end-to-end framework for Tree Height Regression And Super-Resolution. The model is trained on Sentinel-2 time series using reference height metrics derived from LiDAR HD data. We evaluate three model variants, producing tree-height predictions at 2.5 m, 5 m, and 10 m resolution.
arXiv Detail & Related papers (2025-12-12T12:49:16Z) - UAV-MM3D: A Large-Scale Synthetic Benchmark for 3D Perception of Unmanned Aerial Vehicles with Multi-Modal Data [47.317955428393134]
We introduce UAV-MM3D, a high-fidelity multimodal synthetic dataset for low-altitude UAV perception and motion understanding. It comprises 400K synchronized frames across diverse scenes (urban areas, suburbs, forests, coastal regions) and weather conditions. Each frame provides 2D/3D bounding boxes, 6-DoF poses, and instance-level annotations, enabling core tasks related to UAVs such as 3D detection, pose estimation, target tracking, and short-term trajectory forecasting.
arXiv Detail & Related papers (2025-11-27T12:30:28Z) - Estimating the Diameter at Breast Height of Trees in a Forest With a Single 360 Camera [52.85399274741336]
Forest inventories rely on accurate measurements of the diameter at breast height (DBH) for ecological monitoring, resource management, and carbon accounting. While LiDAR-based techniques can achieve centimeter-level precision, they are cost-prohibitive and operationally complex. We present a low-cost alternative that only needs a consumer-grade 360 video camera.
arXiv Detail & Related papers (2025-05-06T01:09:07Z) - A Deep Learning Approach to Estimate Canopy Height and Uncertainty by Integrating Seasonal Optical, SAR and Limited GEDI LiDAR Data over Northern Forests [0.0]
This study introduces a methodology for generating spatially continuous, high-resolution canopy height and uncertainty estimates.
We integrate multi-source, multi-seasonal satellite data from Sentinel-1, Landsat, and ALOS-PALSAR-2 with spaceborne GEDI LiDAR as reference data.
Using seasonal data instead of summer-only data improved the captured variability by 10%, reduced error by 0.45 m, and decreased bias by 1 m.
arXiv Detail & Related papers (2024-10-08T20:27:11Z) - Reconstructing Satellites in 3D from Amateur Telescope Images [44.20773507571372]
We propose a novel computational imaging framework that overcomes obstacles by integrating a hybrid image pre-processing pipeline. We validate our approach on both synthetic satellite datasets and on-sky observations of China's Tiangong Space Station and the International Space Station. Our framework enables high-fidelity 3D satellite monitoring from Earth, offering a cost-effective alternative for space situational awareness.
arXiv Detail & Related papers (2024-04-29T03:13:09Z) - Vision Transformers, a new approach for high-resolution and large-scale mapping of canopy heights [50.52704854147297]
We present a new vision transformer (ViT) model optimized with a classification (discrete) and a continuous loss function.
This model achieves better accuracy than previously used convolutional based approaches (ConvNets) optimized with only a continuous loss function.
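The summary above describes pairing a discrete classification objective with a continuous regression objective. A minimal single-pixel sketch of such a hybrid loss is shown below; it is an illustrative reconstruction under assumed details (height bins given by `bin_edges`, a weighting factor `alpha`), not the authors' implementation.

```python
import numpy as np

def hybrid_loss(logits, height_pred, height_true, bin_edges, alpha=0.5):
    """Illustrative hybrid objective for one pixel: cross-entropy over
    discrete height bins plus squared error on the regressed height.

    `logits` has shape (n_bins,); `bin_edges` (length n_bins + 1) defines
    the discretization of canopy height into bins.
    """
    # Index of the bin containing the true height.
    target_bin = np.searchsorted(bin_edges, height_true) - 1
    # Softmax cross-entropy against that bin.
    log_probs = logits - np.log(np.sum(np.exp(logits)))
    ce = -log_probs[target_bin]
    # Continuous squared-error term on the regressed height.
    mse = (height_pred - height_true) ** 2
    return alpha * ce + (1 - alpha) * mse
```

In a full model, this per-pixel quantity would be averaged over the batch; the discrete term is often credited with stabilizing training on skewed height distributions.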
arXiv Detail & Related papers (2023-04-22T22:39:03Z) - Very high resolution canopy height maps from RGB imagery using self-supervised vision transformer and convolutional decoder trained on Aerial Lidar [14.07306593230776]
This paper presents the first high-resolution canopy height map concurrently produced for multiple sub-national jurisdictions.
The maps are generated by the extraction of features from a self-supervised model trained on Maxar imagery from 2017 to 2020.
We also introduce a post-processing step using a convolutional network trained on GEDI observations.
arXiv Detail & Related papers (2023-04-14T15:52:57Z) - High-resolution canopy height map in the Landes forest (France) based on GEDI, Sentinel-1, and Sentinel-2 data with a deep learning approach [0.044381279572631216]
We develop a deep learning model based on multi-stream remote sensing measurements to create a high-resolution canopy height map.
The model outputs allow us to generate a 10 m resolution canopy height map of the whole "Landes de Gascogne" forest area for 2020.
For all validation datasets in coniferous forests, our model showed better metrics than previous canopy height models available in the same region.
arXiv Detail & Related papers (2022-12-20T14:14:37Z) - Country-wide Retrieval of Forest Structure From Optical and SAR Satellite Imagery With Bayesian Deep Learning [74.94436509364554]
We propose a Bayesian deep learning approach to densely estimate forest structure variables at country-scale with 10-meter resolution.
Our method jointly transforms Sentinel-2 optical images and Sentinel-1 synthetic aperture radar images into maps of five different forest structure variables.
We train and test our model on reference data from 41 airborne laser scanning missions across Norway.
arXiv Detail & Related papers (2021-11-25T16:21:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.