Sub-Meter Tree Height Mapping of California using Aerial Images and
LiDAR-Informed U-Net Model
- URL: http://arxiv.org/abs/2306.01936v1
- Date: Fri, 2 Jun 2023 22:29:58 GMT
- Authors: Fabien H Wagner, Sophia Roberts, Alison L Ritz, Griffin Carter,
Ricardo Dalagnol, Samuel Favrichon, Mayumi CM Hirye, Martin Brandt, Philipe
Ciais and Sassan Saatchi
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Tree canopy height is one of the most important indicators of forest biomass,
productivity, and species diversity, but it is challenging to measure
accurately from the ground and from space. Here, we used a U-Net model adapted
for regression to map the canopy height of all trees in the state of California
with very high-resolution aerial imagery (60 cm) from the USDA-NAIP program.
The U-Net model was trained using canopy height models computed from aerial
LiDAR data as a reference, along with corresponding RGB-NIR NAIP images
collected in 2020. We evaluated the performance of the deep-learning model
using 42 independent 1 km$^2$ sites across various forest types and landscape
variations in California. Our predictions of tree heights exhibited a mean
error of 2.9 m and showed relatively low systematic bias across the entire
range of tree heights present in California. In 2020, trees taller than 5 m
covered ~ 19.3% of California. Our model successfully estimated canopy heights
up to 50 m without saturation, outperforming existing canopy height products
from global models. The approach we used allowed for the reconstruction of the
three-dimensional structure of individual trees as observed from nadir-looking
optical airborne imagery, suggesting a relatively robust estimation and mapping
capability, even in the presence of image distortion. These findings
demonstrate the potential of large-scale mapping and monitoring of tree height,
as well as potential biomass estimation, using NAIP imagery.
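The evaluation pipeline described above can be illustrated with a minimal sketch: a LiDAR-style canopy height model (CHM) is the digital surface model minus the digital terrain model, and predictions are scored by mean absolute error and systematic bias. The grid size, noise level, and all data below are synthetic stand-ins (the abstract's 2.9 m error and 5 m tree threshold are used only as plausible parameters), not the paper's actual data or model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for LiDAR rasters; a real 1 km^2 tile at 60 cm
# resolution would be roughly 1667 x 1667 pixels.
dsm = rng.uniform(100.0, 150.0, size=(64, 64))     # digital surface model (m)
dtm = dsm - rng.uniform(0.0, 50.0, size=(64, 64))  # digital terrain model (m)

# Reference canopy height model: surface elevation minus terrain elevation.
chm_ref = dsm - dtm

# Stand-in for a model prediction: reference heights plus Gaussian noise.
chm_pred = chm_ref + rng.normal(0.0, 2.9, size=chm_ref.shape)

mae = np.mean(np.abs(chm_pred - chm_ref))  # mean absolute error (m)
bias = np.mean(chm_pred - chm_ref)         # systematic bias (m)
tree_cover = np.mean(chm_ref > 5.0)        # fraction of pixels with trees > 5 m

print(f"MAE: {mae:.2f} m, bias: {bias:.2f} m, tree cover: {tree_cover:.1%}")
```

A near-zero bias with a low MAE is the behavior the abstract reports across the full height range; saturation would instead appear as a strongly negative bias for the tallest trees.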
Related papers
- Depth Anything V2 [84.88796880335283]
V2 produces much finer and more robust depth predictions through three key practices.
We replace all labeled real images with synthetic images, scale up the capacity of our teacher model, and teach student models via the bridge of large-scale pseudo-labeled real images.
Benefiting from their strong generalization capability, we fine-tune them with metric depth labels to obtain our metric depth models.
arXiv Detail & Related papers (2024-06-13T17:59:56Z)
- Forecasting with Hyper-Trees [50.72190208487953]
Hyper-Trees are designed to learn the parameters of a target time series model.
By relating the parameters of a target time series model to features, Hyper-Trees address the issue of parameter non-stationarity.
arXiv Detail & Related papers (2024-05-13T15:22:15Z)
- First Mapping the Canopy Height of Primeval Forests in the Tallest Tree Area of Asia [6.826460268652235]
We develop the first canopy height map covering the distribution area of world-level giant trees.
This mapping is crucial for discovering more world-level giant trees, both as individuals and as communities.
arXiv Detail & Related papers (2024-04-23T01:45:55Z)
- Estimation of forest height and biomass from open-access multi-sensor satellite imagery and GEDI Lidar data: high-resolution maps of metropolitan France [0.0]
This study uses a machine learning approach that was previously developed to produce local maps of forest parameters.
We used the GEDI Lidar mission as reference height data, and satellite images from Sentinel-1, Sentinel-2 and ALOS-2 PALSAR-2 to estimate forest height.
The height map is then converted into volume and aboveground biomass (AGB) using allometric equations.
arXiv Detail & Related papers (2023-10-23T07:58:49Z)
- Vision Transformers, a new approach for high-resolution and large-scale mapping of canopy heights [50.52704854147297]
We present a new vision transformer (ViT) model optimized with a classification (discrete) and a continuous loss function.
This model achieves better accuracy than previously used convolutional based approaches (ConvNets) optimized with only a continuous loss function.
arXiv Detail & Related papers (2023-04-22T22:39:03Z)
- Very high resolution canopy height maps from RGB imagery using self-supervised vision transformer and convolutional decoder trained on Aerial Lidar [14.07306593230776]
This paper presents the first high-resolution canopy height map concurrently produced for multiple sub-national jurisdictions.
The maps are generated by the extraction of features from a self-supervised model trained on Maxar imagery from 2017 to 2020.
We also introduce a post-processing step using a convolutional network trained on GEDI observations.
arXiv Detail & Related papers (2023-04-14T15:52:57Z)
- High-resolution canopy height map in the Landes forest (France) based on GEDI, Sentinel-1, and Sentinel-2 data with a deep learning approach [0.044381279572631216]
We develop a deep learning model based on multi-stream remote sensing measurements to create a high-resolution canopy height map.
The model outputs allow us to generate a 10 m resolution canopy height map of the whole "Landes de Gascogne" forest area for 2020.
For all validation datasets in coniferous forests, our model showed better metrics than previous canopy height models available in the same region.
arXiv Detail & Related papers (2022-12-20T14:14:37Z)
- Individual Tree Detection in Large-Scale Urban Environments using High-Resolution Multispectral Imagery [1.1661668662828382]
We introduce a novel deep learning method for detection of individual trees in urban environments.
We use a convolutional neural network to regress a confidence map indicating the locations of individual trees.
Our method provides complete spatial coverage by detecting trees in both public and private spaces.
arXiv Detail & Related papers (2022-08-22T21:26:57Z)
- Country-wide Retrieval of Forest Structure From Optical and SAR Satellite Imagery With Bayesian Deep Learning [74.94436509364554]
We propose a Bayesian deep learning approach to densely estimate forest structure variables at country-scale with 10-meter resolution.
Our method jointly transforms Sentinel-2 optical images and Sentinel-1 synthetic aperture radar images into maps of five different forest structure variables.
We train and test our model on reference data from 41 airborne laser scanning missions across Norway.
arXiv Detail & Related papers (2021-11-25T16:21:28Z)
- A Multi-Stage model based on YOLOv3 for defect detection in PV panels based on IR and Visible Imaging by Unmanned Aerial Vehicle [65.99880594435643]
We propose a novel model to detect panel defects in aerial images captured by an unmanned aerial vehicle.
The model combines detections of panels and defects to refine its accuracy.
The proposed model has been validated on two big PV plants in the south of Italy.
arXiv Detail & Related papers (2021-11-23T08:04:32Z)
- Potato Crop Stress Identification in Aerial Images using Deep Learning-based Object Detection [60.83360138070649]
The paper presents an approach for analyzing aerial images of a potato crop using deep neural networks.
The main objective is to demonstrate automated spatial recognition of a healthy versus stressed crop at a plant level.
Experimental validation demonstrated the ability to distinguish healthy and stressed plants in field images, achieving an average Dice coefficient of 0.74.
arXiv Detail & Related papers (2021-06-14T21:57:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.