Sub-Meter Tree Height Mapping of California using Aerial Images and
LiDAR-Informed U-Net Model
- URL: http://arxiv.org/abs/2306.01936v1
- Date: Fri, 2 Jun 2023 22:29:58 GMT
- Title: Sub-Meter Tree Height Mapping of California using Aerial Images and
LiDAR-Informed U-Net Model
- Authors: Fabien H Wagner, Sophia Roberts, Alison L Ritz, Griffin Carter,
Ricardo Dalagnol, Samuel Favrichon, Mayumi CM Hirye, Martin Brandt, Philipe
Ciais and Sassan Saatchi
- Abstract summary: Tree canopy height is one of the most important indicators of forest biomass, productivity, and species diversity.
Here, we used a U-Net model adapted for regression to map the canopy height of all trees in the state of California with very high-resolution aerial imagery.
Our model successfully estimated canopy heights up to 50 m without saturation, outperforming existing canopy height products from global models.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Tree canopy height is one of the most important indicators of forest biomass,
productivity, and species diversity, but it is challenging to measure
accurately from the ground and from space. Here, we used a U-Net model adapted
for regression to map the canopy height of all trees in the state of California
with very high-resolution aerial imagery (60 cm) from the USDA-NAIP program.
The U-Net model was trained using canopy height models computed from aerial
LiDAR data as a reference, along with corresponding RGB-NIR NAIP images
collected in 2020. We evaluated the performance of the deep-learning model
using 42 independent 1 km$^2$ sites across various forest types and landscape
variations in California. Our predictions of tree heights exhibited a mean
error of 2.9 m and showed relatively low systematic bias across the entire
range of tree heights present in California. In 2020, trees taller than 5 m
covered ~ 19.3% of California. Our model successfully estimated canopy heights
up to 50 m without saturation, outperforming existing canopy height products
from global models. The approach we used allowed for the reconstruction of the
three-dimensional structure of individual trees as observed from nadir-looking
optical airborne imagery, suggesting a relatively robust estimation and mapping
capability, even in the presence of image distortion. These findings
demonstrate the potential of large-scale mapping and monitoring of tree height,
as well as potential biomass estimation, using NAIP imagery.
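The evaluation described above boils down to two per-site statistics: the mean absolute error of predicted heights against the aerial-LiDAR reference, and the systematic bias across the height range. A minimal sketch of that comparison, in plain Python with illustrative variable names and toy values (none of them are from the paper's data), might look like:

```python
# Hypothetical sketch: comparing predicted canopy heights against
# aerial-LiDAR reference heights for one validation site, reporting
# the two statistics the abstract quotes (mean error and bias).
# All names and sample values are illustrative, not from the paper.

def evaluate_heights(predicted, reference):
    """Return (mean_absolute_error, bias) in the same units as the inputs."""
    if len(predicted) != len(reference) or not predicted:
        raise ValueError("inputs must be non-empty and equal length")
    errors = [p - r for p, r in zip(predicted, reference)]
    mae = sum(abs(e) for e in errors) / len(errors)
    bias = sum(errors) / len(errors)  # positive -> systematic over-estimation
    return mae, bias

# Toy example: five pixels from one 1 km^2 validation site (heights in m)
pred = [12.1, 33.4, 7.8, 48.9, 21.0]
lidar = [10.5, 35.0, 8.2, 50.0, 20.1]
mae, bias = evaluate_heights(pred, lidar)
print(f"MAE = {mae:.2f} m, bias = {bias:+.2f} m")  # MAE = 1.12 m, bias = -0.12 m
```

In the paper this comparison is run over 42 independent 1 km$^2$ sites; a low bias across the full height range is what distinguishes the model from products that saturate on tall trees.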
Related papers
- Depth Anything V2 [84.88796880335283]
V2 produces much finer and more robust depth predictions through three key practices.
We replace all labeled real images with synthetic images, scale up the capacity of our teacher model, and teach student models via the bridge of large-scale pseudo-labeled real images.
Benefiting from their strong generalization capability, we fine-tune them with metric depth labels to obtain our metric depth models.
arXiv Detail & Related papers (2024-06-13T17:59:56Z)
- Forecasting with Hyper-Trees [50.72190208487953]
Hyper-Trees are designed to learn the parameters of time series models.
By relating the parameters of a target time series model to features, Hyper-Trees also address the issue of parameter non-stationarity.
In this novel approach, the trees first generate informative representations from the input features, which a shallow network then maps to the target model parameters.
arXiv Detail & Related papers (2024-05-13T15:22:15Z)
- First Mapping the Canopy Height of Primeval Forests in the Tallest Tree Area of Asia [6.826460268652235]
We have developed the world's first canopy height map of the distribution area of world-level giant trees.
This mapping is crucial for discovering more individual and community world-level giant trees.
arXiv Detail & Related papers (2024-04-23T01:45:55Z)
- Individual mapping of large polymorphic shrubs in high mountains using satellite images and deep learning [1.6889377382676625]
We release a large dataset of individual shrub delineations on freely available satellite imagery.
We use an instance segmentation model to map all junipers over the treeline for an entire biosphere reserve.
Our model achieved an F1-score in shrub delineation of 87.87% on the PI data and 76.86% on the FW data.
arXiv Detail & Related papers (2024-01-31T16:44:20Z)
- Vision Transformers, a new approach for high-resolution and large-scale mapping of canopy heights [50.52704854147297]
We present a new vision transformer (ViT) model optimized with a classification (discrete) and a continuous loss function.
This model achieves better accuracy than previously used convolutional based approaches (ConvNets) optimized with only a continuous loss function.
arXiv Detail & Related papers (2023-04-22T22:39:03Z)
- Very high resolution canopy height maps from RGB imagery using self-supervised vision transformer and convolutional decoder trained on Aerial Lidar [14.07306593230776]
This paper presents the first high-resolution canopy height map concurrently produced for multiple sub-national jurisdictions.
The maps are generated by the extraction of features from a self-supervised model trained on Maxar imagery from 2017 to 2020.
We also introduce a post-processing step using a convolutional network trained on GEDI observations.
arXiv Detail & Related papers (2023-04-14T15:52:57Z)
- High-resolution canopy height map in the Landes forest (France) based on GEDI, Sentinel-1, and Sentinel-2 data with a deep learning approach [0.044381279572631216]
We develop a deep learning model based on multi-stream remote sensing measurements to create a high-resolution canopy height map.
The model outputs allow us to generate a 10 m resolution canopy height map of the whole "Landes de Gascogne" forest area for 2020.
For all validation datasets in coniferous forests, our model showed better metrics than previous canopy height models available in the same region.
arXiv Detail & Related papers (2022-12-20T14:14:37Z)
- Individual Tree Detection in Large-Scale Urban Environments using High-Resolution Multispectral Imagery [1.1661668662828382]
We introduce a novel deep learning method for detection of individual trees in urban environments.
We use a convolutional neural network to regress a confidence map indicating the locations of individual trees.
Our method provides complete spatial coverage by detecting trees in both public and private spaces.
arXiv Detail & Related papers (2022-08-22T21:26:57Z)
- Country-wide Retrieval of Forest Structure From Optical and SAR Satellite Imagery With Bayesian Deep Learning [74.94436509364554]
We propose a Bayesian deep learning approach to densely estimate forest structure variables at country-scale with 10-meter resolution.
Our method jointly transforms Sentinel-2 optical images and Sentinel-1 synthetic aperture radar images into maps of five different forest structure variables.
We train and test our model on reference data from 41 airborne laser scanning missions across Norway.
arXiv Detail & Related papers (2021-11-25T16:21:28Z)
- A Multi-Stage model based on YOLOv3 for defect detection in PV panels based on IR and Visible Imaging by Unmanned Aerial Vehicle [65.99880594435643]
We propose a novel model to detect panel defects on aerial images captured by unmanned aerial vehicle.
The model combines detections of panels and defects to refine its accuracy.
The proposed model has been validated on two big PV plants in the south of Italy.
arXiv Detail & Related papers (2021-11-23T08:04:32Z)
- Generating Physically-Consistent Satellite Imagery for Climate Visualizations [53.61991820941501]
We train a generative adversarial network to create synthetic satellite imagery of future flooding and reforestation events.
A pure deep learning-based model can generate flood visualizations but hallucinates floods at locations that were not susceptible to flooding.
We publish our code and dataset for segmentation guided image-to-image translation in Earth observation.
arXiv Detail & Related papers (2021-04-10T15:00:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.