High-resolution canopy height map in the Landes forest (France) based on
GEDI, Sentinel-1, and Sentinel-2 data with a deep learning approach
- URL: http://arxiv.org/abs/2212.10265v1
- Date: Tue, 20 Dec 2022 14:14:37 GMT
- Authors: Martin Schwartz, Philippe Ciais, Catherine Ottlé, Aurelien De
Truchis, Cedric Vega, Ibrahim Fayad, Martin Brandt, Rasmus Fensholt, Nicolas
Baghdadi, François Morneau, David Morin, Dominique Guyon, Sylvia Dayau,
Jean-Pierre Wigneron
- Abstract summary: We develop a deep learning model based on multi-stream remote sensing measurements to create a high-resolution canopy height map.
The model outputs allow us to generate a 10 m resolution canopy height map of the whole "Landes de Gascogne" forest area for 2020.
For all validation datasets in coniferous forests, our model showed better metrics than previous canopy height models available in the same region.
- Score: 0.044381279572631216
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In intensively managed forests in Europe, where forests are divided into
stands of small size and may show heterogeneity within stands, a high spatial
resolution (10-20 m) is arguably needed to capture the differences in
canopy height. In this work, we developed a deep learning model based on
multi-stream remote sensing measurements to create a high-resolution canopy
height map over the "Landes de Gascogne" forest in France, a large maritime
pine plantation of 13,000 km$^2$ with flat terrain and intensive management.
This area is characterized by even-aged, mono-specific stands, typically a
few hundred meters in extent, harvested every 35 to 50 years. Our deep
learning U-Net model uses multi-band images from Sentinel-1 and Sentinel-2 with
composite time averages as input to predict tree height derived from GEDI
waveforms. The evaluation is performed with external validation data from
forest inventory plots and a stereo 3D reconstruction model based on SkySat
imagery available at specific locations. We trained seven different U-Net
models based on a combination of Sentinel-1 and Sentinel-2 bands to evaluate
the importance of each instrument in the dominant height retrieval. The model
outputs allow us to generate a 10 m resolution canopy height map of the whole
"Landes de Gascogne" forest area for 2020 with a mean absolute error of 2.02 m
on the test dataset. The best predictions were obtained using all available
satellite layers from Sentinel-1 and Sentinel-2 but using only one satellite
source also provided good predictions. For all validation datasets in
coniferous forests, our model showed better metrics than previous canopy height
models available in the same region.
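The abstract describes the method only at a high level. As a hedged illustration of the setup it outlines (a U-Net regressing GEDI-derived heights from stacked Sentinel-1/Sentinel-2 composites, evaluated with MAE), a minimal PyTorch sketch follows; the channel count, network depth, and dense random targets are assumptions, since GEDI in practice supplies sparse footprint-level labels.

    # Minimal U-Net-style regression sketch: multi-band S1/S2 input, per-pixel
    # height output, MAE metric. Sizes are illustrative, not the paper's config.
    import torch
    import torch.nn as nn

    def conv_block(in_ch, out_ch):
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    class TinyUNet(nn.Module):
        def __init__(self, in_ch=12):  # e.g. 2 S1 + 10 S2 bands (assumption)
            super().__init__()
            self.enc1 = conv_block(in_ch, 32)
            self.enc2 = conv_block(32, 64)
            self.pool = nn.MaxPool2d(2)
            self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
            self.dec1 = conv_block(64, 32)       # takes skip-connected features
            self.head = nn.Conv2d(32, 1, 1)      # one channel: height in meters

        def forward(self, x):
            e1 = self.enc1(x)
            e2 = self.enc2(self.pool(e1))
            d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
            return self.head(d1)

    model = TinyUNet()
    x = torch.randn(2, 12, 64, 64)               # two 64x64 multi-band patches
    y = torch.rand(2, 1, 64, 64) * 30.0          # fake heights in 0-30 m
    mae = (model(x) - y).abs().mean()            # the paper's reported metric
    print(f"MAE: {mae.item():.2f} m")

In a real training loop, the loss would be restricted to pixels holding a GEDI footprint, and the seven models mentioned above would differ only in which input bands are stacked.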
Related papers
- Depth Anything V2 [84.88796880335283]
V2 produces much finer and more robust depth predictions through three key practices.
We replace all labeled real images with synthetic images, scale up the capacity of our teacher model, and teach student models via the bridge of large-scale pseudo-labeled real images.
Benefiting from their strong generalization capability, we fine-tune them with metric depth labels to obtain our metric depth models.
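As a rough sketch of the teacher-student pseudo-labeling practice this summary describes, the loop below has a placeholder teacher label unlabeled images for a smaller student; both networks and the data are stand-ins, not the Depth Anything V2 models.

    # Teacher labels unlabeled real images; student trains on the pseudo-labels.
    import torch
    import torch.nn as nn

    teacher = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                            nn.Conv2d(16, 1, 1))   # placeholder depth network
    student = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                            nn.Conv2d(8, 1, 1))    # smaller placeholder network
    unlabeled_real = [torch.randn(1, 3, 32, 32) for _ in range(4)]  # fake data
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)

    for img in unlabeled_real:
        with torch.no_grad():
            pseudo_depth = teacher(img)            # pseudo-label generation
        loss = (student(img) - pseudo_depth).abs().mean()
        opt.zero_grad(); loss.backward(); opt.step()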
arXiv Detail & Related papers (2024-06-13T17:59:56Z)
- First Mapping the Canopy Height of Primeval Forests in the Tallest Tree Area of Asia [6.826460268652235]
We developed the first canopy height map of the distribution area of world-class giant trees.
This mapping is crucial for discovering more such giant trees, as individuals and as communities.
arXiv Detail & Related papers (2024-04-23T01:45:55Z)
- Multiview Aerial Visual Recognition (MAVREC): Can Multi-view Improve Aerial Visual Perception? [57.77643186237265]
We present Multiview Aerial Visual RECognition or MAVREC, a video dataset where we record synchronized scenes from different perspectives.
MAVREC consists of around 2.5 hours of industry-standard 2.7K resolution video sequences, more than 0.5 million frames, and 1.1 million annotated bounding boxes.
This makes MAVREC the largest ground and aerial-view dataset, and the fourth largest among all drone-based datasets.
arXiv Detail & Related papers (2023-12-07T18:59:14Z)
- Estimation of forest height and biomass from open-access multi-sensor satellite imagery and GEDI Lidar data: high-resolution maps of metropolitan France [0.0]
This study uses a machine learning approach that was previously developed to produce local maps of forest parameters.
We used the GEDI Lidar mission as reference height data and satellite images from Sentinel-1, Sentinel-2, and ALOS-2 PALSAR-2 to estimate forest height.
The height map is then converted into volume and aboveground biomass (AGB) using allometric equations, as sketched below.
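A minimal sketch of such an allometric conversion; the power-law form and all coefficients here are made-up placeholders, not the study's fitted equations.

    # Height (m) -> stand volume -> aboveground biomass, via toy allometry.
    def volume_m3_per_ha(height_m, a=0.9, b=1.6):
        return a * height_m ** b                 # power-law shape (assumption)

    def agb_t_per_ha(volume, wood_density=0.45, expansion=1.3):
        # volume to biomass via wood density and a biomass expansion factor
        return volume * wood_density * expansion

    print(agb_t_per_ha(volume_m3_per_ha(20.0)))  # AGB for a 20 m tall stand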
arXiv Detail & Related papers (2023-10-23T07:58:49Z)
- Accuracy and Consistency of Space-based Vegetation Height Maps for Forest Dynamics in Alpine Terrain [18.23260742076316]
The Swiss National Forest Inventory (NFI) provides countrywide vegetation height maps at a spatial resolution of 0.5 m.
Their update frequency can be improved by using spaceborne remote sensing and deep learning to generate large-scale vegetation height maps.
We generate annual, countrywide vegetation height maps at a 10-meter ground sampling distance for the years 2017 to 2020 based on Sentinel-2 satellite imagery.
arXiv Detail & Related papers (2023-09-04T20:23:57Z)
- Semi-supervised Learning from Street-View Images and OpenStreetMap for Automatic Building Height Estimation [59.6553058160943]
We propose a semi-supervised learning (SSL) method of automatically estimating building height from Mapillary SVI and OpenStreetMap data.
The proposed method yields a clear performance boost, estimating building heights with a mean absolute error (MAE) of about 2.1 meters.
The preliminary result is promising and motivates future work on scaling up the method with low-cost VGI data.
arXiv Detail & Related papers (2023-07-05T18:16:30Z)
- Sub-Meter Tree Height Mapping of California using Aerial Images and LiDAR-Informed U-Net Model [0.0]
Tree canopy height is one of the most important indicators of forest biomass, productivity, and species diversity.
Here, we used a U-Net model adapted for regression to map the canopy height of all trees in the state of California with very high-resolution aerial imagery.
Our model successfully estimated canopy heights up to 50 m without saturation, outperforming existing canopy height products from global models.
arXiv Detail & Related papers (2023-06-02T22:29:58Z)
- Vision Transformers, a new approach for high-resolution and large-scale mapping of canopy heights [50.52704854147297]
We present a new vision transformer (ViT) model optimized with a classification (discrete) and a continuous loss function.
This model achieves better accuracy than previously used convolution-based approaches (ConvNets) optimized with only a continuous loss function; a sketch of such a hybrid objective follows.
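Bin width, loss weighting, and tensor shapes here are assumptions for illustration, not the paper's values.

    # Hybrid loss: cross-entropy over discretized height bins + L1 regression.
    import torch
    import torch.nn.functional as F

    bins = torch.arange(0.0, 55.0, 5.0)            # 5 m height bins (assumed)

    def hybrid_loss(logits, reg_pred, target, w=0.5):
        # logits: (N, num_bins); reg_pred, target: (N,) heights in meters
        target_bin = torch.bucketize(target, bins).clamp(max=len(bins) - 1)
        ce = F.cross_entropy(logits, target_bin)   # discrete (classification)
        l1 = F.l1_loss(reg_pred, target)           # continuous (regression)
        return w * ce + (1 - w) * l1

    logits = torch.randn(8, len(bins))
    reg_pred, target = torch.rand(8) * 40.0, torch.rand(8) * 40.0
    print(hybrid_loss(logits, reg_pred, target))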
arXiv Detail & Related papers (2023-04-22T22:39:03Z)
- Very high resolution canopy height maps from RGB imagery using self-supervised vision transformer and convolutional decoder trained on Aerial Lidar [14.07306593230776]
This paper presents the first high-resolution canopy height map concurrently produced for multiple sub-national jurisdictions.
The maps are generated by the extraction of features from a self-supervised model trained on Maxar imagery from 2017 to 2020.
We also introduce a post-processing step using a convolutional network trained on GEDI observations.
arXiv Detail & Related papers (2023-04-14T15:52:57Z)
- Country-wide Retrieval of Forest Structure From Optical and SAR Satellite Imagery With Bayesian Deep Learning [74.94436509364554]
We propose a Bayesian deep learning approach to densely estimate forest structure variables at country-scale with 10-meter resolution.
Our method jointly transforms Sentinel-2 optical images and Sentinel-1 synthetic aperture radar images into maps of five different forest structure variables.
We train and test our model on reference data from 41 airborne laser scanning missions across Norway.
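One common ingredient of such Bayesian deep-learning estimators is a heteroscedastic head that predicts a per-pixel mean and variance and is trained with a Gaussian negative log-likelihood, so the output map carries uncertainty. The sketch below is a generic illustration of that idea, not the paper's exact architecture.

    # Predict mean and log-variance per pixel; train with Gaussian NLL.
    import torch
    import torch.nn as nn

    head = nn.Conv2d(16, 2, 1)                 # channels: [mean, log-variance]
    features = torch.randn(2, 16, 32, 32)      # placeholder decoder features
    target = torch.rand(2, 1, 32, 32) * 30.0   # fake height targets (m)

    out = head(features)
    mean, log_var = out[:, :1], out[:, 1:]
    nll = 0.5 * (log_var + (target - mean) ** 2 / log_var.exp()).mean()
    print(nll.item())                          # variance term doubles as a
                                               # per-pixel predictive uncertainty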
arXiv Detail & Related papers (2021-11-25T16:21:28Z)
- SODA10M: Towards Large-Scale Object Detection Benchmark for Autonomous Driving [94.11868795445798]
We release a Large-Scale Object Detection benchmark for Autonomous driving, named SODA10M, containing 10 million unlabeled images and 20K images labeled with 6 representative object categories.
To improve diversity, the images are sampled at one frame every ten seconds across 32 different cities under different weather conditions, periods, and location scenes.
We provide extensive experiments and deep analyses of existing supervised state-of-the-art detection models, popular self-supervised and semi-supervised approaches, and some insights about how to develop future models.
arXiv Detail & Related papers (2021-06-21T13:55:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.