3D-SAR Tomography and Machine Learning for High-Resolution Tree Height Estimation
- URL: http://arxiv.org/abs/2409.05636v1
- Date: Mon, 9 Sep 2024 14:07:38 GMT
- Title: 3D-SAR Tomography and Machine Learning for High-Resolution Tree Height Estimation
- Authors: Grace Colverd, Jumpei Takami, Laura Schade, Karol Bot, Joseph A. Gallego-Mejia
- Abstract summary: Tree height, a key factor in biomass calculations, can be measured using Synthetic Aperture Radar (SAR) technology.
This study applies machine learning to extract forest height data from two SAR products.
We use the TomoSense dataset, containing SAR and LiDAR data from Germany's Eifel National Park, to develop and evaluate height estimation models.
- Score: 4.1942958779358674
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Accurately estimating forest biomass is crucial for global carbon cycle modelling and climate change mitigation. Tree height, a key factor in biomass calculations, can be measured using Synthetic Aperture Radar (SAR) technology. This study applies machine learning to extract forest height data from two SAR products: Single Look Complex (SLC) images and tomographic cubes, in preparation for the ESA Biomass Satellite mission. We use the TomoSense dataset, containing SAR and LiDAR data from Germany's Eifel National Park, to develop and evaluate height estimation models. Our approach includes classical methods, deep learning with a 3D U-Net, and Bayesian-optimized techniques. By testing various SAR frequencies and polarimetries, we establish a baseline for future height and biomass modelling. Best-performing models predict forest height to be within 2.82m mean absolute error for canopies around 30m, advancing our ability to measure global carbon stocks and support climate action.
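The abstract benchmarks classical height-estimation methods against learned ones and reports a 2.82 m mean absolute error. As a rough illustration of what such a baseline and metric involve, the sketch below picks the canopy top from a tomographic vertical backscatter profile by thresholding, and computes MAE against LiDAR reference heights. The function names, threshold, and sample values are illustrative assumptions, not the paper's actual implementation.

```python
def height_from_profile(powers, elevations, threshold):
    """Return the highest elevation bin whose backscatter power
    exceeds the noise threshold, or None if nothing does."""
    top = None
    for power, elev in zip(powers, elevations):
        if power >= threshold and (top is None or elev > top):
            top = elev
    return top

def mean_absolute_error(predicted, reference):
    """MAE between predicted heights and reference heights, in metres."""
    return sum(abs(p - r) for p, r in zip(predicted, reference)) / len(predicted)

# Toy vertical profile: backscatter peaks near the ground and at the canopy top.
elevations = [0, 5, 10, 15, 20, 25, 30, 35]        # metres above ground
powers     = [0.9, 0.4, 0.2, 0.3, 0.5, 0.7, 0.6, 0.1]

canopy_top = height_from_profile(powers, elevations, threshold=0.5)
print(canopy_top)  # 30: the highest bin at or above the 0.5 threshold

print(mean_absolute_error([28.0, 31.5], [30.0, 30.0]))  # (2.0 + 1.5) / 2 = 1.75
```

In practice the thresholding step would run per pixel over the whole tomographic cube, and the MAE would be aggregated over the LiDAR-covered test area.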
Related papers
- Multi-modal classification of forest biodiversity potential from 2D orthophotos and 3D airborne laser scanning point clouds [47.679877727066206]
This study investigates whether deep learning-based fusion of close-range sensing data from 2D orthophotos and 3D airborne laser scanning (ALS) point clouds can enhance biodiversity assessment.
We introduce the BioVista dataset, comprising 44,378 paired samples of orthophotos and ALS point clouds from temperate forests in Denmark.
Using deep neural networks (ResNet for orthophotos and PointResNet for ALS point clouds), we investigate each data modality's ability to assess forest biodiversity potential, achieving mean accuracies of 69.4% and 72.8%, respectively.
arXiv Detail & Related papers (2025-01-03T09:42:25Z) - Tomographic SAR Reconstruction for Forest Height Estimation [4.1942958779358674]
Tree height estimation serves as an important proxy for biomass estimation in ecological and forestry applications.
In this study, we use deep learning to estimate forest canopy height directly from 2D Single Look Complex (SLC) images, a product of Synthetic Aperture Radar (SAR).
Our method attempts to bypass traditional tomographic signal processing, potentially reducing latency from SAR capture to end product.
arXiv Detail & Related papers (2024-12-01T17:37:25Z) - Machine Learning for Methane Detection and Quantification from Space -- A survey [49.7996292123687]
Methane (CH_4) is a potent anthropogenic greenhouse gas, contributing 86 times more to global warming than Carbon Dioxide (CO_2) over 20 years.
This work expands existing information on operational methane point source detection sensors in the Short-Wave Infrared (SWIR) bands.
It reviews the state-of-the-art for traditional as well as Machine Learning (ML) approaches.
arXiv Detail & Related papers (2024-08-27T15:03:20Z) - Unified Deep Learning Model for Global Prediction of Aboveground Biomass, Canopy Height and Cover from High-Resolution, Multi-Sensor Satellite Imagery [0.196629787330046]
We present a new methodology that uses multi-sensor, multi-spectral imagery at 10-meter resolution and a deep learning based model which unifies the prediction of above ground biomass density (AGBD), canopy height (CH), and canopy cover (CC).
The model is trained on millions of globally sampled GEDI-L2/L4 measurements. We validate the capability of our model by deploying it over the entire globe for the year 2023 as well as annually from 2016 to 2023 over selected areas.
arXiv Detail & Related papers (2024-08-20T23:15:41Z) - FLOGA: A machine learning ready dataset, a benchmark and a novel deep
learning model for burnt area mapping with Sentinel-2 [41.28284355136163]
Wildfires pose significant threats to human and animal lives, ecosystems, and socio-economic stability.
In this work, we create and introduce a machine-learning-ready dataset we name FLOGA (Forest wiLdfire Observations for the Greek Area).
This dataset is unique in that it comprises satellite imagery acquired before and after a wildfire event.
We use FLOGA to provide a thorough comparison of multiple Machine Learning and Deep Learning algorithms for the automatic extraction of burnt areas.
arXiv Detail & Related papers (2023-11-06T18:42:05Z) - Estimation of forest height and biomass from open-access multi-sensor
satellite imagery and GEDI Lidar data: high-resolution maps of metropolitan
France [0.0]
This study uses a machine learning approach that was previously developed to produce local maps of forest parameters.
We used the GEDI Lidar mission as reference height data, and the satellite images from Sentinel-1, Sentinel-2 and ALOS-2 PALSAR-2 to estimate forest height.
The height map is then converted into volume and aboveground biomass (AGB) using allometric equations.
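The allometric step above maps estimated canopy height to volume and biomass. A common functional form for such allometries is a power law, AGB = a·H^b. The sketch below illustrates that conversion; the coefficients are placeholders for illustration only, since real values are species- and region-specific and come from calibrated allometric models.

```python
def aboveground_biomass(height_m, a=0.5, b=1.8):
    """Power-law allometry mapping canopy height (m) to AGB.
    Coefficients a and b are illustrative placeholders, not
    calibrated values; units of the result depend on them."""
    return a * height_m ** b

# Applying the allometry to a few heights from a hypothetical height map:
for h in (10.0, 20.0, 30.0):
    print(f"H = {h:4.1f} m -> AGB = {aboveground_biomass(h):8.1f}")
```

In a real pipeline this function would be applied pixel-wise to the height raster, with coefficients chosen per forest type.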
arXiv Detail & Related papers (2023-10-23T07:58:49Z) - Vision Transformers, a new approach for high-resolution and large-scale
mapping of canopy heights [50.52704854147297]
We present a new vision transformer (ViT) model optimized with a classification (discrete) and a continuous loss function.
This model achieves better accuracy than previously used convolution-based approaches (ConvNets) optimized with only a continuous loss function.
arXiv Detail & Related papers (2023-04-22T22:39:03Z) - Very high resolution canopy height maps from RGB imagery using
self-supervised vision transformer and convolutional decoder trained on
Aerial Lidar [14.07306593230776]
This paper presents the first high-resolution canopy height map concurrently produced for multiple sub-national jurisdictions.
The maps are generated by the extraction of features from a self-supervised model trained on Maxar imagery from 2017 to 2020.
We also introduce a post-processing step using a convolutional network trained on GEDI observations.
arXiv Detail & Related papers (2023-04-14T15:52:57Z) - Information fusion approach for biomass estimation in a plateau
mountainous forest using a synergistic system comprising UAS-based digital
camera and LiDAR [9.944631732226657]
The objective of this study was to quantify the aboveground biomass (AGB) of a plateau mountainous forest reserve.
We utilized digital aerial photogrammetry (DAP), which has the unique advantages of speed, high spatial resolution, and low cost.
Based on the CHM and spectral attributes obtained from multispectral images, we estimated and mapped the AGB of the region of interest with considerable cost efficiency.
arXiv Detail & Related papers (2022-04-14T04:04:59Z) - Country-wide Retrieval of Forest Structure From Optical and SAR
Satellite Imagery With Bayesian Deep Learning [74.94436509364554]
We propose a Bayesian deep learning approach to densely estimate forest structure variables at country-scale with 10-meter resolution.
Our method jointly transforms Sentinel-2 optical images and Sentinel-1 synthetic aperture radar images into maps of five different forest structure variables.
We train and test our model on reference data from 41 airborne laser scanning missions across Norway.
arXiv Detail & Related papers (2021-11-25T16:21:28Z) - Estimating Crop Primary Productivity with Sentinel-2 and Landsat 8 using
Machine Learning Methods Trained with Radiative Transfer Simulations [58.17039841385472]
We take advantage of all parallel developments in mechanistic modeling and satellite data availability for advanced monitoring of crop productivity.
Our model successfully estimates gross primary productivity across a variety of C3 crop types and environmental conditions even though it does not use any local information from the corresponding sites.
This highlights its potential to map crop productivity from new satellite sensors at a global scale with the help of current Earth observation cloud computing platforms.
arXiv Detail & Related papers (2020-12-07T16:23:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.