Estimating optical vegetation indices and biophysical variables for temperate forests with Sentinel-1 SAR data using machine learning techniques: A case study for Czechia
- URL: http://arxiv.org/abs/2311.07537v2
- Date: Tue, 27 Aug 2024 14:34:26 GMT
- Title: Estimating optical vegetation indices and biophysical variables for temperate forests with Sentinel-1 SAR data using machine learning techniques: A case study for Czechia
- Authors: Daniel Paluba, Bertrand Le Saux, Přemysl Stych,
- Abstract summary: Current optical vegetation indices (VIs) for monitoring forest ecosystems are well established and widely used in various applications.
In contrast, synthetic aperture radar (SAR) data can offer insightful and systematic forest monitoring with complete time series (TS) due to signal penetration through clouds and day and night image acquisitions.
This study aims to address the limitations of optical satellite data by using SAR data as an alternative for estimating optical VIs for forests through machine learning (ML)
In general, up to 240 measurements per year and a spatial resolution of 20 m can be achieved using estimated SAR-based VIs with high accuracy.
- Score: 32.19783248549554
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Current optical vegetation indices (VIs) for monitoring forest ecosystems are well established and widely used in various applications, but can be limited by atmospheric effects such as clouds. In contrast, synthetic aperture radar (SAR) data can offer insightful and systematic forest monitoring with complete time series (TS) due to signal penetration through clouds and day and night image acquisitions. This study aims to address the limitations of optical satellite data by using SAR data as an alternative for estimating optical VIs for forests through machine learning (ML). While this approach is less direct and likely only feasible through the power of ML, it raises the scientific question of whether enough relevant information is contained in the SAR signal to accurately estimate VIs. This work covers the estimation of TS of four VIs (LAI, FAPAR, EVI and NDVI) using multitemporal Sentinel-1 SAR and ancillary data. The study focused on both healthy and disturbed temperate forest areas in Czechia for the year 2021, while ground truth labels were generated from Sentinel-2 multispectral data. This was enabled by creating a paired multi-modal TS dataset in Google Earth Engine (GEE), including temporally and spatially aligned Sentinel-1, Sentinel-2, DEM, weather and land cover datasets. The inclusion of DEM-derived auxiliary features and additional meteorological information further improved the results. In the comparison of ML models, the traditional ML algorithms RFR and XGBoost slightly outperformed the AutoML approach, auto-sklearn, for all VIs, achieving high accuracies ($R^2$ between 70-86%) and low errors (MAE of 0.055-0.29). In general, up to 240 measurements per year and a spatial resolution of 20 m can be achieved using estimated SAR-based VIs with high accuracy. A great advantage of the SAR-based VIs is the ability to detect abrupt forest changes with sub-weekly temporal accuracy.
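The core recipe can be sketched in a few lines: regress an optical VI (here NDVI) on Sentinel-1 backscatter plus DEM- and weather-derived features with a Random Forest. The sketch below is a minimal scikit-learn illustration with synthetic placeholder features and labels; it is not the authors' GEE-exported dataset or exact model configuration.

```python
# Minimal sketch: estimate an optical VI (e.g. NDVI) from Sentinel-1 SAR and
# ancillary features with Random Forest regression. All data are synthetic
# placeholders standing in for a paired S1/S2 training set.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error

rng = np.random.default_rng(42)
n = 5000
X = np.column_stack([
    rng.normal(-9, 2, n),        # VV backscatter [dB]
    rng.normal(-15, 2, n),       # VH backscatter [dB]
    rng.normal(6, 1, n),         # VV-VH difference [dB]
    rng.uniform(200, 1200, n),   # elevation [m]
    rng.uniform(0, 30, n),       # slope [deg]
    rng.uniform(-5, 25, n),      # air temperature [deg C]
])
# Synthetic NDVI-like target, for demonstration only.
y = np.clip(0.5 + 0.02 * (X[:, 1] + 15) + 0.01 * X[:, 5]
            + rng.normal(0, 0.05, n), -1, 1)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=300, n_jobs=-1, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print(f"R2:  {r2_score(y_test, pred):.3f}")
print(f"MAE: {mean_absolute_error(y_test, pred):.3f}")
```

In practice, the features would come from temporally and spatially aligned Sentinel-1 acquisitions, a DEM and meteorological records, with Sentinel-2-derived VI values as labels, as described in the abstract.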
Related papers
- Comparing remote sensing-based forest biomass mapping approaches using new forest inventory plots in contrasting forests in northeastern and southwestern China [6.90293949599626]
Large-scale high spatial resolution aboveground biomass (AGB) maps play a crucial role in determining forest carbon stocks and how they are changing.
GEDI is a sampling instrument, collecting dispersed footprints, and its data must be combined with that from other continuous cover satellites to create high-resolution maps.
We developed local models to estimate forest AGB from GEDI L2A data, as the models used to create GEDI L4 AGB data incorporated minimal field data from China.
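As a hedged illustration of what such a local model can look like, the sketch below fits a conventional power-law relation between a GEDI relative-height metric (RH98) and plot-level AGB via log-log regression; the values are synthetic placeholders, not the study's inventory data.

```python
# Illustrative local AGB model: AGB = a * RH98^b, fitted as a log-log linear
# regression. RH98 and plot AGB values are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
rh98 = rng.uniform(5, 35, 200)                          # canopy top height proxy [m]
agb = 2.0 * rh98 ** 1.6 * rng.lognormal(0.0, 0.2, 200)  # fake plot AGB [Mg/ha]

b, log_a = np.polyfit(np.log(rh98), np.log(agb), 1)     # slope, intercept
a = np.exp(log_a)
print(f"AGB ~ {a:.2f} * RH98^{b:.2f}")

# Apply the fitted relation to new GEDI footprints.
print(a * np.array([12.0, 24.0, 30.0]) ** b)
```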
arXiv Detail & Related papers (2024-05-24T11:10:58Z)
- SARDet-100K: Towards Open-Source Benchmark and ToolKit for Large-Scale SAR Object Detection [79.23689506129733]
We establish a new benchmark dataset and an open-source method for large-scale SAR object detection.
Our dataset, SARDet-100K, is a result of intense surveying, collecting, and standardizing 10 existing SAR detection datasets.
To the best of our knowledge, SARDet-100K is the first COCO-level large-scale multi-class SAR object detection dataset ever created.
arXiv Detail & Related papers (2024-03-11T09:20:40Z)
- Creating and Leveraging a Synthetic Dataset of Cloud Optical Thickness Measures for Cloud Detection in MSI [3.4764766275808583]
Cloud formations often obscure optical satellite-based monitoring of the Earth's surface.
We propose a novel synthetic dataset for cloud optical thickness estimation.
We leverage it for obtaining reliable and versatile cloud masks on real data.
arXiv Detail & Related papers (2023-11-23T14:28:28Z)
- Combining multitemporal optical and SAR data for LAI imputation with BiLSTM network [0.0]
Leaf Area Index (LAI) is vital for predicting winter wheat yield. Acquisition of crop conditions via Sentinel-2 remote sensing images can be hindered by persistent clouds, affecting yield predictions.
This study evaluates the use of time series Sentinel-1 VH/VV for LAI imputation, aiming to increase spatial-temporal density.
We utilize a bidirectional LSTM (BiLSTM) network to impute LAI time series and use half the mean squared error at each time step as the loss function.
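For clarity, a rough sketch of that setup is given below: a BiLSTM emitting one LAI value per time step, trained with half the mean squared error at each step. Sequence length, features and hyperparameters are assumptions for illustration, not the paper's configuration.

```python
# Sketch: BiLSTM mapping a Sentinel-1 VH/VV time series to per-time-step LAI,
# trained with half the mean squared error. Shapes and values are placeholders.
import torch
import torch.nn as nn

class BiLSTMLAI(nn.Module):
    def __init__(self, n_features=2, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)        # one LAI value per time step

    def forward(self, x):                           # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out).squeeze(-1)           # (batch, time)

model = BiLSTMLAI()
x = torch.randn(8, 36, 2)                           # e.g. 36 acquisitions of [VH, VV]
target = torch.rand(8, 36) * 6.0                    # placeholder LAI labels

pred = model(x)
loss = 0.5 * ((pred - target) ** 2).mean()          # half MSE over all time steps
loss.backward()
print(float(loss))
```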
arXiv Detail & Related papers (2023-07-14T15:59:19Z)
- Imbalanced Aircraft Data Anomaly Detection [103.01418862972564]
Anomaly detection in temporal data from sensors under aviation scenarios is a practical but challenging task.
We propose a Graphical Temporal Data Analysis framework.
It consists of three modules: Series-to-Image (S2I), Cluster-based Resampling Approach using Euclidean Distance (CRD), and Variance-Based Loss (VBL).
arXiv Detail & Related papers (2023-05-17T09:37:07Z)
- Vision Transformers, a new approach for high-resolution and large-scale mapping of canopy heights [50.52704854147297]
We present a new vision transformer (ViT) model optimized with a classification (discrete) and a continuous loss function.
This model achieves better accuracy than previously used convolutional approaches (ConvNets) optimized with only a continuous loss function.
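A small illustrative sketch of such a combined objective follows: cross-entropy over discretized height bins plus a continuous regression term. The bin width, loss weighting and tensors are assumptions for demonstration, not the paper's settings.

```python
# Sketch of a combined discrete + continuous loss for canopy-height estimation:
# a classification term over 5 m height bins plus an L1 regression term.
import torch
import torch.nn.functional as F

bin_edges = torch.arange(0.0, 55.0, 5.0)             # assumed 5 m height bins

def combined_loss(height_logits, height_pred, height_true, alpha=0.5):
    """height_logits: (N, n_bins), height_pred/height_true: (N,) in metres."""
    bins = torch.bucketize(height_true, bin_edges).clamp(max=height_logits.shape[1] - 1)
    cls = F.cross_entropy(height_logits, bins)        # discrete (classification) term
    reg = F.l1_loss(height_pred, height_true)         # continuous (regression) term
    return alpha * cls + (1.0 - alpha) * reg

logits = torch.randn(16, len(bin_edges) + 1)          # one logit per bin
pred = torch.rand(16) * 50.0                          # predicted heights
true = torch.rand(16) * 50.0                          # reference heights
print(float(combined_loss(logits, pred, true)))
```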
arXiv Detail & Related papers (2023-04-22T22:39:03Z)
- Country-wide Retrieval of Forest Structure From Optical and SAR Satellite Imagery With Bayesian Deep Learning [74.94436509364554]
We propose a Bayesian deep learning approach to densely estimate forest structure variables at country-scale with 10-meter resolution.
Our method jointly transforms Sentinel-2 optical images and Sentinel-1 synthetic aperture radar images into maps of five different forest structure variables.
We train and test our model on reference data from 41 airborne laser scanning missions across Norway.
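As a rough, hypothetical sketch of how per-pixel uncertainty can be attached to such multi-variable regression, the snippet below uses a small network with a Gaussian likelihood head (a mean and a variance for each of five variables) trained with a negative log-likelihood; the architecture and shapes are illustrative, not the authors' model.

```python
# Sketch of heteroscedastic uncertainty for multi-variable regression: the
# network predicts a mean and variance per forest structure variable and is
# trained with a Gaussian negative log-likelihood. Shapes are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

n_features, n_vars = 12, 5    # e.g. stacked S1/S2 pixel features -> 5 variables

net = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, 2 * n_vars))
nll = nn.GaussianNLLLoss()

x = torch.randn(32, n_features)   # placeholder pixel features
y = torch.randn(32, n_vars)       # placeholder reference (ALS-derived) targets

out = net(x)
mean = out[:, :n_vars]
var = F.softplus(out[:, n_vars:]) + 1e-6   # keep variance positive
loss = nll(mean, y, var)
loss.backward()
print(float(loss))
```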
arXiv Detail & Related papers (2021-11-25T16:21:28Z)
- Cycle and Semantic Consistent Adversarial Domain Adaptation for Reducing Simulation-to-Real Domain Shift in LiDAR Bird's Eye View [110.83289076967895]
We present a BEV domain adaptation method based on CycleGAN that uses prior semantic classification in order to preserve the information of small objects of interest during the domain adaptation process.
The quality of the generated BEVs has been evaluated using a state-of-the-art 3D object detection framework on the KITTI 3D Object Detection Benchmark.
arXiv Detail & Related papers (2021-04-22T12:47:37Z)
- Fusing Optical and SAR time series for LAI gap filling with multioutput Gaussian processes [6.0122901245834015]
Persistent clouds over agricultural fields can mask key stages of crop growth, leading to unreliable yield predictions.
Synthetic Aperture Radar (SAR) provides all-weather imagery which can potentially overcome this limitation.
We propose the use of Multi-Output Gaussian Process (MOGP) regression, a machine learning technique that automatically learns the statistical relationships among multi-sensor time series.
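A full coregionalized MOGP needs a dedicated library, but the core idea can be approximated with scikit-learn by appending the output index as an extra input dimension, so one GP learns correlations between a dense SAR series and a sparsely observed LAI series. The sketch below uses synthetic series and is a simplified stand-in, not the paper's MOGP.

```python
# Lightweight stand-in for the MOGP idea: the output index (0 = SAR descriptor,
# 1 = optical LAI) is appended as an extra input dimension so a single GP can
# exploit cross-series correlations. All series below are synthetic placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 40)                              # one growing season, scaled
sar = np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=40)    # dense SAR series
lai = 3.0 + 2.5 * np.sin(2 * np.pi * t)                    # "true" LAI
clear = np.arange(0, 40, 8)                                # cloud-free LAI dates only

X = np.vstack([
    np.column_stack([t, np.zeros_like(t)]),                # (time, output=0)
    np.column_stack([t[clear], np.ones_like(t[clear])]),   # (time, output=1)
])
y = np.concatenate([sar, lai[clear]])

gp = GaussianProcessRegressor(RBF([0.1, 1.0]) + WhiteKernel(1e-2), normalize_y=True)
gp.fit(X, y)

# Gap-fill LAI for every date, including the cloudy ones.
lai_filled, lai_std = gp.predict(np.column_stack([t, np.ones_like(t)]), return_std=True)
print(lai_filled[:3], lai_std[:3])
```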
arXiv Detail & Related papers (2020-12-05T10:36:45Z)
- SpaceNet 6: Multi-Sensor All Weather Mapping Dataset [13.715388432549373]
We present an open Multi-Sensor All Weather Mapping (MSAW) dataset and challenge.
MSAW covers 120 km2 over multiple overlapping collects and is annotated with over 48,000 unique building footprint labels.
We present a baseline and benchmark for building footprint extraction with SAR data and find that state-of-the-art segmentation models pre-trained on optical data and then trained on SAR outperform those trained on SAR data alone.
arXiv Detail & Related papers (2020-04-14T13:43:11Z)