Estimating optical vegetation indices with Sentinel-1 SAR data and AutoML
- URL: http://arxiv.org/abs/2311.07537v1
- Date: Mon, 13 Nov 2023 18:23:46 GMT
- Title: Estimating optical vegetation indices with Sentinel-1 SAR data and AutoML
- Authors: Daniel Paluba, Bertrand Le Saux, Francesco Sarti, Přemysl Stych
- Abstract summary: Current optical vegetation indices (VIs) for monitoring forest ecosystems are widely used in various applications.
However, continuous monitoring based on optical satellite data can be hampered by atmospheric effects such as clouds.
The goal of this work is to overcome the issues affecting optical data by using SAR data as a substitute for estimating optical VIs for forests with machine learning.
- Score: 32.19783248549554
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Current optical vegetation indices (VIs) for monitoring forest ecosystems are
widely used in various applications. However, continuous monitoring based on
optical satellite data can be hampered by atmospheric effects such as clouds.
In contrast, synthetic aperture radar (SAR) data can offer insightful and
systematic forest monitoring with complete time series due to signal
penetration through clouds and day-and-night acquisitions. The goal of this
work is to overcome the issues affecting optical data by using SAR data as a
substitute for estimating optical VIs for forests with machine learning.
Time series of four VIs (LAI, FAPAR, EVI and NDVI) were estimated using
multitemporal Sentinel-1 SAR and ancillary data. This was enabled by creating a
paired multi-temporal and multi-modal dataset in Google Earth Engine (GEE),
including temporally and spatially aligned Sentinel-1, Sentinel-2, digital
elevation model (DEM), weather and land cover datasets (MMT-GEE). The use of
ancillary features generated from DEM and weather data improved the results.
The open-source Automatic Machine Learning (AutoML) approach, auto-sklearn,
outperformed Random Forest Regression for three out of four VIs, while a 1-hour
optimization length was enough to achieve sufficient results, with an R2 of
69-84% and low errors (MAE of 0.05-0.32, depending on the VI). Strong agreement was also
found for selected case studies in the time series analysis and in the spatial
comparison between the original and estimated SAR-based VIs. In general,
compared to VIs from currently freely available optical satellite data and
available global VI products, a better temporal resolution (up to 240
measurements/year) and a better spatial resolution (20 m) were achieved using
estimated SAR-based VIs. A great advantage of the SAR-based VIs is their ability
to detect abrupt forest changes with sub-weekly temporal accuracy.
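As a rough illustration of the regression setting described in the abstract, the sketch below fits the Random Forest baseline the paper compares against, using synthetic SAR-style features (VV/VH backscatter plus DEM- and weather-derived covariates) to predict NDVI. The feature set and data are invented for illustration and are not the MMT-GEE dataset.

```python
# Hedged sketch, not the authors' pipeline: Random Forest Regression from
# SAR-like features to an optical VI (NDVI). All data below are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Assumed feature set: Sentinel-1 VV/VH backscatter (dB), DEM-derived
# elevation and slope, and one weather covariate (temperature).
vv = rng.normal(-10, 2, n)
vh = rng.normal(-16, 2, n)
elevation = rng.uniform(100, 1500, n)
slope = rng.uniform(0, 30, n)
temperature = rng.normal(10, 8, n)
X = np.column_stack([vv, vh, elevation, slope, temperature])
# Synthetic NDVI target with a weak dependence on the VH-VV ratio (in dB).
ndvi = np.clip(0.6 + 0.03 * (vh - vv) + 0.1 * rng.normal(size=n), -1, 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, ndvi, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"R2={r2_score(y_te, pred):.2f}  MAE={mean_absolute_error(y_te, pred):.2f}")
# The paper's AutoML setting would instead use
# autosklearn.regression.AutoSklearnRegressor(time_left_for_this_task=3600)
# as the estimator, matching the 1-hour optimization budget.
```

Swapping the estimator is the only change needed to reproduce the AutoML-vs-baseline comparison, since auto-sklearn follows the scikit-learn fit/predict interface.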
Related papers
- TESSERA: Temporal Embeddings of Surface Spectra for Earth Representation and Analysis [0.2479153065703935]
We present TESSERA, an open, global, land-oriented remote sensing foundation model. We use two parallel Transformer-based encoders to combine optical data from ten Sentinel-2 spectral bands at 10-60m spatial resolution and two Sentinel-1 synthetic aperture radar backscatter coefficients at 10m resolution to create embeddings that are subsequently fused with a multilayer perceptron to create annual global embedding maps.
arXiv Detail & Related papers (2025-06-25T12:46:26Z)
- Comparing remote sensing-based forest biomass mapping approaches using new forest inventory plots in contrasting forests in northeastern and southwestern China [6.90293949599626]
Large-scale high spatial resolution aboveground biomass (AGB) maps play a crucial role in determining forest carbon stocks and how they are changing.
GEDI is a sampling instrument, collecting dispersed footprints, and its data must be combined with that from other continuous cover satellites to create high-resolution maps.
We developed local models to estimate forest AGB from GEDI L2A data, as the models used to create GEDI L4 AGB data incorporated minimal field data from China.
arXiv Detail & Related papers (2024-05-24T11:10:58Z)
- SARDet-100K: Towards Open-Source Benchmark and ToolKit for Large-Scale SAR Object Detection [79.23689506129733]
We establish a new benchmark dataset and an open-source method for large-scale SAR object detection.
Our dataset, SARDet-100K, is a result of intense surveying, collecting, and standardizing 10 existing SAR detection datasets.
To the best of our knowledge, SARDet-100K is the first COCO-level large-scale multi-class SAR object detection dataset ever created.
arXiv Detail & Related papers (2024-03-11T09:20:40Z)
- Creating and Leveraging a Synthetic Dataset of Cloud Optical Thickness Measures for Cloud Detection in MSI [3.4764766275808583]
Cloud formations often obscure optical satellite-based monitoring of the Earth's surface.
We propose a novel synthetic dataset for cloud optical thickness estimation.
We leverage this dataset to obtain reliable and versatile cloud masks on real data.
arXiv Detail & Related papers (2023-11-23T14:28:28Z)
- Combining multitemporal optical and SAR data for LAI imputation with BiLSTM network [0.0]
Leaf Area Index (LAI) is vital for predicting winter wheat yield. Acquisition of crop conditions via Sentinel-2 remote sensing images can be hindered by persistent clouds, affecting yield predictions.
This study evaluates the use of time series Sentinel-1 VH/VV for LAI imputation, aiming to increase spatial-temporal density.
We utilize a bidirectional LSTM (BiLSTM) network to impute time series LAI and use half mean squared error for each time step as the loss function.
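The per-time-step loss described above can be written out directly. This is a minimal sketch with an invented LAI series and cloud mask: half mean squared error at each observed step, with masked (cloudy) steps contributing zero; the BiLSTM itself is omitted, and any sequence model producing a prediction per time step would fit here.

```python
# Hedged sketch of a half-MSE-per-time-step loss for LAI imputation.
# Series, predictions, and mask below are invented for illustration.
import numpy as np

def half_mse_per_step(pred, target, mask):
    """0.5 * squared error at each observed step; masked steps contribute 0."""
    err = 0.5 * (pred - target) ** 2
    return np.where(mask, err, 0.0)

target = np.array([1.0, 2.0, 3.0, 4.0, 3.5, 2.0])   # reference LAI series
pred   = np.array([1.1, 2.0, 2.8, 4.2, 3.5, 1.8])   # model output per step
mask   = np.array([1, 1, 0, 1, 1, 1], dtype=bool)   # step 2 is cloud-masked
loss_steps = half_mse_per_step(pred, target, mask)
mean_loss = loss_steps.sum() / mask.sum()           # average over valid steps
```

Computing the loss per step (rather than one pooled MSE) lets the optimizer weight every acquisition date in the time series, including dates adjacent to cloud gaps.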
arXiv Detail & Related papers (2023-07-14T15:59:19Z)
- Imbalanced Aircraft Data Anomaly Detection [103.01418862972564]
Anomaly detection in temporal data from sensors under aviation scenarios is a practical but challenging task.
We propose a Graphical Temporal Data Analysis framework.
It consists of three modules: Series-to-Image (S2I), Cluster-based Resampling Approach using Euclidean Distance (CRD), and Variance-Based Loss (VBL).
arXiv Detail & Related papers (2023-05-17T09:37:07Z)
- Vision Transformers, a new approach for high-resolution and large-scale mapping of canopy heights [50.52704854147297]
We present a new vision transformer (ViT) model optimized with a classification (discrete) and a continuous loss function.
This model achieves better accuracy than previously used convolutional based approaches (ConvNets) optimized with only a continuous loss function.
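The dual-objective idea, a discrete (classification) loss combined with a continuous (regression) loss on the same canopy-height target, can be sketched as a combined loss function. The height binning, uniform logits, and the 0.5 regression weight below are assumptions for illustration, not the paper's exact configuration.

```python
# Hedged sketch of a combined discrete + continuous objective for canopy
# height. Bin edges, example values, and the weight w are invented.
import numpy as np

def combined_loss(pred_logits, pred_height, true_height, bin_edges, w=0.5):
    """Cross-entropy over height bins plus weighted squared error on height."""
    true_bin = np.digitize(true_height, bin_edges)       # discrete target bin
    log_probs = pred_logits - np.log(np.exp(pred_logits).sum())  # log-softmax
    ce = -log_probs[true_bin]                            # cross-entropy term
    return ce + w * (pred_height - true_height) ** 2     # regression term

logits = np.zeros(6)                          # uniform guess over 6 height bins
bin_edges = np.array([5.0, 10.0, 15.0, 20.0, 25.0])  # bin boundaries in meters
loss = combined_loss(logits, pred_height=12.0, true_height=11.0,
                     bin_edges=bin_edges)
```

The classification term gives the model a coarse but well-conditioned signal over height ranges, while the continuous term refines the exact value within a bin.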
arXiv Detail & Related papers (2023-04-22T22:39:03Z)
- Country-wide Retrieval of Forest Structure From Optical and SAR Satellite Imagery With Bayesian Deep Learning [74.94436509364554]
We propose a Bayesian deep learning approach to densely estimate forest structure variables at country-scale with 10-meter resolution.
Our method jointly transforms Sentinel-2 optical images and Sentinel-1 synthetic aperture radar images into maps of five different forest structure variables.
We train and test our model on reference data from 41 airborne laser scanning missions across Norway.
arXiv Detail & Related papers (2021-11-25T16:21:28Z)
- Cycle and Semantic Consistent Adversarial Domain Adaptation for Reducing Simulation-to-Real Domain Shift in LiDAR Bird's Eye View [110.83289076967895]
We present a BEV domain adaptation method based on CycleGAN that uses prior semantic classification in order to preserve the information of small objects of interest during the domain adaptation process.
The quality of the generated BEVs has been evaluated using a state-of-the-art 3D object detection framework at KITTI 3D Object Detection Benchmark.
arXiv Detail & Related papers (2021-04-22T12:47:37Z)
- Fusing Optical and SAR time series for LAI gap filling with multioutput Gaussian processes [6.0122901245834015]
Persistent clouds over agricultural fields can mask key stages of crop growth, leading to unreliable yield predictions.
Synthetic Aperture Radar (SAR) provides all-weather imagery which can potentially overcome this limitation.
We propose the use of Multi-Output Gaussian Process (MOGP) regression, a machine learning technique that learns automatically the statistical relationships among multisensor time series.
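The gap-filling setting can be sketched with scikit-learn's GaussianProcessRegressor on a joint [LAI, SAR] target over time. Note that scikit-learn shares a single kernel across outputs rather than learning cross-series correlations the way a true Multi-Output GP (e.g. a linear model of coregionalization) does, so this only illustrates the setup; the series, kernel, and gap below are invented.

```python
# Hedged sketch of GP-based LAI gap filling from a correlated SAR series.
# Synthetic data; a single shared RBF kernel stands in for a full MOGP.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

t = np.linspace(0, 1, 30)[:, None]            # acquisition dates (normalized)
lai = 3 + 2 * np.sin(2 * np.pi * t.ravel())   # smooth synthetic LAI series
sar = 0.5 * lai + 0.1                         # correlated SAR proxy series
Y = np.column_stack([lai, sar])

observed = np.ones(30, dtype=bool)
observed[10:15] = False                       # persistent-cloud gap in LAI

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2),
                              alpha=1e-6, normalize_y=True)
gp.fit(t[observed], Y[observed])              # train on cloud-free dates only
filled = gp.predict(t[~observed])             # shape (5, 2): [LAI, SAR] per date
```

A real MOGP would additionally exploit SAR observations that fall *inside* the optical gap, which is exactly what makes the fusion useful under persistent clouds.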
arXiv Detail & Related papers (2020-12-05T10:36:45Z)
- SpaceNet 6: Multi-Sensor All Weather Mapping Dataset [13.715388432549373]
We present an open Multi-Sensor All Weather Mapping (MSAW) dataset and challenge.
MSAW covers 120 km2 over multiple overlapping collects and is annotated with over 48,000 unique building footprint labels.
We present a baseline and benchmark for building footprint extraction with SAR data and find that state-of-the-art segmentation models perform best when pre-trained on optical data and then trained on SAR.
arXiv Detail & Related papers (2020-04-14T13:43:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.