Estimating optical vegetation indices with Sentinel-1 SAR data and
AutoML
- URL: http://arxiv.org/abs/2311.07537v1
- Date: Mon, 13 Nov 2023 18:23:46 GMT
- Title: Estimating optical vegetation indices with Sentinel-1 SAR data and
AutoML
- Authors: Daniel Paluba, Bertrand Le Saux, Francesco Sarti, Přemysl Stych
- Abstract summary: Current optical vegetation indices (VIs) for monitoring forest ecosystems are widely used in various applications.
However, continuous monitoring based on optical satellite data can be hampered by atmospheric effects such as clouds.
The goal of this work is to overcome the issues affecting optical data by using SAR data as a substitute for estimating optical VIs for forests with machine learning.
- Score: 32.19783248549554
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Current optical vegetation indices (VIs) for monitoring forest ecosystems are
widely used in various applications. However, continuous monitoring based on
optical satellite data can be hampered by atmospheric effects such as clouds.
By contrast, synthetic aperture radar (SAR) data can offer insightful and
systematic forest monitoring with complete time series, thanks to signal
penetration through clouds and day-and-night acquisitions. The goal of this
work is to overcome the issues affecting optical data by using SAR data as a
substitute for estimating optical VIs for forests with machine learning.
Time series of four VIs (LAI, FAPAR, EVI and NDVI) were estimated using
multitemporal Sentinel-1 SAR and ancillary data. This was enabled by creating a
paired multi-temporal and multi-modal dataset in Google Earth Engine (GEE),
including temporally and spatially aligned Sentinel-1, Sentinel-2, digital
elevation model (DEM), weather and land cover datasets (MMT-GEE). The use of
ancillary features generated from DEM and weather data improved the results.
The open-source Automatic Machine Learning (AutoML) approach, auto-sklearn,
outperformed Random Forest Regression for three out of four VIs, while a 1-hour
optimization run was enough to achieve good results, with an R2 of 69-84% and
low errors (MAE of 0.05-0.32, depending on the VI). Great agreement was also
found for selected case studies in the time series analysis and in the spatial
comparison between the original and estimated SAR-based VIs. In general,
compared to VIs from currently freely available optical satellite data and
available global VI products, a better temporal resolution (up to 240
measurements/year) and a better spatial resolution (20 m) were achieved using
estimated SAR-based VIs. A great advantage of the SAR-based VIs is their
ability to detect abrupt forest changes with sub-weekly temporal accuracy.
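For reference, two of the four VIs named above are simple band ratios that can be computed directly from surface reflectance. A minimal sketch in plain Python, using the standard NDVI and EVI definitions with Sentinel-2 band conventions (B2 = blue, B4 = red, B8 = NIR) and reflectance values in [0, 1]; this illustrates the target quantities, not the paper's own code (LAI and FAPAR are biophysical variables retrieved with dedicated algorithms rather than band ratios):

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)


def evi(nir: float, red: float, blue: float) -> float:
    """Enhanced Vegetation Index with the standard coefficients
    (gain 2.5, C1 = 6, C2 = 7.5, soil adjustment L = 1)."""
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)


# Example: a healthy-vegetation pixel reflects strongly in NIR.
print(ndvi(nir=0.45, red=0.05))  # -> 0.8 (dense canopy)
```

These are the optical reference values; the paper's contribution is predicting them from Sentinel-1 SAR backscatter and ancillary features when clouds block the optical view.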
Related papers
- WV-Net: A foundation model for SAR WV-mode satellite imagery trained using contrastive self-supervised learning on 10 million images [23.653151006898327]
This study uses nearly 10 million WV-mode images and contrastive self-supervised learning to train a semantic embedding model called WV-Net.
In multiple downstream tasks, WV-Net outperforms a comparable model that was pre-trained on natural images with supervised learning.
WV-Net embeddings are also superior in an unsupervised image-retrieval task and scale better in data-sparse settings.
arXiv Detail & Related papers (2024-06-26T21:30:41Z)
- Creating and Leveraging a Synthetic Dataset of Cloud Optical Thickness Measures for Cloud Detection in MSI [3.4764766275808583]
Cloud formations often obscure optical satellite-based monitoring of the Earth's surface.
We propose a novel synthetic dataset for cloud optical thickness estimation.
We leverage this dataset to obtain reliable and versatile cloud masks on real data.
arXiv Detail & Related papers (2023-11-23T14:28:28Z)
- Combining multitemporal optical and SAR data for LAI imputation with BiLSTM network [0.0]
Leaf Area Index (LAI) is vital for predicting winter wheat yield. Acquisition of crop conditions via Sentinel-2 remote sensing images can be hindered by persistent clouds, affecting yield predictions.
This study evaluates the use of time series Sentinel-1 VH/VV for LAI imputation, aiming to increase spatial-temporal density.
We utilize a bidirectional LSTM (BiLSTM) network to impute time series LAI and use half mean squared error for each time step as the loss function.
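The per-time-step loss described above can be sketched in plain Python; this is an illustration of "half mean squared error for each time step", not the paper's implementation:

```python
def half_mse_per_step(y_true, y_pred):
    """Half mean squared error for each time step of a sequence.

    y_true, y_pred: sequences of equal length, one value per time step.
    Returns a list of per-step losses 0.5 * (true - pred)**2; the total
    sequence loss is their sum (or mean).
    """
    return [0.5 * (t - p) ** 2 for t, p in zip(y_true, y_pred)]


# Example: imputed LAI sequence vs. reference values.
losses = half_mse_per_step([1.0, 2.0, 3.0], [1.0, 2.5, 2.0])
print(losses)  # -> [0.0, 0.125, 0.5]
```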
arXiv Detail & Related papers (2023-07-14T15:59:19Z)
- Imbalanced Aircraft Data Anomaly Detection [103.01418862972564]
Anomaly detection in temporal data from sensors under aviation scenarios is a practical but challenging task.
We propose a Graphical Temporal Data Analysis framework.
It consists of three modules: Series-to-Image (S2I), Cluster-based Resampling Approach using Euclidean Distance (CRD), and Variance-Based Loss (VBL).
arXiv Detail & Related papers (2023-05-17T09:37:07Z)
- Vision Transformers, a new approach for high-resolution and large-scale mapping of canopy heights [50.52704854147297]
We present a new vision transformer (ViT) model optimized with a classification (discrete) and a continuous loss function.
This model achieves better accuracy than previously used convolutional based approaches (ConvNets) optimized with only a continuous loss function.
arXiv Detail & Related papers (2023-04-22T22:39:03Z)
- Recurrent Vision Transformers for Object Detection with Event Cameras [62.27246562304705]
We present Recurrent Vision Transformers (RVTs), a novel backbone for object detection with event cameras.
RVTs can be trained from scratch to reach state-of-the-art performance on event-based object detection.
Our study brings new insights into effective design choices that can be fruitful for research beyond event-based vision.
arXiv Detail & Related papers (2022-12-11T20:28:59Z)
- A Novel Transformer Network with Shifted Window Cross-Attention for Spatiotemporal Weather Forecasting [5.414308305392762]
We tackle the challenge of weather forecasting using a video transformer network.
Vision transformer architectures have been explored in various applications.
We propose the use of Video Swin-Transformer, coupled with a dedicated augmentation scheme.
arXiv Detail & Related papers (2022-08-02T05:04:53Z)
- Vision in adverse weather: Augmentation using CycleGANs with various object detectors for robust perception in autonomous racing [70.16043883381677]
In autonomous racing, the weather can change abruptly, causing significant degradation in perception, resulting in ineffective manoeuvres.
In order to improve detection in adverse weather, deep-learning-based models typically require extensive datasets captured in such conditions.
We introduce an approach of using synthesised adverse condition datasets in autonomous racing (generated using CycleGAN) to improve the performance of four out of five state-of-the-art detectors.
arXiv Detail & Related papers (2022-01-10T10:02:40Z)
- Cycle and Semantic Consistent Adversarial Domain Adaptation for Reducing Simulation-to-Real Domain Shift in LiDAR Bird's Eye View [110.83289076967895]
We present a BEV domain adaptation method based on CycleGAN that uses prior semantic classification in order to preserve the information of small objects of interest during the domain adaptation process.
The quality of the generated BEVs has been evaluated using a state-of-the-art 3D object detection framework at KITTI 3D Object Detection Benchmark.
arXiv Detail & Related papers (2021-04-22T12:47:37Z)
- Fusing Optical and SAR time series for LAI gap filling with multioutput Gaussian processes [6.0122901245834015]
Persistent clouds over agricultural fields can mask key stages of crop growth, leading to unreliable yield predictions.
Synthetic Aperture Radar (SAR) provides all-weather imagery which can potentially overcome this limitation.
We propose the use of Multi-Output Gaussian Process (MOGP) regression, a machine learning technique that learns automatically the statistical relationships among multisensor time series.
arXiv Detail & Related papers (2020-12-05T10:36:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.