Fusing Optical and SAR time series for LAI gap filling with multioutput
Gaussian processes
- URL: http://arxiv.org/abs/2012.02998v1
- Date: Sat, 5 Dec 2020 10:36:45 GMT
- Title: Fusing Optical and SAR time series for LAI gap filling with multioutput
Gaussian processes
- Authors: Luca Pipia, Jordi Muñoz-Marí, Eatidal Amin, Santiago Belda, Gustau
Camps-Valls, Jochem Verrelst
- Abstract summary: Persistent clouds over agricultural fields can mask key stages of crop growth, leading to unreliable yield predictions.
Synthetic Aperture Radar (SAR) provides all-weather imagery which can potentially overcome this limitation.
We propose the use of Multi-Output Gaussian Process (MOGP) regression, a machine learning technique that automatically learns the statistical relationships among multisensor time series.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The availability of satellite optical information is often hampered by the
natural presence of clouds, which can be problematic for many applications.
Persistent clouds over agricultural fields can mask key stages of crop growth,
leading to unreliable yield predictions. Synthetic Aperture Radar (SAR)
provides all-weather imagery which can potentially overcome this limitation,
but given its high and distinct sensitivity to different surface properties,
the fusion of SAR and optical data still remains an open challenge. In this
work, we propose the use of Multi-Output Gaussian Process (MOGP) regression, a
machine learning technique that automatically learns the statistical
relationships among multisensor time series, to detect vegetated areas over
which the synergy between SAR and optical imagery is profitable. For this
purpose, we use the Sentinel-1 Radar Vegetation Index (RVI) and Sentinel-2 Leaf
Area Index (LAI) time series over a study area in the northwest of the Iberian
Peninsula. Through a physical interpretation of the trained MOGP models, we show
their ability to provide estimates of LAI even over cloudy periods using the
information shared with RVI, which guarantees the solution always remains tied to
real measurements. Results demonstrate the advantage of MOGP especially for
long data gaps, where optical-based methods notoriously fail. The
leave-one-image-out assessment technique applied to the whole vegetation cover
shows MOGP predictions improve standard GP estimations over short-time gaps
(R$^2$ of 74\% vs 68\%, RMSE of 0.4 vs 0.44 $[m^2m^{-2}]$) and especially over
long-time gaps (R$^2$ of 33\% vs 12\%, RMSE of 0.5 vs 1.09 $[m^2m^{-2}]$).
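As a rough illustration of the idea (not the authors' implementation), the sketch below builds a minimal two-output GP with an intrinsic coregionalization model (ICM) kernel: a shared RBF kernel over time scaled by a task-covariance matrix B. A fully observed RVI-like series helps fill a cloud gap in a LAI-like series. All series, hyperparameters, and the coregionalization matrix are synthetic assumptions for demonstration only.

```python
import numpy as np

def rbf(x1, x2, lengthscale=20.0, variance=1.0):
    """Squared-exponential kernel over scalar time inputs."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def mogp_posterior_mean(X_list, Y_list, X_star, task_star, B,
                        lengthscale=20.0, noise=1e-2):
    """Posterior mean of an ICM multi-output GP:
    cov((x, i), (x', j)) = B[i, j] * k(x, x')."""
    X = np.concatenate(X_list)
    Y = np.concatenate(Y_list)
    tasks = np.concatenate([np.full(len(x), i, dtype=int)
                            for i, x in enumerate(X_list)])
    K = rbf(X, X, lengthscale) * B[np.ix_(tasks, tasks)]
    K += noise * np.eye(len(X))               # observation noise / jitter
    K_star = rbf(X_star, X, lengthscale) * B[task_star, tasks][None, :]
    return K_star @ np.linalg.solve(K, Y)

# Synthetic example: LAI-like output with a "cloud gap",
# RVI-like output observed everywhere.
t = np.arange(0.0, 100.0)
f = np.sin(2 * np.pi * t / 50.0)              # shared latent seasonal signal
lai, rvi = f, 0.8 * f                         # correlated outputs
observed = (t < 30) | (t > 60)                # clouds mask LAI for t in [30, 60]

b = np.array([1.0, 0.8])                      # per-task loadings on the latent GP
B = np.outer(b, b)                            # rank-1 coregionalization matrix

gap = ~observed
pred = mogp_posterior_mean([t[observed], t], [lai[observed], rvi],
                           t[gap], task_star=0, B=B)
rmse = np.sqrt(np.mean((pred - lai[gap]) ** 2))
```

Because the RVI-like series is observed inside the gap and shares the latent signal, the cross-task block of the kernel transfers that information into the LAI prediction; a single-output GP trained on the gapped series alone would have to extrapolate across the gap, which is exactly where the paper reports standard GP methods fail.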
Related papers
- Real-time gravitational-wave inference for binary neutron stars using machine learning [71.29593576787549]
We develop a machine learning approach that performs complete BNS inference in just one second without making any approximations.
Our method scales to extremely long signals, up to an hour in length, thus serving as a blueprint for data analysis for next-generation ground- and space-based detectors.
arXiv Detail & Related papers (2024-07-12T18:00:02Z) - Soil Fertility Prediction Using Combined USB-microscope Based Soil Image, Auxiliary Variables, and Portable X-Ray Fluorescence Spectrometry [3.431158134976364]
The research combined color and texture features from microscopic soil images, PXRF data, and auxiliary soil variables (AVs) using a Random Forest model.
Results indicated that integrating image features (IFs) with auxiliary variables (AVs) significantly enhanced prediction accuracy for available B.
A data fusion approach, incorporating IFs, AVs, and PXRF data, further improved predictions for available Mn and SAI with R2 values of 0.72 and 0.70, respectively.
arXiv Detail & Related papers (2024-04-17T17:57:20Z) - Estimating optical vegetation indices with Sentinel-1 SAR data and
AutoML [32.19783248549554]
Current optical vegetation indices (VIs) for monitoring forest ecosystems are widely used in various applications.
However, continuous monitoring based on optical satellite data can be hampered by atmospheric effects such as clouds.
The goal of this work is to overcome the issues affecting optical data with SAR data and serve as a substitute for estimating optical VIs for forests using machine learning.
arXiv Detail & Related papers (2023-11-13T18:23:46Z) - Residual Diffusion Modeling for Km-scale Atmospheric Downscaling [51.061954281398116]
A cost-effective downscaling model is trained from a high-resolution 2-km weather model over Taiwan.
CorrDiff exhibits skillful RMSE and CRPS and faithfully recovers spectra and distributions even for extremes.
Downscaling global forecasts successfully retains many of these benefits, foreshadowing the potential of end-to-end, global-to-km-scales machine learning weather predictions.
arXiv Detail & Related papers (2023-09-24T19:57:22Z) - Combining multitemporal optical and SAR data for LAI imputation with
BiLSTM network [0.0]
Leaf Area Index (LAI) is vital for predicting winter wheat yield. Acquisition of crop conditions via Sentinel-2 remote sensing images can be hindered by persistent clouds, affecting yield predictions.
This study evaluates the use of time series Sentinel-1 VH/VV for LAI imputation, aiming to increase spatial-temporal density.
We utilize a bidirectional LSTM (BiLSTM) network to impute time series LAI and use half mean squared error for each time step as the loss function.
arXiv Detail & Related papers (2023-07-14T15:59:19Z) - Transforming Observations of Ocean Temperature with a Deep Convolutional
Residual Regressive Neural Network [0.0]
Sea surface temperature (SST) is an essential climate variable that can be measured via ground truth, remote sensing, or hybrid model methodologies.
Here, we celebrate SST surveillance progress via the application of a few relevant technological advances from the late 20th and early 21st century.
We develop our existing water cycle observation framework, Flux to Flow (F2F), to fuse AMSR-E and MODIS into a higher resolution product.
Our neural network architecture is constrained to a deep convolutional residual regressive neural network.
arXiv Detail & Related papers (2023-06-16T17:35:11Z) - Vision Transformers, a new approach for high-resolution and large-scale
mapping of canopy heights [50.52704854147297]
We present a new vision transformer (ViT) model optimized with a classification (discrete) and a continuous loss function.
This model achieves better accuracy than previously used convolution-based approaches (ConvNets) optimized with only a continuous loss function.
arXiv Detail & Related papers (2023-04-22T22:39:03Z) - Ultra-low Power Deep Learning-based Monocular Relative Localization
Onboard Nano-quadrotors [64.68349896377629]
This work presents a novel autonomous end-to-end system that addresses the monocular relative localization, through deep neural networks (DNNs), of two peer nano-drones.
To cope with the ultra-constrained nano-drone platform, we propose a vertically-integrated framework, including dataset augmentation, quantization, and system optimizations.
Experimental results show that our DNN can precisely localize a 10 cm target nano-drone from only low-resolution monochrome images, at up to 2 m distance.
arXiv Detail & Related papers (2023-03-03T14:14:08Z) - Solar Flare Index Prediction Using SDO/HMI Vector Magnetic Data Products
with Statistical and Machine Learning Methods [6.205102537396887]
Solar flares, especially the M- and X-class ones, are often associated with coronal mass ejections (CMEs).
Here, we introduce several statistical and Machine Learning approaches to the prediction of the AR's Flare Index (FI) that quantifies the flare productivity of an AR.
arXiv Detail & Related papers (2022-09-28T02:13:33Z) - Inertial Hallucinations -- When Wearable Inertial Devices Start Seeing
Things [82.15959827765325]
We propose a novel approach to multimodal sensor fusion for Ambient Assisted Living (AAL).
We address two major shortcomings of standard multimodal approaches, limited area coverage and reduced reliability.
Our new framework fuses the concept of modality hallucination with triplet learning to train a model with different modalities to handle missing sensors at inference time.
arXiv Detail & Related papers (2022-07-14T10:04:18Z) - Learning representations with end-to-end models for improved remaining
useful life prognostics [64.80885001058572]
The Remaining Useful Life (RUL) of equipment is defined as the duration between the current time and its failure.
We propose an end-to-end deep learning model based on multi-layer perceptron and long short-term memory layers (LSTM) to predict the RUL.
We will discuss how the proposed end-to-end model is able to achieve such good results and compare it to other deep learning and state-of-the-art methods.
arXiv Detail & Related papers (2021-04-11T16:45:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.