SEN12MS-CR-TS: A Remote Sensing Data Set for Multi-modal Multi-temporal
Cloud Removal
- URL: http://arxiv.org/abs/2201.09613v1
- Date: Mon, 24 Jan 2022 11:38:49 GMT
- Authors: Patrick Ebel and Yajin Xu and Michael Schmitt and Xiaoxiang Zhu
- Abstract summary: About half of all optical observations collected via spaceborne satellites are affected by haze or clouds.
This work addresses the challenge of optical satellite image reconstruction and cloud removal by proposing SEN12MS-CR-TS.
We propose two models highlighting the benefits and use cases of SEN12MS-CR-TS.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: About half of all optical observations collected via spaceborne satellites
are affected by haze or clouds. Consequently, cloud coverage limits remote
sensing practitioners' ability to monitor our planet continuously and
seamlessly. This work addresses the challenge of optical satellite image
reconstruction and cloud removal by proposing a novel multi-modal and
multi-temporal data set called SEN12MS-CR-TS. We propose two models
highlighting the benefits and use cases of SEN12MS-CR-TS: First, a multi-modal
multi-temporal 3D-Convolution Neural Network that predicts a cloud-free image
from a sequence of cloudy optical and radar images. Second, a
sequence-to-sequence translation model that predicts a cloud-free time series
from a cloud-covered time series. Both approaches are evaluated experimentally,
with their respective models trained and tested on SEN12MS-CR-TS. The conducted
experiments highlight the contribution of our data set to the remote sensing
community as well as the benefits of multi-modal and multi-temporal information
to reconstruct noisy information. Our data set is available at
https://patrickTUM.github.io/cloud_removal
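The first model above, a multi-modal multi-temporal 3D-CNN, predicts a cloud-free image from a sequence of cloudy optical and radar acquisitions. As a rough sketch of the input handling (shapes, band counts, and kernel sizes are illustrative assumptions, not the paper's configuration), the two modalities can be concatenated along the channel axis and processed by a 3D convolution that slides over both time and space:

```python
import numpy as np

# Illustrative shapes only: Sentinel-2 optical has 13 spectral bands,
# Sentinel-1 radar has 2 polarisations (VV, VH).
T, H, W = 4, 8, 8                   # time steps and spatial size of a toy patch
s2 = np.random.rand(T, 13, H, W)    # cloudy optical time series
s1 = np.random.rand(T, 2, H, W)     # co-registered radar time series

# Multi-modal fusion by channel concatenation -> (T, 15, H, W),
# then reorder to (C, T, H, W) so a 3D kernel can slide over time and space.
x = np.concatenate([s2, s1], axis=1).transpose(1, 0, 2, 3)

# One 'valid' 3D convolution with a k_t x k x k kernel collapses the channel
# axis and shrinks each remaining axis by (kernel size - 1).
k_t, k = 3, 3
kernel = np.random.rand(x.shape[0], k_t, k, k)
out = np.zeros((T - k_t + 1, H - k + 1, W - k + 1))
for t in range(out.shape[0]):
    for i in range(out.shape[1]):
        for j in range(out.shape[2]):
            out[t, i, j] = np.sum(x[:, t:t + k_t, i:i + k, j:j + k] * kernel)

print(out.shape)  # (2, 6, 6)
```

A real network stacks many such layers (with padding, nonlinearities, and learned kernels) and finally reduces the temporal axis to emit a single cloud-free image.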
Related papers
- TASeg: Temporal Aggregation Network for LiDAR Semantic Segmentation [80.13343299606146]
We propose a Temporal LiDAR Aggregation and Distillation (TLAD) algorithm, which leverages historical priors to assign different aggregation steps for different classes.
To make full use of temporal images, we design a Temporal Image Aggregation and Fusion (TIAF) module, which can greatly expand the camera FOV.
We also develop a Static-Moving Switch Augmentation (SMSA) algorithm, which utilizes sufficient temporal information to enable objects to switch their motion states freely.
arXiv Detail & Related papers (2024-07-13T03:00:16Z)
- IDF-CR: Iterative Diffusion Process for Divide-and-Conquer Cloud Removal in Remote-sensing Images [55.40601468843028]
We present an iterative diffusion process for cloud removal (IDF-CR)
IDF-CR is divided into two stages that address pixel space and latent space, respectively.
In the latent space stage, the diffusion model transforms low-quality cloud removal into high-quality clean output.
arXiv Detail & Related papers (2024-03-18T15:23:48Z)
- Cloud gap-filling with deep learning for improved grassland monitoring [2.9272689981427407]
Uninterrupted optical image time series are crucial for the timely monitoring of agricultural land changes.
We propose a deep learning method that integrates cloud-free optical (Sentinel-2) observations and weather-independent (Sentinel-1) Synthetic Aperture Radar (SAR) data.
arXiv Detail & Related papers (2024-03-14T16:41:26Z)
- NiteDR: Nighttime Image De-Raining with Cross-View Sensor Cooperative Learning for Dynamic Driving Scenes [49.92839157944134]
In nighttime driving scenes, insufficient and uneven lighting shrouds the scenes in darkness, resulting in degraded image quality and visibility.
We develop an image de-raining framework tailored for rainy nighttime driving scenes.
It aims to remove rain artifacts, enrich scene representation, and restore useful information.
arXiv Detail & Related papers (2024-02-28T09:02:33Z)
- U-TILISE: A Sequence-to-sequence Model for Cloud Removal in Optical Satellite Time Series [22.39321609253005]
We develop a neural model that can map a cloud-masked input sequence to a cloud-free output sequence.
We experimentally evaluate the proposed model on a dataset of Sentinel-2 satellite time series acquired across Europe.
Compared to a standard baseline, it increases the PSNR by 1.8 dB at previously seen locations and by 1.3 dB at unseen locations.
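The gains above are reported in PSNR, which compares the mean squared error against the signal's peak value. A small helper (an illustrative sketch, not the paper's evaluation code) shows the computation and what a dB gain implies:

```python
import numpy as np

def psnr(reference, estimate, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
clean = rng.random((32, 32))
noisy = clean + rng.normal(scale=0.05, size=clean.shape)
print(psnr(clean, noisy))

# Because PSNR is logarithmic in MSE, a gain of 1.8 dB corresponds to
# reducing the reconstruction MSE by a factor of 10**(1.8 / 10), about 1.51.
```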
arXiv Detail & Related papers (2023-05-22T17:37:10Z)
- UnCRtainTS: Uncertainty Quantification for Cloud Removal in Optical Satellite Time Series [19.32220113046804]
We introduce UnCRtainTS, a method for multi-temporal cloud removal combining a novel attention-based architecture.
We show how the well-calibrated predicted uncertainties enable a precise control of the reconstruction quality.
arXiv Detail & Related papers (2023-04-11T19:27:18Z)
- SatMAE: Pre-training Transformers for Temporal and Multi-Spectral Satellite Imagery [74.82821342249039]
We present SatMAE, a pre-training framework for temporal or multi-spectral satellite imagery based on Masked Autoencoder (MAE)
To leverage temporal information, we include a temporal embedding along with independently masking image patches across time.
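Masking image patches independently across time means each time step hides a different subset of patches, so the model must use temporal context to reconstruct them. A toy sketch of such a masking scheme (an assumption about the general MAE recipe, not SatMAE's exact implementation):

```python
import numpy as np

rng = np.random.default_rng(42)
T, num_patches, mask_ratio = 3, 16, 0.75  # toy sizes, illustrative only

# Draw an independent random mask per time step: masks[t, p] is True
# where patch p is hidden at time t.
masks = np.stack([
    rng.permutation(num_patches) < int(mask_ratio * num_patches)
    for _ in range(T)
])

visible_per_step = (~masks).sum(axis=1)
print(visible_per_step)  # 4 visible patches per step; which ones vary over time
```

Because the visible patches differ at each time step, a patch occluded at one time may be visible at another, which is exactly the redundancy temporal pre-training exploits.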
arXiv Detail & Related papers (2022-07-17T01:35:29Z)
- Multi-Modal Temporal Attention Models for Crop Mapping from Satellite Time Series [7.379078963413671]
Motivated by the recent success of temporal attention-based methods across multiple crop mapping tasks, we propose to investigate how these models can be adapted to operate on several modalities.
We implement and evaluate multiple fusion schemes, including a novel approach and simple adjustments to the training procedure.
We show that most fusion schemes have advantages and drawbacks, making them relevant for specific settings.
We then evaluate the benefit of multimodality across several tasks: parcel classification, pixel-based segmentation, and panoptic parcel segmentation.
arXiv Detail & Related papers (2021-12-14T17:05:55Z)
- Sentinel-1 and Sentinel-2 Spatio-Temporal Data Fusion for Clouds Removal [51.9654625216266]
A novel method for cloud-corrupted optical image restoration is presented, based on a joint data fusion paradigm.
It is worth highlighting that the code and the dataset have been implemented from scratch and made available to interested researchers for further analysis and investigation.
arXiv Detail & Related papers (2021-06-23T08:15:01Z)
- Seeing Through Clouds in Satellite Images [14.84582204034532]
This paper presents a neural-network-based solution to recover pixels occluded by clouds in satellite images.
We leverage radio frequency (RF) signals in the ultra/super-high frequency band that penetrate clouds to help reconstruct the occluded regions in multispectral images.
arXiv Detail & Related papers (2021-06-15T20:01:27Z)
- Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation and Spatial Supervision [68.35777836993212]
We propose a Pseudo-LiDAR point cloud network to generate temporally and spatially high-quality point cloud sequences.
By exploiting the scene flow between point clouds, the proposed network is able to learn a more accurate representation of the 3D spatial motion relationship.
arXiv Detail & Related papers (2020-06-20T03:11:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.