Cross-sensor super-resolution of irregularly sampled Sentinel-2 time series
- URL: http://arxiv.org/abs/2404.16409v1
- Date: Thu, 25 Apr 2024 08:36:09 GMT
- Title: Cross-sensor super-resolution of irregularly sampled Sentinel-2 time series
- Authors: Aimi Okabayashi, Nicolas Audebert, Simon Donike, Charlotte Pelletier
- Abstract summary: We introduce BreizhSR, a new dataset for 4x super-resolution of Sentinel-2 time series.
We show that using multiple images significantly improves super-resolution performance.
We observe a trade-off between spectral fidelity and perceptual quality of the reconstructed HR images.
- Score: 2.9748898344267785
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Satellite imaging generally presents a trade-off between the frequency of acquisitions and the spatial resolution of the images. Super-resolution is often advanced as a way to get the best of both worlds. In this work, we investigate multi-image super-resolution of satellite image time series, i.e. how multiple images of the same area acquired at different dates can help reconstruct a higher resolution observation. In particular, we extend state-of-the-art deep single and multi-image super-resolution algorithms, such as SRDiff and HighRes-net, to deal with irregularly sampled Sentinel-2 time series. We introduce BreizhSR, a new dataset for 4x super-resolution of Sentinel-2 time series using very high-resolution SPOT-6 imagery of Brittany, a French region. We show that using multiple images significantly improves super-resolution performance, and that a well-designed temporal positional encoding allows us to perform super-resolution for different times of the series. In addition, we observe a trade-off between spectral fidelity and perceptual quality of the reconstructed HR images, questioning future directions for super-resolution of Earth Observation data.
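The abstract's "well-designed temporal positional encoding" for irregularly sampled series can be illustrated with a generic sinusoidal scheme over acquisition dates: instead of encoding a frame's index in the sequence, one encodes its actual date offset, so unevenly spaced acquisitions keep their true temporal position. This is a minimal sketch, not necessarily the exact encoding used in the paper; the function name, dimensions, and period are illustrative.

```python
import numpy as np

def temporal_encoding(days, dim=16, max_period=365.0):
    """Sinusoidal encoding of acquisition dates (days since a reference
    date). Encoding the real date offset rather than the frame index lets
    the model handle irregular Sentinel-2 revisit gaps. `dim` must be even.
    Hypothetical sketch, not the paper's exact formulation."""
    days = np.asarray(days, dtype=np.float64)[:, None]       # (T, 1)
    freqs = np.arange(dim // 2, dtype=np.float64)[None, :]   # (1, dim/2)
    angles = days / (max_period ** (2 * freqs / dim))
    # First half sine, second half cosine, as in standard sinusoidal PEs.
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)  # (T, dim)

# Irregular revisit dates, in days since the first acquisition.
enc = temporal_encoding([0, 5, 12, 33, 60], dim=16)
```

Each of the five acquisitions gets a 16-dimensional vector that depends only on its date, so the same encoding can also be queried for the target date of the super-resolved output.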
Related papers
- Deep Multimodal Fusion for Semantic Segmentation of Remote Sensing Earth Observation Data [0.08192907805418582]
This paper proposes a late fusion deep learning model (LF-DLM) for semantic segmentation.
One branch integrates detailed textures from aerial imagery using a UNetFormer with a Multi-Axis Vision Transformer (MaxViT) backbone.
The other branch captures complex spatio-temporal dynamics from the Sentinel-2 satellite image time series using a U-Net with Temporal Attention Encoder (U-TAE).
arXiv Detail & Related papers (2024-10-01T07:50:37Z) - SSIF: Learning Continuous Image Representation for Spatial-Spectral Super-Resolution [73.46167948298041]
We propose a neural implicit model that represents an image as a function of both continuous pixel coordinates in the spatial domain and continuous wavelengths in the spectral domain.
We show that SSIF generalizes well to both unseen spatial resolutions and spectral resolutions.
It can generate high-resolution images that improve the performance of downstream tasks by 1.7%-7%.
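SSIF's core idea, an image represented as a continuous function of spatial coordinates and wavelength, can be sketched with a tiny stand-in network: any (x, y, λ) triple can be queried, so the same model serves arbitrary spatial and spectral resolutions. The weights here are random purely to show the interface, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny random MLP standing in for a trained spatial-spectral implicit
# model: it maps continuous (x, y, wavelength) coordinates to an
# intensity value, so any resolution can be queried after training.
W1, b1 = rng.normal(size=(3, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 1)), np.zeros(1)

def implicit_image(coords):
    """coords: (N, 3) array of (x, y, wavelength_um), all continuous."""
    h = np.tanh(coords @ W1 + b1)
    return h @ W2 + b2

# Query an 8x8 spatial grid at a wavelength unseen during "training"
# (0.705 um, Sentinel-2 band 5), simply by changing the coordinates.
xs, ys = np.meshgrid(np.linspace(0, 1, 8), np.linspace(0, 1, 8))
coords = np.stack([xs.ravel(), ys.ravel(), np.full(64, 0.705)], axis=1)
values = implicit_image(coords)  # (64, 1) intensities
```

Densifying the grid or shifting the wavelength only changes `coords`, which is what lets such a model generalize across spatial and spectral resolutions.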
arXiv Detail & Related papers (2023-09-30T15:23:30Z) - Multitemporal and multispectral data fusion for super-resolution of Sentinel-2 images [11.169492436455423]
DeepSent is a new deep network for super-resolving multitemporal series of Sentinel-2 images.
We show that our solution outperforms other state-of-the-art techniques that realize either multitemporal or multispectral data fusion.
We have applied our method to super-resolve real-world Sentinel-2 images, enhancing the spatial resolution of all the spectral bands to 3.3 m nominal ground sampling distance.
arXiv Detail & Related papers (2023-01-26T15:01:25Z) - MuS2: A Benchmark for Sentinel-2 Multi-Image Super-Resolution [6.480645418615952]
Insufficient spatial resolution of satellite imagery, including Sentinel-2 data, is a serious limitation in many practical use cases.
Super-resolution reconstruction is receiving considerable attention from the remote sensing community.
We introduce a new MuS2 benchmark for multi-image super-resolution reconstruction of Sentinel-2 images.
arXiv Detail & Related papers (2022-10-06T08:29:54Z) - Decoupled-and-Coupled Networks: Self-Supervised Hyperspectral Image Super-Resolution with Subpixel Fusion [67.35540259040806]
We propose a subpixel-level HS super-resolution framework by devising a novel decoupled-and-coupled network, called DC-Net.
As the name suggests, DC-Net first decouples the input into common (or cross-sensor) and sensor-specific components.
We append a self-supervised learning module behind the CSU net by guaranteeing the material consistency to enhance the detailed appearances of the restored HS product.
arXiv Detail & Related papers (2022-05-07T23:40:36Z) - Look Back and Forth: Video Super-Resolution with Explicit Temporal Difference Modeling [105.69197687940505]
We propose to explore the role of explicit temporal difference modeling in both LR and HR space.
To further enhance the super-resolution result, not only spatial residual features are extracted, but the difference between consecutive frames in high-frequency domain is also computed.
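The explicit temporal difference idea can be sketched as computing frame-to-frame differences after a high-frequency decomposition, so that motion of fine detail is separated from slowly varying content. The box-blur high-pass below is a crude stand-in for the paper's decomposition; names and shapes are illustrative.

```python
import numpy as np

def high_freq(frame):
    """Crude high-pass filter: frame minus a 3x3 box blur (valid region
    only), standing in for the paper's high-frequency decomposition."""
    k = np.ones((3, 3)) / 9.0
    H, W = frame.shape
    blur = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            blur[i, j] = (frame[i:i + 3, j:j + 3] * k).sum()
    return frame[1:-1, 1:-1] - blur

def temporal_differences(frames):
    """Explicit temporal difference features between consecutive frames,
    computed in the high-frequency domain."""
    hf = [high_freq(f) for f in frames]
    return [hf[t + 1] - hf[t] for t in range(len(hf) - 1)]

# Three 5x5 frames; the differences expose only what changed over time.
frames = [np.arange(25, dtype=float).reshape(5, 5) * (t + 1) for t in range(3)]
diffs = temporal_differences(frames)  # two (3, 3) difference maps
```

Feeding such difference maps alongside the frames themselves gives the network an explicit signal about temporal change rather than forcing it to infer one implicitly.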
arXiv Detail & Related papers (2022-04-14T17:07:33Z) - Multi-Spectral Multi-Image Super-Resolution of Sentinel-2 with Radiometric Consistency Losses and Its Effect on Building Delineation [23.025397327720874]
We present the first results of applying multi-image super-resolution (MISR) to multi-spectral remote sensing imagery.
We show that MISR is superior to single-image super-resolution and other baselines on a range of image fidelity metrics.
arXiv Detail & Related papers (2021-11-05T02:49:04Z) - TWIST-GAN: Towards Wavelet Transform and Transferred GAN for Spatio-Temporal Single Image Super Resolution [4.622977798361014]
Single Image Super-resolution (SISR) reconstructs a high-resolution image with fine spatial detail from a single remotely sensed image of low spatial resolution.
Deep learning and generative adversarial networks (GANs) have enabled breakthroughs in this challenging task.
arXiv Detail & Related papers (2021-04-20T22:12:38Z) - Fast and High-Quality Blind Multi-Spectral Image Pansharpening [48.68143888901669]
We propose a fast approach to blind pansharpening and achieve state-of-the-art image reconstruction quality.
To achieve fast blind pansharpening, we decouple the solution of the blur kernel and of the HRMS image.
Our algorithm outperforms state-of-the-art model-based counterparts in terms of both computational time and reconstruction quality.
arXiv Detail & Related papers (2021-03-17T23:12:14Z) - Hyperspectral Image Super-resolution via Deep Progressive Zero-centric Residual Learning [62.52242684874278]
Cross-modality distribution of spatial and spectral information makes the problem challenging.
We propose a novel lightweight deep neural network-based framework, namely PZRes-Net.
Our framework learns a high resolution and zero-centric residual image, which contains high-frequency spatial details of the scene.
arXiv Detail & Related papers (2020-06-18T06:32:11Z) - HighRes-net: Recursive Fusion for Multi-Frame Super-Resolution of Satellite Imagery [55.253395881190436]
Multi-frame Super-Resolution (MFSR) offers a more grounded approach to the ill-posed problem.
This is important for satellite monitoring of human impact on the planet.
We present HighRes-net, the first deep learning approach to MFSR that learns its sub-tasks in an end-to-end fashion.
arXiv Detail & Related papers (2020-02-15T22:17:47Z)
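HighRes-net's recursive fusion can be sketched as repeatedly fusing pairs of co-registered low-resolution frames until one representation remains, so any number of input frames collapses through a shared pairwise operator. The simple average below is a stand-in for the learned CNN fusion; duplicating the last frame when the count is odd is one plausible padding choice, not necessarily the paper's.

```python
import numpy as np

def fuse_pair(a, b):
    """Stand-in for HighRes-net's learned pairwise fusion: a plain
    average here; the real model applies a shared CNN to each pair."""
    return 0.5 * (a + b)

def recursive_fuse(frames):
    """Recursively fuse a list of co-registered LR frames into a single
    representation by halving the set at each step (padding by
    duplication when the count is odd), mirroring the recursion in
    HighRes-net."""
    frames = list(frames)
    while len(frames) > 1:
        if len(frames) % 2:
            frames.append(frames[-1])
        frames = [fuse_pair(frames[i], frames[i + 1])
                  for i in range(0, len(frames), 2)]
    return frames[0]

# Five constant 4x4 frames with values 0..4 collapse to one 4x4 array.
lr = [np.full((4, 4), float(i)) for i in range(5)]
fused = recursive_fuse(lr)
```

Because the same pairwise operator is reused at every level, the fusion handles a variable number of input frames, which matters for irregular Sentinel-2 revisit schedules.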
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.