Exploiting Digital Surface Models for Inferring Super-Resolution for
Remotely Sensed Images
- URL: http://arxiv.org/abs/2205.04056v1
- Date: Mon, 9 May 2022 06:02:50 GMT
- Title: Exploiting Digital Surface Models for Inferring Super-Resolution for
Remotely Sensed Images
- Authors: Savvas Karatsiolis, Chirag Padubidri and Andreas Kamilaris
- Abstract summary: This paper introduces a novel approach for forcing an SRR model to output realistic remote sensing images.
Instead of relying on feature-space similarities as a perceptual loss, the model considers pixel-level information inferred from the normalized Digital Surface Model (nDSM) of the image.
Based on visual inspection, the inferred super-resolution images are of markedly superior quality.
- Score: 2.3204178451683264
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the plethora of successful Super-Resolution Reconstruction (SRR)
models applied to natural images, their application to remote sensing imagery
tends to produce poor results. Remote sensing imagery is often more complex
than natural images and has its own peculiarities: it is typically of lower
resolution, contains noise, and often depicts large textured surfaces. As a
result, applying non-specialized SRR models to remote sensing imagery results
in artifacts and poor reconstructions. To address these problems, this paper
proposes an architecture inspired by previous research work, introducing a
novel approach for forcing an SRR model to output realistic remote sensing
images: instead of relying on feature-space similarities as a perceptual loss,
the model considers pixel-level information inferred from the normalized
Digital Surface Model (nDSM) of the image. This strategy allows better-informed
updates during the training of the model, sourced from a task (elevation map
inference) that is closely related to remote sensing. Nonetheless, the nDSM
auxiliary information is not required at inference time, and thus the model
infers a super-resolution image without any additional data besides its
low-resolution input. We assess our model on two remotely sensed
datasets of different spatial resolutions that also contain the DSM pairs of
the images: the DFC2018 dataset and the dataset containing the national Lidar
fly-by of Luxembourg. Based on visual inspection, the inferred super-resolution
images are of markedly superior quality. In particular, the results for the
high-resolution DFC2018 dataset are realistic and almost indistinguishable from
the ground-truth images.
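As a concrete illustration of the training idea described above, here is a minimal sketch (PyTorch) of a combined objective in which the SR output is additionally supervised by an nDSM predicted from it. The module names, the use of L1 losses, and the weighting factor are illustrative assumptions, not the authors' implementation; at inference only the SR model is evaluated, matching the abstract's note that the nDSM is not needed in production.

```python
# Minimal sketch (PyTorch) of training an SRR model with an auxiliary
# nDSM-inference loss instead of a feature-space perceptual loss.
# Module names, loss choices, and the weighting are illustrative assumptions.
import torch
import torch.nn.functional as F

def training_step(sr_model, ndsm_head, lr_img, hr_img, ndsm_gt, lambda_ndsm=0.1):
    """One optimization step: a pixel loss on the SR output plus a pixel-level
    loss on the nDSM predicted from that output. The nDSM is only needed
    during training; at inference the model sees just the LR image."""
    sr_pred = sr_model(lr_img)                  # super-resolved image
    pixel_loss = F.l1_loss(sr_pred, hr_img)     # standard reconstruction term

    ndsm_pred = ndsm_head(sr_pred)              # elevation map inferred from SR output
    ndsm_loss = F.l1_loss(ndsm_pred, ndsm_gt)   # pixel-level elevation supervision

    return pixel_loss + lambda_ndsm * ndsm_loss
```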
Related papers
- Towards Realistic Data Generation for Real-World Super-Resolution [58.88039242455039]
RealDGen is an unsupervised learning data generation framework designed for real-world super-resolution.
We develop content and degradation extraction strategies, which are integrated into a novel content-degradation decoupled diffusion model.
Experiments demonstrate that RealDGen excels in generating large-scale, high-quality paired data that mirrors real-world degradations.
arXiv Detail & Related papers (2024-06-11T13:34:57Z)
- Semantic Guided Large Scale Factor Remote Sensing Image Super-resolution with Generative Diffusion Prior [13.148815217684277]
Large scale factor super-resolution (SR) algorithms are vital for maximizing the utilization of low-resolution (LR) satellite data captured from orbit.
Existing methods confront challenges in recovering SR images with clear textures and correct ground objects.
We introduce a novel framework, the Semantic Guided Diffusion Model (SGDM), designed for large scale factor remote sensing image super-resolution.
arXiv Detail & Related papers (2024-05-11T16:06:16Z)
- RS-Mamba for Large Remote Sensing Image Dense Prediction [58.12667617617306]
We propose the Remote Sensing Mamba (RSM) for dense prediction tasks in large VHR remote sensing images.
RSM is specifically designed to capture the global context of remote sensing images with linear complexity.
Our model achieves better efficiency and accuracy than transformer-based models on large remote sensing images.
arXiv Detail & Related papers (2024-04-03T12:06:01Z)
- Learning Dual-Level Deformable Implicit Representation for Real-World Scale Arbitrary Super-Resolution [81.74583887661794]
We build a new real-world super-resolution benchmark with both integer and non-integer scaling factors for the training and evaluation of real-world scale arbitrary super-resolution.
Specifically, we design the appearance embedding and deformation field to handle both image-level and pixel-level deformations caused by real-world degradations.
Our trained model achieves state-of-the-art performance on the RealArbiSR and RealSR benchmarks for real-world scale arbitrary super-resolution.
arXiv Detail & Related papers (2024-03-16T13:44:42Z)
- RSDiff: Remote Sensing Image Generation from Text Using Diffusion Model [0.8747606955991705]
This research introduces a two-stage diffusion model methodology for synthesizing high-resolution satellite images from textual prompts.
The pipeline comprises a Low-Resolution Diffusion Model (LRDM) that generates initial images based on text inputs and a Super-Resolution Diffusion Model (SRDM) that refines these images into high-resolution outputs.
arXiv Detail & Related papers (2023-09-03T09:34:49Z)
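A minimal sketch of the two-stage cascade described for RSDiff above, assuming diffusion samplers exposing a simple sample(condition=...) interface; the names are placeholders, not the paper's API.

```python
# Illustrative sketch of a two-stage cascade as described for RSDiff:
# a low-resolution diffusion model conditioned on text, followed by a
# super-resolution diffusion model that refines its output.
# Object and method names are placeholders, not the paper's actual API.

def generate_satellite_image(prompt: str, lrdm, srdm):
    lr_image = lrdm.sample(condition=prompt)    # stage 1: text prompt -> LR image
    hr_image = srdm.sample(condition=lr_image)  # stage 2: LR image -> HR output
    return hr_image
```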
- Single-View Height Estimation with Conditional Diffusion Probabilistic Models [1.8782750537161614]
We train a generative diffusion model to learn the joint distribution of optical and DSM images as a Markov chain.
This is accomplished by minimizing a denoising score matching objective while being conditioned on the source image to generate realistic high resolution 3D surfaces.
In this paper we experiment with conditional denoising diffusion probabilistic models (DDPM) for height estimation from a single remotely sensed image.
arXiv Detail & Related papers (2023-04-26T00:37:05Z)
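The conditional denoising objective described in this entry can be sketched as follows; the noise schedule handling, the channel-concatenation conditioning, and all names are assumptions rather than the paper's implementation.

```python
# Sketch of one conditional DDPM training step for height estimation:
# the network predicts the noise added to the DSM, conditioned on the
# optical image (here via channel concatenation). Names are illustrative.
import torch
import torch.nn.functional as F

def ddpm_height_step(denoiser, optical, dsm, alphas_cumprod):
    b = dsm.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,), device=dsm.device)
    noise = torch.randn_like(dsm)
    a = alphas_cumprod[t].view(b, 1, 1, 1)
    noisy_dsm = a.sqrt() * dsm + (1 - a).sqrt() * noise      # forward diffusion
    pred = denoiser(torch.cat([noisy_dsm, optical], dim=1), t)
    return F.mse_loss(pred, noise)                            # denoising objective
```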
- Hierarchical Similarity Learning for Aliasing Suppression Image Super-Resolution [64.15915577164894]
A hierarchical image super-resolution network (HSRNet) is proposed to suppress the influence of aliasing.
HSRNet achieves better quantitative and visual performance than other works, and suppresses aliasing more effectively.
arXiv Detail & Related papers (2022-06-07T14:55:32Z)
- Single Image Internal Distribution Measurement Using Non-Local Variational Autoencoder [11.985083962982909]
This paper proposes a novel image-specific solution, namely the non-local variational autoencoder (NLVAE).
NLVAE is introduced as a self-supervised strategy that reconstructs high-resolution images using disentangled information from the non-local neighbourhood.
Experimental results from seven benchmark datasets demonstrate the effectiveness of the NLVAE model.
arXiv Detail & Related papers (2022-04-02T18:43:55Z)
- Sci-Net: a Scale Invariant Model for Building Detection from Aerial Images [0.0]
We propose a Scale-invariant neural network (Sci-Net) that is able to segment buildings present in aerial images at different spatial resolutions.
Specifically, we modified the U-Net architecture and fused it with dense Atrous Spatial Pyramid Pooling (ASPP) to extract fine-grained multi-scale representations.
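A minimal sketch of an ASPP block of the kind the Sci-Net summary mentions: parallel dilated convolutions fused by a 1x1 convolution to capture multi-scale context. Channel widths and dilation rates are assumptions.

```python
# Sketch of an Atrous Spatial Pyramid Pooling (ASPP) block: parallel
# dilated convolutions whose outputs are concatenated and fused.
# Widths and dilation rates are illustrative assumptions.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch, out_ch, dilations=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [torch.relu(branch(x)) for branch in self.branches]
        return self.fuse(torch.cat(feats, dim=1))
```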
arXiv Detail & Related papers (2021-11-12T16:45:20Z)
- Best-Buddy GANs for Highly Detailed Image Super-Resolution [71.13466303340192]
We consider the single image super-resolution (SISR) problem, where a high-resolution (HR) image is generated based on a low-resolution (LR) input.
Most methods along this line rely on a predefined single-LR-single-HR mapping, which is not flexible enough for the SISR task.
We propose best-buddy GANs (Beby-GAN) for rich-detail SISR. Relaxing the immutable one-to-one constraint, we allow the estimated patches to dynamically seek the best supervision.
arXiv Detail & Related papers (2021-03-29T02:58:27Z)
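A minimal sketch of the "best supervision" idea described in the Beby-GAN entry above: each estimated patch is compared with several candidate ground-truth patches and supervised by the closest one. The patch flattening and candidate construction are assumptions, not the paper's exact loss.

```python
# Sketch of "best-buddy" style supervision: each estimated patch is
# matched to the closest of several candidate ground-truth patches
# instead of a fixed one-to-one target. Shapes are assumptions.
import torch

def best_buddy_loss(sr_patches, hr_candidates):
    """sr_patches: (N, D) flattened estimated patches.
    hr_candidates: (N, K, D) candidate ground-truth patches per estimate."""
    dists = torch.cdist(sr_patches.unsqueeze(1), hr_candidates).squeeze(1)  # (N, K)
    best = dists.min(dim=1).values                                          # closest candidate
    return best.mean()
```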
- PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models [77.32079593577821]
PULSE (Photo Upsampling via Latent Space Exploration) generates high-resolution, realistic images at resolutions previously unseen in the literature.
Our method outperforms state-of-the-art methods in perceptual quality at higher resolutions and scale factors than previously possible.
arXiv Detail & Related papers (2020-03-08T16:44:31Z)
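A minimal sketch of the latent-space exploration named in the PULSE title: optimize a latent code so that the downscaled generator output matches the low-resolution input. The generator interface, downscaling choice, and optimizer settings are assumptions.

```python
# Sketch of latent-space exploration in the spirit of PULSE: search the
# latent space of a pretrained generator for a code whose downscaled
# output matches the low-resolution input. Interfaces are assumptions.
import torch
import torch.nn.functional as F

def latent_search(generator, lr_img, scale, steps=500, step_size=0.1):
    z = torch.randn(1, generator.latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=step_size)
    for _ in range(steps):
        hr = generator(z)                                    # candidate HR image
        down = F.interpolate(hr, scale_factor=1 / scale,
                             mode="bicubic", align_corners=False)
        loss = F.mse_loss(down, lr_img)                      # downscaling consistency
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(z).detach()
```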
This list is automatically generated from the titles and abstracts of the papers in this site.