Enhancing Sentinel-2 Image Resolution: Evaluating Advanced Techniques based on Convolutional and Generative Neural Networks
- URL: http://arxiv.org/abs/2410.00516v1
- Date: Tue, 1 Oct 2024 08:56:46 GMT
- Title: Enhancing Sentinel-2 Image Resolution: Evaluating Advanced Techniques based on Convolutional and Generative Neural Networks
- Authors: Patrick Kramer, Alexander Steinhardt, Barbara Pedretscher
- Abstract summary: This paper investigates enhancing the spatial resolution of spectrally informative Sentinel-2 bands by a factor of 2 using advanced super-resolution techniques.
State-of-the-art CNN models are compared with enhanced GAN approaches in terms of quality and feasibility.
GAN-based models not only provide clear and detailed images, but also demonstrate superior performance in terms of quantitative assessment.
- Score: 44.99833362998488
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper investigates enhancing the spatial resolution of spectrally informative Sentinel-2 bands by a factor of 2 using advanced super-resolution techniques. State-of-the-art CNN models are compared with enhanced GAN approaches in terms of quality and feasibility. To this end, a representative dataset comprising Sentinel-2 low-resolution images and corresponding high-resolution aerial orthophotos is required. A literature study revealed no feasible dataset for the land type of interest (forests), so an adequate dataset had to be generated, accounting for accurate alignment and image-source optimization. The results reveal that while CNN-based approaches produce satisfactory outcomes, they tend to yield blurry images. In contrast, GAN-based models not only provide clear and detailed images, but also demonstrate superior performance in quantitative assessment, underlining the potential of the framework beyond the specific land type investigated.
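The quantitative assessment referred to above is typically done with metrics such as PSNR between the super-resolved output and the high-resolution reference. A minimal NumPy sketch, using a naive nearest-neighbor 2x upsampler as a hypothetical stand-in for the trained CNN/GAN models (this is not the paper's method, only an illustration of the evaluation setup):

```python
import numpy as np

def upsample_2x_nearest(img):
    """Naive 2x nearest-neighbor upsampling: a hypothetical baseline
    standing in for a trained super-resolution model."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def psnr(reference, estimate, max_val=1.0):
    """Peak signal-to-noise ratio, a standard quantitative SR metric."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: an 8x8 "high-resolution" reference and its 2x-downsampled band.
rng = np.random.default_rng(0)
hr = rng.random((8, 8))
lr = hr[::2, ::2]               # simulate the low-resolution input
sr = upsample_2x_nearest(lr)    # "super-resolve" back to reference size
print(f"PSNR of naive 2x baseline: {psnr(hr, sr):.2f} dB")
```

A learned model would be compared by substituting its output for `sr`; higher PSNR indicates a closer match to the orthophoto reference.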
Related papers
- Underwater SONAR Image Classification and Analysis using LIME-based Explainable Artificial Intelligence [0.0]
This paper explores the application of eXplainable Artificial Intelligence (XAI) tools to interpret underwater image classification results.
An extensive analysis of transfer learning techniques for image classification using benchmark Convolutional Neural Network (CNN) architectures is carried out.
XAI techniques make the results interpretable in a more human-compliant way, thus increasing confidence in and reliability of the classifications.
arXiv Detail & Related papers (2024-08-23T04:54:18Z) - S2RC-GCN: A Spatial-Spectral Reliable Contrastive Graph Convolutional Network for Complex Land Cover Classification Using Hyperspectral Images [10.579474650543471]
This study proposes a novel spatial-spectral reliable contrastive graph convolutional classification framework named S2RC-GCN.
Specifically, we fused the spectral and spatial features extracted by the 1D- and 2D-encoder, and the 2D-encoder includes an attention model to automatically extract important information.
We then leveraged the fused high-level features to construct graphs and fed the resulting graphs into the GCNs to determine more effective graph representations.
arXiv Detail & Related papers (2024-04-01T07:17:02Z) - Diffusion Model Based Visual Compensation Guidance and Visual Difference Analysis for No-Reference Image Quality Assessment [82.13830107682232]
We propose a novel class of state-of-the-art (SOTA) generative models, which exhibit the capability to model intricate relationships.
We devise a new diffusion restoration network that leverages the produced enhanced image and noise-containing images.
Two visual evaluation branches are designed to comprehensively analyze the obtained high-level feature information.
arXiv Detail & Related papers (2024-02-22T09:39:46Z) - Leveraging Neural Radiance Fields for Uncertainty-Aware Visual Localization [56.95046107046027]
We propose to leverage Neural Radiance Fields (NeRF) to generate training samples for scene coordinate regression.
Despite NeRF's efficiency in rendering, much of the rendered data is polluted by artifacts or contains only minimal information gain.
arXiv Detail & Related papers (2023-10-10T20:11:13Z) - PUGAN: Physical Model-Guided Underwater Image Enhancement Using GAN with Dual-Discriminators [120.06891448820447]
Obtaining clear and visually pleasing underwater images has become a common concern, and the task of underwater image enhancement (UIE) has emerged to meet this need.
In this paper, we propose a physical model-guided GAN model for UIE, referred to as PUGAN.
Our PUGAN outperforms state-of-the-art methods in both qualitative and quantitative metrics.
arXiv Detail & Related papers (2023-06-15T07:41:12Z) - Efficient texture-aware multi-GAN for image inpainting [5.33024001730262]
Recent GAN-based (Generative Adversarial Network) inpainting methods show remarkable improvements.
We propose a multi-GAN architecture improving both the performance and rendering efficiency.
arXiv Detail & Related papers (2020-09-30T14:58:03Z) - Interpretable Detail-Fidelity Attention Network for Single Image Super-Resolution [89.1947690981471]
We propose a purposeful and interpretable detail-fidelity attention network that progressively processes smooth regions and details in a divide-and-conquer manner.
In particular, we propose Hessian filtering for interpretable feature representation, which is well suited to detail inference.
Experiments demonstrate that the proposed methods achieve superior performances over the state-of-the-art methods.
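The Hessian filtering mentioned in this summary exploits the fact that second-order derivatives respond strongly to fine detail and weakly to smooth regions. A minimal NumPy sketch of this idea (an illustration of the general principle, not the paper's specific filter design):

```python
import numpy as np

def hessian_detail_map(img):
    """Per-pixel detail map from the 2x2 image Hessian: second derivatives
    are large near edges and texture, near zero in smooth regions."""
    gy, gx = np.gradient(img.astype(np.float64))   # first derivatives
    hyy, hyx = np.gradient(gy)                      # second derivatives
    hxy, hxx = np.gradient(gx)
    # Frobenius norm of the Hessian at each pixel as a detail score.
    return np.sqrt(hxx**2 + hyy**2 + hxy**2 + hyx**2)

# A vertical step edge: the response concentrates near the edge,
# while the flat left region stays at zero.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
detail = hessian_detail_map(img)
print(detail.max() > detail[:, 0].max())
```

Separating pixels by such a detail score is one simple way to route smooth regions and fine structures to different processing branches, in the spirit of the divide-and-conquer strategy described above.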
arXiv Detail & Related papers (2020-09-28T08:31:23Z) - Unlimited Resolution Image Generation with R2D2-GANs [69.90258455164513]
We present a novel simulation technique for generating high quality images of any predefined resolution.
This method can be used to synthesize sonar scans of size equivalent to those collected during a full-length mission.
The data produced is continuous, realistic-looking, and can be generated at least twice as fast as the real acquisition speed.
arXiv Detail & Related papers (2020-03-02T17:49:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.