Deep Blind Super-Resolution for Satellite Video
- URL: http://arxiv.org/abs/2401.07139v1
- Date: Sat, 13 Jan 2024 18:56:18 GMT
- Title: Deep Blind Super-Resolution for Satellite Video
- Authors: Yi Xiao and Qiangqiang Yuan and Qiang Zhang and Liangpei Zhang
- Abstract summary: This paper proposes a practical Blind SVSR algorithm (BSVSR) to explore more sharp cues by considering the pixel-wise blur levels in a coarse-to-fine manner.
We employ multi-scale deformable convolution to coarsely aggregate the temporal redundancy into adjacent frames by window-slid progressive fusion.
We devise a pyramid spatial transformation module to adjust the solution space of sharp mid-feature, resulting in flexible feature adaptation in multi-level domains.
- Score: 30.82521485327735
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent efforts have witnessed remarkable progress in Satellite Video
Super-Resolution (SVSR). However, most SVSR methods usually assume the
degradation is fixed and known, e.g., bicubic downsampling, which makes them
vulnerable in real-world scenes with multiple and unknown degradations. To
alleviate this issue, blind SR has thus become a research hotspot.
Nevertheless, existing approaches are mainly engaged in blur kernel estimation
while losing sight of another critical aspect for VSR tasks: temporal
compensation, especially compensating for blurry and smooth pixels with vital
sharpness from severely degraded satellite videos. Therefore, this paper
proposes a practical Blind SVSR algorithm (BSVSR) to explore more sharp cues by
considering the pixel-wise blur levels in a coarse-to-fine manner.
Specifically, we employ multi-scale deformable convolution to coarsely
aggregate the temporal redundancy into adjacent frames by window-slid
progressive fusion. Then the adjacent features are finely merged into
mid-feature using deformable attention, which measures the blur levels of
pixels and assigns more weights to the informative pixels, thus inspiring the
representation of sharpness. Moreover, we devise a pyramid spatial
transformation module to adjust the solution space of sharp mid-feature,
resulting in flexible feature adaptation in multi-level domains. Quantitative
and qualitative evaluations on both simulated and real-world satellite videos
demonstrate that our BSVSR performs favorably against state-of-the-art
non-blind and blind SR models. Code will be available at
https://github.com/XY-boy/Blind-Satellite-VSR
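To make the fusion step concrete, the sketch below is a simplified, illustrative stand-in rather than the authors' BSVSR implementation (the official code is at the GitHub link above). It only shows the idea of predicting per-pixel weights over aligned neighbouring-frame features so that sharper, more informative pixels contribute more to the fused mid-feature; plain convolutions replace the paper's multi-scale deformable convolution and deformable attention, and all class and parameter names are hypothetical.

```python
# Simplified sketch of blur-aware temporal fusion, NOT the authors' BSVSR code.
# Aligned neighbouring-frame features are merged toward a mid-feature using
# per-pixel weights predicted by a small conv head (a stand-in for the
# deformable attention described in the abstract). Names are hypothetical.
import torch
import torch.nn as nn


class PixelwiseFusion(nn.Module):
    """Weight each aligned frame per pixel and fuse toward a mid-feature."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Predict one scalar weight per frame per pixel from the concatenated
        # frame feature and the reference (mid) feature.
        self.weight_head = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )
        self.merge = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, aligned: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
        # aligned: (B, T, C, H, W) features already aligned to the reference frame
        # ref:     (B, C, H, W) reference (mid) feature
        b, t, c, h, w = aligned.shape
        logits = [
            self.weight_head(torch.cat([aligned[:, i], ref], dim=1))
            for i in range(t)
        ]
        # Softmax over the temporal axis: sharper, more informative pixels
        # should receive larger weights than blurry ones.
        weights = torch.softmax(torch.stack(logits, dim=1), dim=1)  # (B, T, 1, H, W)
        fused = (weights * aligned).sum(dim=1)                      # (B, C, H, W)
        return self.merge(fused)


if __name__ == "__main__":
    fusion = PixelwiseFusion(channels=64)
    aligned = torch.randn(2, 5, 64, 32, 32)   # 5 aligned neighbouring frames
    ref = torch.randn(2, 64, 32, 32)          # mid-feature
    print(fusion(aligned, ref).shape)         # torch.Size([2, 64, 32, 32])
```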
Related papers
- Arbitrary-Scale Video Super-Resolution with Structural and Textural Priors [80.92195378575671]
We describe a strong baseline for arbitrary-scale video super-resolution (AVSR).
We then introduce ST-AVSR by equipping our baseline with a multi-scale structural and textural prior computed from the pre-trained VGG network.
Comprehensive experiments show that ST-AVSR significantly improves super-resolution quality, generalization ability, and inference speed over the state-of-the-art.
arXiv Detail & Related papers (2024-07-13T15:27:39Z) - DCS-RISR: Dynamic Channel Splitting for Efficient Real-world Image
Super-Resolution [15.694407977871341]
Real-world image super-resolution (RISR) has received increased focus for improving the quality of SR images under unknown complex degradation.
Existing methods rely on heavy SR models to enhance low-resolution (LR) images of different degradation levels.
We propose a novel Dynamic Channel Splitting scheme for efficient Real-world Image Super-Resolution, termed DCS-RISR.
arXiv Detail & Related papers (2022-12-15T04:34:57Z) - Blind Super-Resolution for Remote Sensing Images via Conditional
Stochastic Normalizing Flows [14.882417028542855]
We propose a novel blind SR framework based on the normalizing flow (BlindSRSNF) to address the above problems.
BlindSRSNF learns the conditional probability distribution over the high-resolution image space given a low-resolution (LR) image by explicitly optimizing the variational bound on the likelihood.
We show that the proposed algorithm can obtain SR results with excellent visual perception quality on both simulated LR and real-world RSIs.
arXiv Detail & Related papers (2022-10-14T12:37:32Z) - Hierarchical Similarity Learning for Aliasing Suppression Image
Super-Resolution [64.15915577164894]
A hierarchical image super-resolution network (HSRNet) is proposed to suppress the influence of aliasing.
HSRNet achieves better quantitative and visual performance than other works, and suppresses aliasing more effectively.
arXiv Detail & Related papers (2022-06-07T14:55:32Z) - Blind Image Super-Resolution via Contrastive Representation Learning [41.17072720686262]
We design a contrastive representation learning network that focuses on blind SR of images with multi-modal and spatially variant distributions.
We show that the proposed CRL-SR can handle multi-modal and spatially variant degradation effectively under blind settings.
It also outperforms state-of-the-art SR methods qualitatively and quantitatively.
arXiv Detail & Related papers (2021-07-01T19:34:23Z) - Frequency Consistent Adaptation for Real World Super Resolution [64.91914552787668]
We propose a novel Frequency Consistent Adaptation (FCA) that ensures the frequency domain consistency when applying Super-Resolution (SR) methods to the real scene.
We estimate degradation kernels from unsupervised images and generate the corresponding Low-Resolution (LR) images.
Based on the domain-consistent LR-HR pairs, we train easily implemented Convolutional Neural Network (CNN) SR models. A minimal sketch of this kernel-based degradation step is given after the related-papers list below.
arXiv Detail & Related papers (2020-12-18T08:25:39Z) - DynaVSR: Dynamic Adaptive Blind Video Super-Resolution [60.154204107453914]
DynaVSR is a novel meta-learning-based framework for real-world video SR.
We train a multi-frame downscaling module with various types of synthetic blur kernels, which is seamlessly combined with a video SR network for input-aware adaptation.
Experimental results show that DynaVSR consistently improves the performance of the state-of-the-art video SR models by a large margin.
arXiv Detail & Related papers (2020-11-09T15:07:32Z) - Hyperspectral Image Super-resolution via Deep Progressive Zero-centric
Residual Learning [62.52242684874278]
Cross-modality distribution of spatial and spectral information makes the problem challenging.
We propose a novel lightweight deep neural network-based framework, namely PZRes-Net.
Our framework learns a high-resolution and zero-centric residual image, which contains high-frequency spatial details of the scene.
arXiv Detail & Related papers (2020-06-18T06:32:11Z) - HighRes-net: Recursive Fusion for Multi-Frame Super-Resolution of
Satellite Imagery [55.253395881190436]
Multi-frame Super-Resolution (MFSR) offers a more grounded approach to the ill-posed problem.
This is important for satellite monitoring of human impact on the planet.
We present HighRes-net, the first deep learning approach to MFSR that learns its sub-tasks in an end-to-end fashion.
arXiv Detail & Related papers (2020-02-15T22:17:47Z)