Adaptive Blind Super-Resolution Network for Spatial-Specific and Spatial-Agnostic Degradations
- URL: http://arxiv.org/abs/2506.07705v1
- Date: Mon, 09 Jun 2025 12:48:17 GMT
- Title: Adaptive Blind Super-Resolution Network for Spatial-Specific and Spatial-Agnostic Degradations
- Authors: Weilei Wen, Chunle Guo, Wenqi Ren, Hongpeng Wang, Xiuli Shao
- Abstract summary: Degradation modalities, including sampling, blurring, and noise, can be roughly categorized into two classes. We introduce a dynamic filter network integrating global and local branches to address these two degradation types. Our proposed method outperforms state-of-the-art blind super-resolution algorithms on both synthetic and real image datasets.
- Score: 43.240960016869025
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Prior methods have disregarded the differences among distinct degradation types during image reconstruction, employing a uniform network model to handle multiple degradations. Nevertheless, we find that prevalent degradation modalities, including sampling, blurring, and noise, can be roughly categorized into two classes. We classify the first class as spatial-agnostic dominant degradations, which are less affected by regional changes in image space, such as downsampling and noise. The second class is intimately associated with the spatial position within the image, such as blurring; we identify these as spatial-specific dominant degradations. We introduce a dynamic filter network integrating global and local branches to address these two degradation types, which greatly alleviates the practical degradation problem. Specifically, the global dynamic filtering layer perceives the spatial-agnostic dominant degradation in different images by applying weights generated by an attention mechanism to multiple parallel standard convolution kernels, enhancing the network's representation ability. Meanwhile, the local dynamic filtering layer converts the feature maps of the image into a spatially specific dynamic filtering operator, which performs spatially specific convolution operations on the image features to handle spatial-specific dominant degradations. By effectively integrating both global and local dynamic filtering operators, our proposed method outperforms state-of-the-art blind super-resolution algorithms on both synthetic and real image datasets.
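The two branches described in the abstract can be illustrated with a minimal single-channel NumPy sketch. This is not the authors' implementation; all function names are hypothetical, and it assumes the attention weights and per-pixel kernels have already been predicted by some upstream network. The global branch mixes K parallel kernels into one image-wide kernel (spatial-agnostic); the local branch applies a different kernel at every pixel (spatial-specific).

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def conv2d(x, k):
    """Plain 'same' single-channel correlation with zero padding
    (the usual deep-learning 'convolution')."""
    H, W = x.shape
    kh, kw = k.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def global_dynamic_conv(feat, kernels, attn_logits):
    """Spatial-agnostic branch: attention weights mix K parallel
    standard kernels into ONE kernel used for the whole image."""
    w = softmax(attn_logits)                  # (K,)
    mixed = np.tensordot(w, kernels, axes=1)  # (K,3,3) -> (3,3)
    return conv2d(feat, mixed)

def local_dynamic_conv(feat, per_pixel_kernels):
    """Spatial-specific branch: each pixel (i, j) is filtered with its
    own 3x3 kernel, here passed in as a (H, W, 3, 3) array."""
    H, W = feat.shape
    xp = np.pad(feat, 1)
    out = np.zeros_like(feat)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + 3, j:j + 3] * per_pixel_kernels[i, j])
    return out
```

The key contrast is that `global_dynamic_conv` adapts per image (one mixed kernel, cheap and translation-invariant, suited to noise or downsampling), while `local_dynamic_conv` adapts per pixel (suited to spatially varying blur). In practice both would be implemented with batched tensor ops (e.g. an unfold/fold formulation) rather than Python loops.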
Related papers
- Generalization-aware Remote Sensing Change Detection via Domain-agnostic Learning [40.762693311584144]
We present a generalizable domain-agnostic difference learning network (DonaNet) for change detection. DonaNet learns domain-agnostic representations by removing the domain-specific style of encoded features and highlighting the class characteristics of objects. We further propose a cross-temporal generalization learning strategy to imitate latent domain shifts.
arXiv Detail & Related papers (2025-04-01T08:51:16Z) - UniUIR: Considering Underwater Image Restoration as An All-in-One Learner [49.35128836844725]
We propose a Universal Underwater Image Restoration method, termed UniUIR. To decouple degradation-specific issues and explore the inter-correlations among various degradations in the UIR task, we design a Mamba Mixture-of-Experts module. This module extracts degradation prior information in both the spatial and frequency domains, and adaptively selects the most appropriate task-specific prompts.
arXiv Detail & Related papers (2025-01-22T16:10:42Z) - Boosting Visual Recognition in Real-world Degradations via Unsupervised Feature Enhancement Module with Deep Channel Prior [22.323789227447755]
Fog, low-light, and motion blur degrade image quality and pose threats to the safety of autonomous driving.
This work proposes a novel Deep Channel Prior (DCP) for degraded visual recognition.
Based on this, a novel plug-and-play Unsupervised Feature Enhancement Module (UFEM) is proposed to achieve unsupervised feature correction.
arXiv Detail & Related papers (2024-04-02T07:16:56Z) - DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process, and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z) - Efficient and Explicit Modelling of Image Hierarchies for Image Restoration [120.35246456398738]
We propose a mechanism to efficiently and explicitly model image hierarchies in the global, regional, and local range for image restoration.
Inspired by that, we propose the anchored stripe self-attention which achieves a good balance between the space and time complexity of self-attention.
Then we propose a new network architecture dubbed GRL to explicitly model image hierarchies in the Global, Regional, and Local range.
arXiv Detail & Related papers (2023-03-01T18:59:29Z) - Spatially-Adaptive Image Restoration using Distortion-Guided Networks [51.89245800461537]
We present a learning-based solution for restoring images suffering from spatially-varying degradations.
We propose SPAIR, a network design that harnesses distortion-localization information and dynamically adjusts to difficult regions in the image.
arXiv Detail & Related papers (2021-08-19T11:02:25Z) - Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields.
To better train this efficient generator, except for frequently-used VGG feature matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global contents consistency.
arXiv Detail & Related papers (2020-02-07T03:45:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.