Blind Image Super-resolution with Rich Texture-Aware Codebooks
- URL: http://arxiv.org/abs/2310.17188v1
- Date: Thu, 26 Oct 2023 07:00:18 GMT
- Title: Blind Image Super-resolution with Rich Texture-Aware Codebooks
- Authors: Rui Qin, Ming Sun, Fangyuan Zhang, Xing Wen, Bin Wang
- Abstract summary: Blind super-resolution (BSR) methods based on high-resolution (HR) reconstruction codebooks have achieved promising results in recent years.
We find that a codebook based on HR reconstruction may not effectively capture the complex correlations between low-resolution (LR) and HR images.
- We propose the Rich Texture-aware Codebook-based Network (RTCNet), which consists of the Degradation-robust Texture Prior Module (DTPM) and the Patch-aware Texture Prior Module (PTPM).
- Score: 12.608418657067947
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Blind super-resolution (BSR) methods based on high-resolution (HR)
reconstruction codebooks have achieved promising results in recent years.
However, we find that a codebook based on HR reconstruction may not effectively
capture the complex correlations between low-resolution (LR) and HR images. In
detail, multiple HR images may produce similar LR versions under complex blind
degradations, so codebooks that depend only on HR reconstruction offer limited
texture diversity when faced with ambiguous LR inputs. To alleviate this problem, we
propose the Rich Texture-aware Codebook-based Network (RTCNet), which consists
of the Degradation-robust Texture Prior Module (DTPM) and the Patch-aware
Texture Prior Module (PTPM). DTPM mines the cross-resolution correlation of
textures between LR and HR images, yielding a texture prior that stays robust
under degradation. PTPM uses patch-wise semantic
pre-training to correct the misperception of texture similarity in the
high-level semantic regularization. As a result, RTCNet effectively resolves
the misalignment of confusing textures between HR and LR images in BSR
scenarios. Experiments show that RTCNet outperforms state-of-the-art methods
on various benchmarks by 0.16 to 0.46 dB.
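The codebook mechanism underlying such BSR methods can be illustrated with a minimal vector-quantization sketch (the function name, shapes, and toy data below are illustrative assumptions, not RTCNet's actual implementation): each feature vector extracted from the input is replaced by its nearest codebook entry, so a codebook learned only from HR reconstruction maps ambiguous LR features onto a fixed set of texture prototypes.

```python
import numpy as np

def quantize(features, codebook):
    """Replace each feature vector with its nearest codebook entry.

    features: (N, D) array of feature vectors from an encoder.
    codebook: (K, D) array of learned texture prototypes.
    Returns the quantized features and the chosen indices.
    """
    # Pairwise squared distances between every feature and every prototype.
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d2.argmin(axis=1)  # index of the nearest prototype per feature
    return codebook[idx], idx

# Toy example: 4 features quantized against a codebook of 3 prototypes in 2-D.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(3, 2))
features = rng.normal(size=(4, 2))
quantized, idx = quantize(features, codebook)
assert quantized.shape == features.shape
```

Because many distinct HR textures can collapse to similar LR features under blind degradation, the argmin lookup above can return the same prototype for visually different ground truths; this is the limited-diversity problem the paper's DTPM and PTPM are designed to mitigate.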
Related papers
- Enhanced Super-Resolution Training via Mimicked Alignment for Real-World Scenes [51.92255321684027]
We propose a novel plug-and-play module designed to mitigate misalignment issues by aligning LR inputs with HR images during training.
Specifically, our approach involves mimicking a novel LR sample that aligns with HR while preserving the characteristics of the original LR samples.
We comprehensively evaluate our method on synthetic and real-world datasets, demonstrating its effectiveness across a spectrum of SR models.
arXiv Detail & Related papers (2024-10-07T18:18:54Z)
- A Feature Reuse Framework with Texture-adaptive Aggregation for Reference-based Super-Resolution [29.57364804554312]
Reference-based super-resolution (RefSR) has gained considerable success in the field of super-resolution.
We propose a feature reuse framework that guides the step-by-step texture reconstruction process.
We introduce a single image feature embedding module and a texture-adaptive aggregation module.
arXiv Detail & Related papers (2023-06-02T12:49:22Z)
- Learning Detail-Structure Alternative Optimization for Blind Super-Resolution [69.11604249813304]
We propose an effective and kernel-free network, namely DSSR, which enables recurrent detail-structure alternative optimization without blur kernel prior incorporation for blind SR.
In our DSSR, a detail-structure modulation module (DSMM) is built to exploit the interaction and collaboration of image details and structures.
Our method achieves state-of-the-art performance compared with existing methods.
arXiv Detail & Related papers (2022-12-03T14:44:17Z)
- Reference-based Image Super-Resolution with Deformable Attention Transformer [62.71769634254654]
RefSR aims to exploit auxiliary reference (Ref) images to super-resolve low-resolution (LR) images.
This paper proposes a deformable attention Transformer, namely DATSR, with multiple scales.
Experiments demonstrate that our DATSR achieves state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2022-07-25T07:07:00Z)
- Hierarchical Similarity Learning for Aliasing Suppression Image Super-Resolution [64.15915577164894]
A hierarchical image super-resolution network (HSRNet) is proposed to suppress the influence of aliasing.
HSRNet achieves better quantitative and visual performance than other works, and mitigates aliasing more effectively.
arXiv Detail & Related papers (2022-06-07T14:55:32Z)
- LAPAR: Linearly-Assembled Pixel-Adaptive Regression Network for Single Image Super-Resolution and Beyond [75.37541439447314]
Single image super-resolution (SISR) deals with a fundamental problem of upsampling a low-resolution (LR) image to its high-resolution (HR) version.
This paper proposes a linearly-assembled pixel-adaptive regression network (LAPAR) to strike a sweet spot of deep model complexity and resulting SISR quality.
arXiv Detail & Related papers (2021-05-21T15:47:18Z)
- Cross-Scale Internal Graph Neural Network for Image Super-Resolution [147.77050877373674]
Non-local self-similarity in natural images has been well studied as an effective prior in image restoration.
For single image super-resolution (SISR), most existing deep non-local methods only exploit similar patches within the same scale of the low-resolution (LR) input image.
Cross-scale patch similarities are instead exploited with a novel cross-scale internal graph neural network (IGNN).
arXiv Detail & Related papers (2020-06-30T10:48:40Z)
- Learning Texture Transformer Network for Image Super-Resolution [47.86443447491344]
We propose a Texture Transformer Network for Image Super-Resolution (TTSR)
TTSR consists of four closely-related modules optimized for image generation tasks.
TTSR achieves significant improvements over state-of-the-art approaches on both quantitative and qualitative evaluations.
arXiv Detail & Related papers (2020-06-07T12:55:34Z)
- Deep Generative Adversarial Residual Convolutional Networks for Real-World Super-Resolution [31.934084942626257]
We propose a deep Super-Resolution Residual Convolutional Generative Adversarial Network (SRResCGAN)
It follows real-world degradation settings by adversarially training the model with pixel-wise supervision in the HR domain from its generated LR counterpart.
The proposed network exploits the residual learning by minimizing the energy-based objective function with powerful image regularization and convex optimization techniques.
arXiv Detail & Related papers (2020-05-03T00:12:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.