Content-decoupled Contrastive Learning-based Implicit Degradation Modeling for Blind Image Super-Resolution
- URL: http://arxiv.org/abs/2408.05440v1
- Date: Sat, 10 Aug 2024 04:51:43 GMT
- Title: Content-decoupled Contrastive Learning-based Implicit Degradation Modeling for Blind Image Super-Resolution
- Authors: Jiang Yuan, Ji Ma, Bo Wang, Weiming Hu
- Abstract summary: Implicit degradation modeling-based blind super-resolution (SR) has attracted increasing attention in the community.
We propose a new Content-decoupled Contrastive Learning-based blind image super-resolution (CdCL) framework.
- Score: 33.16889233975723
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit degradation modeling-based blind super-resolution (SR) has attracted increasing attention in the community due to its excellent generalization to complex degradation scenarios and wide application range. How to extract more discriminative degradation representations and fully adapt them to specific image features is the key to this task. In this paper, we propose a new Content-decoupled Contrastive Learning-based blind image super-resolution (CdCL) framework following the typical blind SR pipeline. This framework introduces a negative-free contrastive learning technique for the first time to model the implicit degradation representation, in which a new cyclic shift sampling strategy is designed to ensure decoupling between content features and degradation features from the data perspective, thereby improving the purity and discriminability of the learned implicit degradation space. In addition, to improve the efficiency and effectiveness of implicit degradation-based blind super-resolution, we design a detail-aware implicit degradation adaption module with lower complexity, which adapts degradation information to the specific LR image from both channel and spatial perspectives. Extensive experiments on synthetic and real data show that the proposed CdCL comprehensively improves the quantitative and qualitative results of the contrastive learning-based implicit blind SR paradigm and achieves state-of-the-art PSNR in this field. Even with the number of parameters halved, our method still achieves very competitive results.
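The cyclic shift sampling idea and the negative-free objective can be pictured with a short sketch. Everything below is an illustrative assumption rather than the paper's actual code: the shift amounts, the SimSiam-style stop-gradient loss, and all function names are hypothetical, and CdCL may pair views and define its loss differently.

```python
import torch
import torch.nn.functional as F

def cyclic_shift_views(lr_img):
    """Two views of one LR image batch (B, C, H, W): identical degradation,
    cyclically shifted content. The half-height/half-width shift is an
    arbitrary illustrative choice."""
    _, _, h, w = lr_img.shape
    view1 = lr_img
    view2 = torch.roll(lr_img, shifts=(h // 2, w // 2), dims=(-2, -1))
    return view1, view2

def negative_free_loss(p1, z1, p2, z2):
    """SimSiam-style negative-free objective: negative cosine similarity with
    a stop-gradient on the target branch (one common choice, not necessarily
    the loss used in CdCL)."""
    return -(F.cosine_similarity(p1, z2.detach(), dim=-1).mean() +
             F.cosine_similarity(p2, z1.detach(), dim=-1).mean()) / 2
```

In such a pipeline, an encoder would map both views to degradation embeddings (z1, z2) and a small predictor head to (p1, p2); because both views come from the same LR image, they share the degradation but present cyclically shifted content, which is the content/degradation decoupling the abstract describes.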
Related papers
- Degradation Oriented and Regularized Network for Blind Depth Super-Resolution [48.744290794713905]
In real-world scenarios, captured depth data often suffer from unconventional and unknown degradation due to sensor limitations and complex imaging environments.
We propose the Degradation Oriented and Regularized Network (DORNet), a novel framework designed to adaptively address unknown degradation in real-world scenes.
Our approach begins with the development of a self-supervised degradation learning strategy, which models the degradation representations of low-resolution depth data.
To facilitate effective RGB-D fusion, we further introduce a degradation-oriented feature transformation module that selectively propagates RGB content into the depth data based on the learned degradation priors.
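One way to picture such a degradation-oriented feature transformation is a gating step in which the learned degradation prior controls how much RGB guidance is injected into the depth features. The sketch below is a minimal, hypothetical instantiation; the class, its layers, and all names are assumptions, not DORNet's actual design.

```python
import torch
import torch.nn as nn

class DegradationOrientedFusion(nn.Module):
    """Hypothetical sketch of degradation-oriented RGB-D fusion: a learned
    degradation embedding gates how much RGB guidance flows into the depth
    features (module and variable names are illustrative only)."""
    def __init__(self, channels, deg_dim):
        super().__init__()
        self.to_gate = nn.Sequential(nn.Linear(deg_dim, channels), nn.Sigmoid())
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, depth_feat, rgb_feat, deg_embed):
        gate = self.to_gate(deg_embed)[:, :, None, None]   # (B, C, 1, 1)
        guided_rgb = gate * rgb_feat                        # degradation-aware gating
        return self.fuse(torch.cat([depth_feat, guided_rgb], dim=1))
```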
arXiv Detail & Related papers (2024-10-15T14:53:07Z)
- Degradation-Guided One-Step Image Super-Resolution with Diffusion Priors [75.24313405671433]
Diffusion-based image super-resolution (SR) methods have achieved remarkable success by leveraging large pre-trained text-to-image diffusion models as priors.
We introduce a novel one-step SR model, which significantly addresses the efficiency issue of diffusion-based SR methods.
Unlike existing fine-tuning strategies, we designed a degradation-guided Low-Rank Adaptation (LoRA) module specifically for SR.
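A degradation-guided LoRA can be read as a standard low-rank adapter whose update is conditioned on a degradation embedding. The sketch below is only a plausible reading under that assumption; the module structure and all names are hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn

class DegradationGuidedLoRA(nn.Module):
    """Illustrative sketch (not the paper's implementation): a low-rank update
    to a frozen linear layer, modulated per-sample by a degradation embedding."""
    def __init__(self, base: nn.Linear, rank: int, deg_dim: int):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                  # keep the pretrained weight frozen
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)               # start as an identity update
        self.modulate = nn.Linear(deg_dim, rank)     # degradation conditions the low-rank path

    def forward(self, x, deg_embed):
        # x: (B, in_features), deg_embed: (B, deg_dim)
        delta = self.up(self.down(x) * self.modulate(deg_embed))
        return self.base(x) + delta
```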
arXiv Detail & Related papers (2024-09-25T16:15:21Z)
- Preserving Full Degradation Details for Blind Image Super-Resolution [40.152015542099704]
We propose an alternative to learn degradation representations through reproducing degraded low-resolution (LR) images.
By guiding the degrader to reconstruct input LR images, full degradation information can be encoded into the representations.
Experiments show that our representations can extract accurate and highly robust degradation information.
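The "learn the degradation by reproducing the LR image" idea can be sketched as an encoder-degrader pair trained with a reconstruction loss. The architecture below is a deliberately simplified assumption (it presumes paired HR/LR data and uses made-up layer sizes); it only illustrates the training signal, not the paper's network.

```python
import torch
import torch.nn as nn

class DegradationByReproduction(nn.Module):
    """Plausible sketch of the idea (details are assumptions): an encoder
    summarises the LR image into a degradation code, and a degrader must
    re-create that LR image from a clean downsampled HR source using only the
    code, which forces the code to carry the full degradation."""
    def __init__(self, code_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, code_dim))
        self.degrader = nn.Sequential(           # maps clean content + code to a fake LR
            nn.Conv2d(3 + code_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1))

    def forward(self, lr, hr_down):
        code = self.encoder(lr)
        code_map = code[:, :, None, None].expand(-1, -1, *hr_down.shape[-2:])
        fake_lr = self.degrader(torch.cat([hr_down, code_map], dim=1))
        return fake_lr, code                     # train with e.g. L1(fake_lr, lr)
```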
arXiv Detail & Related papers (2024-07-01T13:54:59Z)
- Suppressing Uncertainties in Degradation Estimation for Blind Super-Resolution [31.89605287039615]
The problem of blind image super-resolution aims to recover high-resolution (HR) images from low-resolution (LR) images with unknown degradation modes.
Most existing methods model the image degradation process using blur kernels.
We propose an Uncertainty-based degradation representation for blind Super-Resolution framework.
arXiv Detail & Related papers (2024-06-24T08:58:43Z)
- LLDiffusion: Learning Degradation Representations in Diffusion Models for Low-Light Image Enhancement [118.83316133601319]
Current deep learning methods for low-light image enhancement (LLIE) typically rely on pixel-wise mapping learned from paired data.
We propose a degradation-aware learning scheme for LLIE using diffusion models, which effectively integrates degradation and image priors into the diffusion process.
arXiv Detail & Related papers (2023-07-27T07:22:51Z)
- Bridging Component Learning with Degradation Modelling for Blind Image Super-Resolution [69.11604249813304]
We propose a components decomposition and co-optimization network (CDCN) for blind SR.
CDCN decomposes the input LR image into structure and detail components in feature space.
We present a degradation-driven learning strategy to jointly supervise the HR image detail and structure restoration process.
arXiv Detail & Related papers (2022-12-03T14:53:56Z)
- Blind Super-Resolution for Remote Sensing Images via Conditional Stochastic Normalizing Flows [14.882417028542855]
We propose a novel blind SR framework based on the normalizing flow (BlindSRSNF) to address the above problems.
BlindSRSNF learns the conditional probability distribution over the high-resolution image space given a low-resolution (LR) image by explicitly optimizing the variational bound on the likelihood.
We show that the proposed algorithm can obtain SR results with excellent visual perception quality on both simulated LR and real-world remote sensing images (RSIs).
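For context on the objective such flow-based SR methods target, the exact conditional log-likelihood of a deterministic normalizing flow follows from the change-of-variables formula below; a stochastic normalizing flow, as used here, optimizes a variational lower bound on this quantity rather than evaluating it exactly. The formula is the textbook one, not the paper's specific bound.

```latex
% Change-of-variables log-likelihood of an invertible flow f_\theta that maps an
% HR image x, conditioned on the LR image y, to a latent z with prior p_Z:
\log p_\theta(x \mid y)
  = \log p_Z\!\bigl(f_\theta(x; y)\bigr)
  + \log\left|\det \frac{\partial f_\theta(x; y)}{\partial x}\right|
```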
arXiv Detail & Related papers (2022-10-14T12:37:32Z)
- Hierarchical Similarity Learning for Aliasing Suppression Image Super-Resolution [64.15915577164894]
A hierarchical image super-resolution network (HSRNet) is proposed to suppress the influence of aliasing.
HSRNet achieves better quantitative and visual performance than other works and suppresses aliasing more effectively.
arXiv Detail & Related papers (2022-06-07T14:55:32Z)
- Uncovering the Over-smoothing Challenge in Image Super-Resolution: Entropy-based Quantification and Contrastive Optimization [67.99082021804145]
We propose an explicit solution to the COO problem, called Detail Enhanced Contrastive Loss (DECLoss).
DECLoss utilizes the clustering property of contrastive learning to directly reduce the variance of the potential high-resolution distribution.
We evaluate DECLoss on multiple super-resolution benchmarks and demonstrate that it improves the perceptual quality of PSNR-oriented models.
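DECLoss's exact formulation is not reproduced here; as general background for contrastive optimization over image patches, a textbook InfoNCE loss between SR and HR patch features looks like the sketch below. All names are illustrative, and this is explicitly not the paper's loss.

```python
import torch
import torch.nn.functional as F

def patch_info_nce(sr_feats, hr_feats, temperature=0.1):
    """Generic InfoNCE over patch features of shape (B, D): each SR patch is
    pulled toward its own HR counterpart and pushed away from all other HR
    patches in the batch. Included for context only, not DECLoss itself."""
    sr = F.normalize(sr_feats, dim=-1)
    hr = F.normalize(hr_feats, dim=-1)
    logits = sr @ hr.t() / temperature            # (B, B) similarity matrix
    targets = torch.arange(sr.size(0), device=sr.device)
    return F.cross_entropy(logits, targets)
```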
arXiv Detail & Related papers (2022-01-04T08:30:09Z)
- Blind Image Super-Resolution via Contrastive Representation Learning [41.17072720686262]
We design a contrastive representation learning network that focuses on blind SR of images with multi-modal and spatially variant distributions.
We show that the proposed CRL-SR can handle multi-modal and spatially variant degradation effectively under blind settings.
It also outperforms state-of-the-art SR methods qualitatively and quantitatively.
arXiv Detail & Related papers (2021-07-01T19:34:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.