High-Resolution Be Aware! Improving the Self-Supervised Real-World Super-Resolution
- URL: http://arxiv.org/abs/2411.16175v1
- Date: Mon, 25 Nov 2024 08:13:32 GMT
- Title: High-Resolution Be Aware! Improving the Self-Supervised Real-World Super-Resolution
- Authors: Yuehan Zhang, Angela Yao
- Abstract summary: Self-supervised learning is crucial for super-resolution because ground-truth images are usually unavailable for real-world settings.
Existing methods derive self-supervision from low-resolution images by creating pseudo-pairs or by enforcing a low-resolution reconstruction objective.
This paper strengthens awareness of the high-resolution image to improve self-supervised real-world super-resolution.
- Score: 37.546746047196486
- Abstract: Self-supervised learning is crucial for super-resolution because ground-truth images are usually unavailable in real-world settings. Existing methods derive self-supervision from low-resolution images by creating pseudo-pairs or by enforcing a low-resolution reconstruction objective. These methods struggle with insufficient modeling of real-world degradations and a lack of knowledge about high-resolution imagery, resulting in unnatural super-resolved results. This paper strengthens awareness of the high-resolution image to improve self-supervised real-world super-resolution. We propose a controller that adjusts the degradation modeling based on the quality of the super-resolution results. We also introduce a novel feature-alignment regularizer that directly constrains the distribution of super-resolved images. Our method finetunes off-the-shelf SR models for a target real-world domain. Experiments show that it produces natural super-resolved images with state-of-the-art perceptual performance.
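The abstract does not give the feature-alignment regularizer's exact form. The following is a minimal sketch of the underlying idea of constraining the distribution of super-resolved images toward high-resolution statistics; the function name and the choice of mean/covariance moment matching are assumptions, not the paper's formulation:

```python
import numpy as np

def feature_alignment_loss(sr_feats: np.ndarray, hr_feats: np.ndarray) -> float:
    """Moment-matching penalty between SR and HR feature distributions.

    sr_feats: (N, D) feature vectors extracted from super-resolved images.
    hr_feats: (M, D) feature vectors from reference high-resolution images.
    Returns a scalar: squared distance between the means plus the covariances.
    """
    mu_sr, mu_hr = sr_feats.mean(axis=0), hr_feats.mean(axis=0)
    cov_sr = np.cov(sr_feats, rowvar=False)
    cov_hr = np.cov(hr_feats, rowvar=False)
    return float(((mu_sr - mu_hr) ** 2).sum() + ((cov_sr - cov_hr) ** 2).sum())
```

Minimizing such a loss during finetuning would pull the super-resolved feature statistics toward those of real high-resolution imagery, which matches the stated goal of directly constraining the SR output distribution.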
Related papers
- ResMaster: Mastering High-Resolution Image Generation via Structural and Fine-Grained Guidance [46.64836025290448]
ResMaster is a training-free method that empowers resolution-limited diffusion models to generate high-quality images beyond resolution restrictions.
It provides structural and fine-grained guidance for crafting high-resolution images on a patch-by-patch basis.
Experiments validate that ResMaster sets a new benchmark for high-resolution image generation and demonstrates promising efficiency.
arXiv Detail & Related papers (2024-06-24T09:28:21Z) - DeeDSR: Towards Real-World Image Super-Resolution via Degradation-Aware Stable Diffusion [27.52552274944687]
We introduce a novel two-stage, degradation-aware framework that enhances the diffusion model's ability to recognize content and degradation in low-resolution images.
In the first stage, we employ unsupervised contrastive learning to obtain representations of image degradations.
In the second stage, we integrate a degradation-aware module into a simplified ControlNet, enabling flexible adaptation to various degradations.
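The summary above does not specify the contrastive objective used in the first stage. A generic InfoNCE sketch for learning degradation representations, assuming one positive and K negative examples per anchor (a standard setup, not DeeDSR's exact loss):

```python
import numpy as np

def info_nce(anchor: np.ndarray, positive: np.ndarray,
             negatives: np.ndarray, tau: float = 0.1) -> float:
    """InfoNCE loss for one anchor: pull the positive close, push negatives away.

    anchor, positive: (D,) representation vectors (e.g., of image degradations).
    negatives: (K, D) representations of differently degraded images.
    """
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # Positive similarity at index 0, negatives after it; temperature-scaled.
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return float(-np.log(probs[0]))              # cross-entropy with the positive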
arXiv Detail & Related papers (2024-03-31T12:07:04Z) - Learning Dual-Level Deformable Implicit Representation for Real-World Scale Arbitrary Super-Resolution [81.74583887661794]
We build a new real-world super-resolution benchmark with both integer and non-integer scaling factors.
We propose a Dual-level Deformable Implicit Representation (DDIR) to solve real-world scale arbitrary super-resolution.
Our trained model achieves state-of-the-art performance on the RealArbiSR and RealSR benchmarks for real-world scale arbitrary super-resolution.
arXiv Detail & Related papers (2024-03-16T13:44:42Z) - Implicit Diffusion Models for Continuous Super-Resolution [65.45848137914592]
This paper introduces an Implicit Diffusion Model (IDM) for high-fidelity continuous image super-resolution.
IDM integrates an implicit neural representation and a denoising diffusion model in a unified end-to-end framework.
The scaling factor regulates the resolution and accordingly modulates the proportion of the LR information and generated features in the final output.
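The exact modulation schedule is not given in this summary; a plausible linear blend illustrating how a scaling factor could trade off LR information against generated features (the `max_scale` cap and linear weighting are guesses):

```python
import numpy as np

def modulate(lr_info: np.ndarray, generated: np.ndarray,
             scale: float, max_scale: float = 8.0) -> np.ndarray:
    """Blend LR-derived features with generated features by scaling factor.

    Larger upscaling factors lean more on generated detail, since less of the
    output can be explained by the LR input alone.
    """
    alpha = 1.0 - min(scale, max_scale) / max_scale  # weight on LR information
    return alpha * lr_info + (1.0 - alpha) * generated
```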
arXiv Detail & Related papers (2023-03-29T07:02:20Z) - How Real is Real: Evaluating the Robustness of Real-World Super Resolution [0.0]
Super-resolution is a well-known problem, as most methods rely on a known downsampling operation applied to the high-resolution image to form the low-resolution input.
We will evaluate multiple state-of-the-art super-resolution methods and gauge their performance when presented with various types of real-life images.
We will present a potential solution to alleviate the generalization problem that is inherent in most state-of-the-art super-resolution models.
arXiv Detail & Related papers (2022-10-22T18:53:45Z) - A Generative Model for Hallucinating Diverse Versions of Super Resolution Images [0.3222802562733786]
We are tackling in this work the problem of obtaining different high-resolution versions from the same low-resolution image using Generative Adversarial Models.
Our learning approach exploits the high frequencies available in the training high-resolution images, preserving and exploring them in an unsupervised manner.
arXiv Detail & Related papers (2021-02-12T17:11:42Z) - Unsupervised Real Image Super-Resolution via Generative Variational AutoEncoder [47.53609520395504]
We revisit the classic example based image super-resolution approaches and come up with a novel generative model for perceptual image super-resolution.
We propose a joint image denoising and super-resolution model via Variational AutoEncoder.
With the aid of the discriminator, an additional super-resolution subnetwork is attached to super-resolve the denoised image with photo-realistic visual quality.
arXiv Detail & Related papers (2020-04-27T13:49:36Z) - PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models [77.32079593577821]
PULSE (Photo Upsampling via Latent Space Exploration) generates high-resolution, realistic images at resolutions previously unseen in the literature.
Our method outperforms state-of-the-art methods in perceptual quality at higher resolutions and scale factors than previously possible.
arXiv Detail & Related papers (2020-03-08T16:44:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.