Iterative Token Evaluation and Refinement for Real-World
Super-Resolution
- URL: http://arxiv.org/abs/2312.05616v1
- Date: Sat, 9 Dec 2023 17:07:32 GMT
- Title: Iterative Token Evaluation and Refinement for Real-World
Super-Resolution
- Authors: Chaofeng Chen, Shangchen Zhou, Liang Liao, Haoning Wu, Wenxiu Sun,
Qiong Yan, Weisi Lin
- Abstract summary: Real-world image super-resolution (RWSR) is a long-standing problem as low-quality (LQ) images often have complex and unidentified degradations.
We propose an Iterative Token Evaluation and Refinement framework for RWSR.
We show that ITER is easier to train than Generative Adversarial Networks (GANs) and more efficient than continuous diffusion models.
- Score: 77.74289677520508
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Real-world image super-resolution (RWSR) is a long-standing problem as
low-quality (LQ) images often have complex and unidentified degradations.
Existing methods such as Generative Adversarial Networks (GANs) and continuous
diffusion models each have drawbacks: GANs are difficult to train, while
continuous diffusion models require numerous inference steps. In
this paper, we propose an Iterative Token Evaluation and Refinement (ITER)
framework for RWSR, which utilizes a discrete diffusion model operating in the
discrete token representation space, i.e., indices of features extracted from a
VQGAN codebook pre-trained with high-quality (HQ) images. We show that ITER is
easier to train than GANs and more efficient than continuous diffusion models.
Specifically, we divide RWSR into two sub-tasks, i.e., distortion removal and
texture generation. Distortion removal involves simple HQ token prediction with
LQ images, while texture generation uses a discrete diffusion model to
iteratively refine the distortion removal output with a token refinement
network. In particular, we propose to include a token evaluation network in the
discrete diffusion process. It learns to evaluate which tokens are good
restorations and helps to improve the iterative refinement results. Moreover,
the evaluation network can first check the status of the distortion removal
output and then adaptively select the total number of refinement steps needed,
thereby maintaining a
good balance between distortion removal and texture generation. Extensive
experimental results show that ITER is easy to train and performs well within
just 8 iterative steps. Our code will be made publicly available.
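
To make the evaluate-then-refine loop described in the abstract concrete, here is a minimal PyTorch-style sketch of how such a loop could be wired together. All module interfaces (`distortion_removal`, `token_evaluator`, `token_refiner`, `vqgan_decoder`) and the confidence-threshold logic are illustrative assumptions, not the authors' released implementation.

```python
import torch

@torch.no_grad()
def iter_sr(lq_image, distortion_removal, token_evaluator,
            token_refiner, vqgan_decoder, max_steps=8, keep_thresh=0.5):
    """Hypothetical evaluate-then-refine loop in VQGAN token space.

    distortion_removal : predicts HQ codebook indices from the LQ image
    token_evaluator    : scores each token's restoration quality in [0, 1]
    token_refiner      : re-predicts tokens judged to be poor restorations
    vqgan_decoder      : maps final token indices back to an HQ image
    """
    # Sub-task 1: distortion removal -> initial HQ token prediction.
    tokens = distortion_removal(lq_image)            # (B, N) long indices

    # Sub-task 2: texture generation via iterative token refinement.
    for step in range(max_steps):
        scores = token_evaluator(tokens, lq_image)   # (B, N) in [0, 1]
        bad = scores < keep_thresh                   # tokens to re-generate
        if not bad.any():                            # adaptive early exit:
            break                                    # all tokens judged good
        # The refiner re-predicts only the rejected tokens, conditioned on
        # the kept ones (a discrete-diffusion-style update).
        proposal = token_refiner(tokens, bad, lq_image)
        tokens = torch.where(bad, proposal, tokens)

    return vqgan_decoder(tokens)                     # decode tokens -> HQ image
```

Note how the evaluation network drives both decisions the abstract mentions: which tokens to keep, and when to stop refining.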
Related papers
- Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think [72.48325960659822]
One main bottleneck in training large-scale diffusion models for generation lies in effectively learning high-quality internal representations.
We study this by introducing a straightforward regularization called REPresentation Alignment (REPA), which aligns the projections of noisy input hidden states in denoising networks with clean image representations obtained from external, pretrained visual encoders.
The results are striking: our simple strategy yields significant improvements in both training efficiency and generation quality when applied to popular diffusion and flow-based transformers, such as DiTs and SiTs.
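
As a rough illustration of the alignment idea summarized above, the regularizer could look like the following sketch; the `projector` and `frozen_encoder` interfaces and the negative-cosine objective are assumptions based on the summary, not REPA's exact formulation.

```python
import torch
import torch.nn.functional as F

def repa_loss(hidden_states, clean_images, projector, frozen_encoder):
    """Hypothetical REPA-style regularizer: align projected denoiser
    hidden states with features of a frozen pretrained visual encoder."""
    with torch.no_grad():
        target = frozen_encoder(clean_images)   # (B, N, D) clean features
    pred = projector(hidden_states)             # (B, N, D) projected states
    # Negative cosine similarity, averaged over tokens; in practice this
    # would be added to the usual denoising loss with a weighting factor.
    return -F.cosine_similarity(pred, target, dim=-1).mean()
```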
arXiv Detail & Related papers (2024-10-09T14:34:53Z)
- One-Step Effective Diffusion Network for Real-World Image Super-Resolution [11.326598938246558]
We propose a one-step effective diffusion network, namely OSEDiff, for the Real-ISR problem.
We finetune the pre-trained diffusion network with trainable layers to adapt it to complex image degradations.
Our OSEDiff model can efficiently and effectively generate HQ images in just one diffusion step.
arXiv Detail & Related papers (2024-06-12T13:10:31Z)
- SinSR: Diffusion-Based Image Super-Resolution in a Single Step [119.18813219518042]
Super-resolution (SR) methods based on diffusion models exhibit promising results, but their practical application is hindered by the substantial number of required inference steps.
We propose a simple yet effective method for achieving single-step SR generation, named SinSR.
arXiv Detail & Related papers (2023-11-23T16:21:29Z)
- Deep Equilibrium Diffusion Restoration with Parallel Sampling [120.15039525209106]
Diffusion model-based image restoration (IR) aims to use diffusion models to recover high-quality (HQ) images from degraded images, achieving promising performance.
Most existing methods need long serial sampling chains to restore HQ images step-by-step, resulting in expensive sampling time and high computation costs.
In this work, we aim to rethink the diffusion model-based IR models through a different perspective, i.e., a deep equilibrium (DEQ) fixed point system, called DeqIR.
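
One way to read the fixed-point view above is that the whole sampling chain is treated as a system x* = f(x*, y) that can be solved directly rather than unrolled step by step. Below is a naive fixed-point iteration sketch, with `f`, its arguments, and the convergence test assumed for illustration; the actual DeqIR solver and its parallel sampling are more involved.

```python
import torch

@torch.no_grad()
def deq_restore(f, degraded, x0, max_iters=50, tol=1e-4):
    """Hypothetical fixed-point solve: iterate x <- f(x, degraded) until
    convergence instead of running a long serial sampling chain."""
    x = x0
    for _ in range(max_iters):
        x_next = f(x, degraded)
        if (x_next - x).abs().mean() < tol:   # reached the equilibrium point
            return x_next
        x = x_next
    return x
```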
arXiv Detail & Related papers (2023-11-20T08:27:56Z)
- Steered Diffusion: A Generalized Framework for Plug-and-Play Conditional Image Synthesis [62.07413805483241]
Steered Diffusion is a framework for zero-shot conditional image generation using a diffusion model trained for unconditional generation.
We present experiments using steered diffusion on several tasks including inpainting, colorization, text-guided semantic editing, and image super-resolution.
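
A rough sketch of what such steering could look like at a single reverse step: take the unconditional update, then nudge the sample along the gradient of a task-specific loss that encodes the condition. The function interfaces and the single gradient step are illustrative assumptions, not the paper's exact algorithm.

```python
import torch

def steered_step(unconditional_step, x_t, t, guidance_loss, scale=1.0):
    """Hypothetical steering update: unconditional reverse-diffusion step
    plus a gradient nudge from a task loss (e.g., masked reconstruction
    for inpainting, channel matching for colorization)."""
    x = x_t.detach().requires_grad_(True)
    loss = guidance_loss(x, t)                # conditioning signal as a loss
    grad = torch.autograd.grad(loss, x)[0]
    with torch.no_grad():
        x_prev = unconditional_step(x_t, t)   # standard unconditional update
    return x_prev - scale * grad              # steer toward the condition
```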
arXiv Detail & Related papers (2023-09-30T02:03:22Z)
- Frequency Compensated Diffusion Model for Real-scene Dehazing [6.105813272271171]
We consider a dehazing framework based on conditional diffusion models for improved generalization to real haze.
The proposed dehazing diffusion model significantly outperforms state-of-the-art methods on real-world images.
arXiv Detail & Related papers (2023-08-21T06:50:44Z)
- Learning A Coarse-to-Fine Diffusion Transformer for Image Restoration [39.071637725773314]
We propose a coarse-to-fine diffusion Transformer (C2F-DFT) for image restoration.
C2F-DFT contains a diffusion self-attention (DFSA) module and a diffusion feed-forward network (DFN).
In the coarse training stage, our C2F-DFT estimates noise and then generates the final clean image by a sampling algorithm.
arXiv Detail & Related papers (2023-08-17T01:59:59Z)
- Hierarchical Integration Diffusion Model for Realistic Image Deblurring [71.76410266003917]
Diffusion models (DMs) have been introduced in image deblurring and exhibited promising performance.
We propose the Hierarchical Integration Diffusion Model (HI-Diff), for realistic image deblurring.
Experiments on synthetic and real-world blur datasets demonstrate that our HI-Diff outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T12:18:20Z)
- Global Context with Discrete Diffusion in Vector Quantised Modelling for Image Generation [19.156223720614186]
Integrating the Vector Quantised Variational AutoEncoder (VQ-VAE) with autoregressive models as the generation component has yielded high-quality results in image generation.
We show that with the help of a content-rich discrete visual codebook from VQ-VAE, the discrete diffusion model can also generate high fidelity images with global context.
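
As a toy illustration of discrete diffusion over codebook indices, in the same spirit as ITER's token refinement above, here is an absorbing-state unmasking sketch; the `denoiser` interface, the extra mask index, and the commit schedule are assumptions for illustration, not the paper's method.

```python
import torch

@torch.no_grad()
def discrete_diffusion_sample(denoiser, vq_decoder, num_tokens,
                              codebook_size, steps=8, device="cpu"):
    """Hypothetical iterative unmasking over VQ codebook indices: start
    from all-[MASK] tokens and progressively commit confident predictions."""
    MASK = codebook_size                       # extra index for the mask state
    tokens = torch.full((1, num_tokens), MASK,
                        dtype=torch.long, device=device)

    for step in range(steps):
        still_masked = int((tokens == MASK).sum())
        if still_masked == 0:
            break
        logits = denoiser(tokens)              # (1, N, codebook_size) assumed
        conf, pred = logits.softmax(-1).max(-1)
        conf = conf.masked_fill(tokens != MASK, -1.0)  # keep committed tokens

        # Commit the most confident still-masked tokens this step.
        k = max(1, still_masked // (steps - step))
        topk = conf.topk(k, dim=-1).indices
        tokens.scatter_(1, topk, pred.gather(1, topk))

    return vq_decoder(tokens)                  # decode indices to an image
```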
arXiv Detail & Related papers (2021-12-03T09:09:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.