Revisiting RCAN: Improved Training for Image Super-Resolution
- URL: http://arxiv.org/abs/2201.11279v1
- Date: Thu, 27 Jan 2022 02:20:11 GMT
- Title: Revisiting RCAN: Improved Training for Image Super-Resolution
- Authors: Zudi Lin, Prateek Garg, Atmadeep Banerjee, Salma Abdel Magid, Deqing
Sun, Yulun Zhang, Luc Van Gool, Donglai Wei, Hanspeter Pfister
- Abstract summary: We revisit the popular RCAN model and examine the effect of different training options in SR.
We show that RCAN can outperform or match nearly all the CNN-based SR architectures published after RCAN on standard benchmarks.
- Score: 94.8765153437517
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image super-resolution (SR) is a fast-moving field with novel architectures
attracting the spotlight. However, most SR models were optimized with dated
training strategies. In this work, we revisit the popular RCAN model and
examine the effect of different training options in SR. Surprisingly (or
perhaps as expected), we show that RCAN can outperform or match nearly all the
CNN-based SR architectures published after RCAN on standard benchmarks with a
proper training strategy and minimal architecture change. Besides, although
RCAN is a very large SR architecture with more than four hundred convolutional
layers, we draw a notable conclusion that underfitting is still the main
problem restricting the model capability instead of overfitting. We observe
supportive evidence that increasing training iterations clearly improves the
model performance while applying regularization techniques generally degrades
the predictions. We denote our simply revised RCAN as RCAN-it and recommend
practitioners use it as a baseline for future research. Code is publicly
available at https://github.com/zudi-lin/rcan-it.
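The finding that longer training helps can be made concrete with a schedule sketch. Below is a minimal cosine-annealed learning-rate schedule with linear warmup; the base rate, warmup length, and iteration counts are illustrative assumptions, not the paper's exact values.

```python
import math

def cosine_lr(step, total_steps, base_lr=2e-4, warmup=2000):
    # Linear warmup, then cosine annealing down to zero.
    # Hypothetical hyperparameters -- not the paper's exact schedule.
    if step < warmup:
        return base_lr * step / warmup
    progress = (step - warmup) / (total_steps - warmup)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))

# "Increasing training iterations" here simply means a larger total_steps,
# which stretches the same decay curve over more updates.
short_run = [cosine_lr(s, 300_000) for s in range(0, 300_001, 50_000)]
long_run = [cosine_lr(s, 600_000) for s in range(0, 600_001, 100_000)]
```

Doubling `total_steps` keeps every relative point of the schedule identical while giving the model twice as many updates, which is the sense in which "train longer" requires no other change.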
Related papers
- Enhanced Super-Resolution Training via Mimicked Alignment for Real-World Scenes [51.92255321684027]
We propose a novel plug-and-play module designed to mitigate misalignment issues by aligning LR inputs with HR images during training.
Specifically, our approach involves mimicking a novel LR sample that aligns with HR while preserving the characteristics of the original LR samples.
We comprehensively evaluate our method on synthetic and real-world datasets, demonstrating its effectiveness across a spectrum of SR models.
arXiv Detail & Related papers (2024-10-07T18:18:54Z)
- Robust Capped lp-Norm Support Vector Ordinal Regression [85.84718111830752]
Ordinal regression is a specialized supervised problem where the labels show an inherent order.
Support Vector Ordinal Regression, as an outstanding ordinal regression model, is widely used in many ordinal regression tasks.
We introduce a new model, Capped $\ell_p$-Norm Support Vector Ordinal Regression (CSVOR), that is robust to outliers.
arXiv Detail & Related papers (2024-04-25T13:56:05Z)
- Low-Res Leads the Way: Improving Generalization for Super-Resolution by Self-Supervised Learning [45.13580581290495]
This work introduces a novel "Low-Res Leads the Way" (LWay) training framework to enhance the adaptability of SR models to real-world images.
Our approach utilizes a low-resolution (LR) reconstruction network to extract degradation embeddings from LR images, merging them with super-resolved outputs for LR reconstruction.
Our training regime is universally compatible, requiring no network architecture modifications, making it a practical solution for real-world SR applications.
arXiv Detail & Related papers (2024-03-05T02:29:18Z)
- Efficient Test-Time Adaptation for Super-Resolution with Second-Order Degradation and Reconstruction [62.955327005837475]
Image super-resolution (SR) aims to learn a mapping from low-resolution (LR) to high-resolution (HR) using paired HR-LR training images.
We present an efficient test-time adaptation framework for SR, named SRTTA, which is able to quickly adapt SR models to test domains with different/unknown degradation types.
arXiv Detail & Related papers (2023-10-29T13:58:57Z)
- In defense of parameter sharing for model-compression [38.80110838121722]
Randomized parameter-sharing (RPS) methods have gained traction for model compression at the start of training.
RPS consistently outperforms or matches smaller models and all moderately informed pruning strategies.
This paper argues in favor of a paradigm shift towards RPS-based models.
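As a rough illustration, randomized parameter sharing can be thought of as mapping each virtual weight position into a much smaller shared parameter bank through a fixed hash, in the spirit of hashing-based weight sharing; the function below is a hypothetical sketch, not the paper's method.

```python
import zlib

def shared_weight(layer_id, index, bank):
    # Deterministically map a (layer, position) pair into a small bank of
    # trainable parameters; many virtual weights alias the same slot, so
    # the model behaves like a large network with few stored values.
    key = f"{layer_id}:{index}".encode()
    return bank[zlib.crc32(key) % len(bank)]

bank = [0.1, -0.3, 0.7, 0.05]  # 4 trainable values...
virtual = [shared_weight("conv1", i, bank) for i in range(64)]  # ...serving 64 weights
```

Because the mapping is a pure function of the layer and index, no index table needs to be stored: the compressed model is just the bank plus the hash.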
arXiv Detail & Related papers (2023-10-17T22:08:01Z)
- Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and tweaks unimportant weights on-the-fly by a small amount proportional to the magnitude scale.
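The soft-shrinkage idea can be sketched on a flat weight list: instead of hard-zeroing the smallest fraction of weights, they are scaled down slightly each step. This is a simplified illustration; the paper's ISS-P operates on network layers during training, and the percentile and shrink factor here are assumed values.

```python
def soft_shrink_step(weights, prune_pct=0.4, shrink=0.01):
    # Find the magnitude threshold below which weights count as "unimportant",
    # then shrink those weights multiplicatively rather than zeroing them,
    # so mistakenly shrunk weights can still recover in later iterations.
    ranked = sorted(abs(w) for w in weights)
    threshold = ranked[int(len(ranked) * prune_pct)]
    return [w * (1.0 - shrink) if abs(w) < threshold else w for w in weights]

step1 = soft_shrink_step([0.1, -0.05, 1.0, 2.0, -3.0])
```

Repeating the step drives small weights toward zero gradually, which is the "iterative" part of the method.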
arXiv Detail & Related papers (2023-03-16T21:06:13Z)
- A High-Performance Accelerator for Super-Resolution Processing on Embedded GPU [24.084304913250826]
We implement a full-stack SR acceleration framework on embedded devices.
The communication and computation bottlenecks in deep dictionary-learning-based SR models are effectively addressed.
arXiv Detail & Related papers (2023-03-16T00:09:09Z)
- RepSR: Training Efficient VGG-style Super-Resolution Networks with Structural Re-Parameterization and Batch Normalization [30.927648867624498]
This paper explores training efficient VGG-style super-resolution (SR) networks with the structural re-parameterization technique.
Batch normalization (BN) is important for bringing non-linearity into training and improving the final performance.
In particular, we first train SR networks with mini-batch statistics as usual, and then switch to using population statistics in the later training period.
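The mini-batch-to-population switch can be sketched with a toy 1-D batch-norm layer. This is an illustrative approximation (in frameworks like PyTorch the switch roughly corresponds to freezing and reusing the running statistics), not RepSR's actual code; the momentum value is an assumption.

```python
import math

class ToyBatchNorm:
    """Minimal 1-D batch norm illustrating the mini-batch -> population
    statistics switch; a sketch, not the paper's implementation."""

    def __init__(self, momentum=0.1, eps=1e-5):
        self.running_mean, self.running_var = 0.0, 1.0
        self.momentum, self.eps = momentum, eps
        self.use_population_stats = False  # flip to True late in training

    def __call__(self, batch):
        if self.use_population_stats:
            mean, var = self.running_mean, self.running_var
        else:
            mean = sum(batch) / len(batch)
            var = sum((x - mean) ** 2 for x in batch) / len(batch)
            # Update the running (population) estimates as usual.
            self.running_mean += self.momentum * (mean - self.running_mean)
            self.running_var += self.momentum * (var - self.running_var)
        return [(x - mean) / math.sqrt(var + self.eps) for x in batch]

bn = ToyBatchNorm()
early = bn([1.0, 3.0])          # normalized with mini-batch statistics
bn.use_population_stats = True
late = bn([1.0, 3.0])           # normalized with accumulated statistics
```

Using population statistics late in training removes the dependence on the current mini-batch, which makes the BN layer behave like a fixed affine map that can later be folded into adjacent convolutions.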
arXiv Detail & Related papers (2022-05-11T17:55:49Z)
- LAPAR: Linearly-Assembled Pixel-Adaptive Regression Network for Single Image Super-Resolution and Beyond [75.37541439447314]
Single image super-resolution (SISR) deals with a fundamental problem of upsampling a low-resolution (LR) image to its high-resolution (HR) version.
This paper proposes a linearly-assembled pixel-adaptive regression network (LAPAR) to strike a sweet spot of deep model complexity and resulting SISR quality.
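The "linearly-assembled, pixel-adaptive" part can be illustrated schematically: a set of fixed filter bases is applied to the input, a lightweight network predicts per-pixel mixing coefficients, and the output is their pixel-wise weighted sum. The sketch below assumes the basis responses and coefficients are already computed; shapes and values are made up for illustration.

```python
def assemble(basis_responses, coeffs):
    # basis_responses[k][i]: response of filter basis k at pixel i
    # coeffs[k][i]: predicted mixing weight for basis k at pixel i
    # Output pixel i is the coefficient-weighted sum over all bases.
    n_pix = len(basis_responses[0])
    return [sum(c[i] * b[i] for c, b in zip(coeffs, basis_responses))
            for i in range(n_pix)]

# Two bases over three pixels, with pixel-adaptive weights:
bases = [[1.0, 2.0, 3.0], [10.0, 20.0, 30.0]]
weights = [[1.0, 0.5, 0.0], [0.0, 0.5, 1.0]]
sr_pixels = assemble(bases, weights)  # [1.0, 11.0, 30.0]
```

Because only the small coefficient network is learned while the filter dictionary stays fixed, the regression stays cheap relative to a fully learned deep mapping.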
arXiv Detail & Related papers (2021-05-21T15:47:18Z)
- Neural Network-based Reconstruction in Compressed Sensing MRI Without Fully-sampled Training Data [17.415937218905125]
CS-MRI has shown promise in reconstructing under-sampled MR images.
Deep learning models have been developed that model the iterative nature of classical techniques by unrolling iterations in a neural network.
In this paper, we explore a novel strategy to train an unrolled reconstruction network in an unsupervised fashion by adopting a loss function widely-used in classical optimization schemes.
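A loss "widely used in classical optimization schemes" typically means a data-fidelity term on the raw measurements plus a regularizer, rather than a supervised loss against a fully-sampled image. The sketch below uses a generic $\|Ax - y\|^2$ fidelity with a simple smoothness penalty on a 1-D signal; the forward operator and regularizer choice are illustrative assumptions, not the paper's exact loss.

```python
def unsupervised_loss(recon, measurements, forward_op, reg_weight=0.01):
    # Data fidelity: how well the reconstruction explains the actual
    # measurements after re-applying the acquisition operator A.
    pred = forward_op(recon)
    fidelity = sum((p - y) ** 2 for p, y in zip(pred, measurements))
    # Simple first-difference smoothness regularizer (stand-in for the
    # priors used in classical schemes, e.g. total variation).
    smooth = sum((recon[i + 1] - recon[i]) ** 2 for i in range(len(recon) - 1))
    return fidelity + reg_weight * smooth

identity = lambda x: x  # toy forward operator for illustration
zero_loss = unsupervised_loss([1.0, 1.0], [1.0, 1.0], identity)
```

No ground-truth image appears anywhere in the loss, which is what lets the unrolled network be trained from under-sampled measurements alone.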
arXiv Detail & Related papers (2020-07-29T17:46:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.