RepSR: Training Efficient VGG-style Super-Resolution Networks with
Structural Re-Parameterization and Batch Normalization
- URL: http://arxiv.org/abs/2205.05671v1
- Date: Wed, 11 May 2022 17:55:49 GMT
- Title: RepSR: Training Efficient VGG-style Super-Resolution Networks with
Structural Re-Parameterization and Batch Normalization
- Authors: Xintao Wang, Chao Dong, Ying Shan
- Abstract summary: This paper explores training efficient VGG-style super-resolution (SR) networks with the structural re-parameterization technique.
Batch normalization (BN) is important for introducing training non-linearity and improving the final performance.
In particular, we first train SR networks with mini-batch statistics as usual, and then switch to using population statistics at the later training period.
- Score: 30.927648867624498
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper explores training efficient VGG-style super-resolution (SR)
networks with the structural re-parameterization technique. The general
pipeline of re-parameterization is to train networks with multi-branch topology
first, and then merge them into standard 3x3 convolutions for efficient
inference. In this work, we revisit those primary designs and investigate
essential components for re-parameterizing SR networks. First of all, we find
that batch normalization (BN) is important for introducing training non-linearity and
improving the final performance. However, BN is typically ignored in SR, as it
usually degrades the performance and introduces unpleasant artifacts. We
carefully analyze the cause of the BN issue and then propose a straightforward yet
effective solution. In particular, we first train SR networks with mini-batch
statistics as usual, and then switch to using population statistics at the
later training period. While we have successfully re-introduced BN into SR, we
further design a new re-parameterizable block tailored for SR, namely RepSR. It
consists of a clean residual path and two expand-and-squeeze convolution paths
with the modified BN. Extensive experiments demonstrate that our simple RepSR
is capable of achieving superior performance to previous SR re-parameterization
methods across different model sizes. In addition, our RepSR can achieve a
better trade-off between performance and actual running time (throughput) than
previous SR methods. Codes will be available at
https://github.com/TencentARC/RepSR.
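As a concrete illustration of the merging pipeline and the late-training BN switch described above, here is a minimal PyTorch sketch. It is not the official implementation from https://github.com/TencentARC/RepSR: for simplicity each branch is modeled as a plain 3x3 convolution followed by BN, whereas the actual RepSR block uses two expand-and-squeeze convolution paths with the modified BN, and the helper names (fuse_conv_bn, merge_branches, switch_bn_to_population_stats) are illustrative only.

```python
# Minimal sketch of the re-parameterization arithmetic and the BN trick
# described in the abstract -- NOT the official RepSR code.
import torch
import torch.nn as nn


def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d):
    """Fold BN's population statistics into the preceding convolution."""
    std = torch.sqrt(bn.running_var + bn.eps)
    scale = bn.weight / std                            # per-output-channel scale
    weight = conv.weight * scale.reshape(-1, 1, 1, 1)  # rescaled 3x3 kernel
    bias = conv.bias if conv.bias is not None else torch.zeros_like(bn.running_mean)
    bias = (bias - bn.running_mean) * scale + bn.bias
    return weight, bias


@torch.no_grad()
def merge_branches(branches, channels):
    """Sum an identity (residual) path and parallel conv+BN branches into one 3x3 conv.

    `branches` is a list of (nn.Conv2d, nn.BatchNorm2d) pairs with matching shapes.
    """
    # Identity path written as a 3x3 kernel with a 1 at the centre.
    weight = torch.zeros(channels, channels, 3, 3)
    for c in range(channels):
        weight[c, c, 1, 1] = 1.0
    bias = torch.zeros(channels)
    for conv, bn in branches:
        w, b = fuse_conv_bn(conv, bn)
        weight += w
        bias += b
    merged = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
    merged.weight.copy_(weight)
    merged.bias.copy_(bias)
    return merged


def switch_bn_to_population_stats(model: nn.Module):
    """Late-training step from the abstract: keep optimizing the network, but make
    every BN layer normalize with its running (population) statistics rather than
    the current mini-batch statistics."""
    for module in model.modules():
        if isinstance(module, nn.BatchNorm2d):
            module.eval()  # eval-mode BN uses running_mean / running_var
```

After merging, the single 3x3 convolution reproduces the multi-branch output up to floating-point error, which is what allows the trained network to be deployed as a plain VGG-style stack of 3x3 convolutions.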
Related papers
- Learning effective pruning at initialization from iterative pruning [15.842658282636876]
We present an end-to-end neural network-based PaI method to reduce training costs.
Our approach outperforms existing methods in high-sparsity settings.
As the first neural network-based PaI method, we conduct extensive experiments to validate the factors influencing this approach.
arXiv Detail & Related papers (2024-08-27T03:17:52Z)
- SHERL: Synthesizing High Accuracy and Efficient Memory for Resource-Limited Transfer Learning [63.93193829913252]
We propose an innovative METL strategy called SHERL for resource-limited scenarios.
In the early route, intermediate outputs are consolidated via an anti-redundancy operation.
In the late route, using a minimal number of late pre-trained layers can alleviate the peak memory overhead.
arXiv Detail & Related papers (2024-07-10T10:22:35Z)
- UniPTS: A Unified Framework for Proficient Post-Training Sparsity [67.16547529992928]
Post-training Sparsity (PTS) is a recently emerged avenue that pursues efficient network sparsity with only limited data.
In this paper, we attempt to reconcile this disparity by transposing three cardinal factors that profoundly alter the performance of conventional sparsity into the context of PTS.
Our framework, termed UniPTS, is validated to be much superior to existing PTS methods across extensive benchmarks.
arXiv Detail & Related papers (2024-05-29T06:53:18Z)
- Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and tweaks unimportant weights on-the-fly by a small amount proportional to their magnitude (see the sketch after this entry).
arXiv Detail & Related papers (2023-03-16T21:06:13Z)
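As a rough, hedged sketch of the shrinkage idea mentioned in the ISS-P summary above (the actual percentage schedule and proportionality are defined in that paper, not here; the function name and shrink factor below are hypothetical):

```python
import torch


def soft_shrink(weight: torch.Tensor, sparsity: float, factor: float = 0.1) -> torch.Tensor:
    """Scale down (rather than hard-prune) the `sparsity` fraction of
    smallest-magnitude weights; `factor` is a hypothetical shrink scale."""
    magnitudes = weight.abs().flatten()
    k = int(sparsity * magnitudes.numel())
    if k == 0:
        return weight
    threshold = magnitudes.kthvalue(k).values  # k-th smallest magnitude
    return torch.where(weight.abs() <= threshold, weight * factor, weight)
```

Unlike hard pruning, the shrunken weights retain a small magnitude, so (under this reading) the sparse structure can still change in later iterations.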
- Learning Detail-Structure Alternative Optimization for Blind Super-Resolution [69.11604249813304]
We propose an effective and kernel-free network, namely DSSR, which enables recurrent detail-structure alternative optimization for blind SR without incorporating a blur-kernel prior.
In our DSSR, a detail-structure modulation module (DSMM) is built to exploit the interaction and collaboration of image details and structures.
Our method achieves state-of-the-art performance compared with existing methods.
arXiv Detail & Related papers (2022-12-03T14:44:17Z)
- Trainability Preserving Neural Structured Pruning [64.65659982877891]
We present trainability preserving pruning (TPP), a regularization-based structured pruning method that can effectively maintain trainability during sparsification.
TPP can compete with the ground-truth dynamical isometry recovery method on linear networks.
It delivers encouraging performance in comparison to many top-performing filter pruning methods.
arXiv Detail & Related papers (2022-07-25T21:15:47Z)
- Residual Local Feature Network for Efficient Super-Resolution [20.62809970985125]
In this work, we propose a novel Residual Local Feature Network (RLFN).
The main idea is to use three convolutional layers for residual local feature learning to simplify feature aggregation.
In addition, we won the first place in the runtime track of the NTIRE 2022 efficient super-resolution challenge.
arXiv Detail & Related papers (2022-05-16T08:46:34Z)
- Boosting Pruned Networks with Linear Over-parameterization [8.796518772724955]
Structured pruning compresses neural networks by reducing channels (filters) for fast inference and low footprint at run-time.
To restore accuracy after pruning, fine-tuning is usually applied to pruned networks.
We propose a novel method that first linearly over-parameterizes the compact layers in pruned networks to enlarge the number of fine-tuning parameters (a rough sketch of the contraction step follows this entry).
arXiv Detail & Related papers (2022-04-25T05:30:26Z)
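To illustrate why a purely linear over-parameterization can be folded away after fine-tuning, here is a hedged sketch (hypothetical helper, not the paper's code) that collapses a 3x3 convolution followed by a 1x1 convolution back into a single 3x3 convolution:

```python
import torch
import torch.nn as nn


def contract_3x3_then_1x1(conv3: nn.Conv2d, conv1: nn.Conv2d) -> nn.Conv2d:
    """Compose a 1x1 conv applied after a 3x3 conv into one 3x3 conv.

    Assumes both convs have biases, groups=1, and the 1x1 conv uses
    stride 1 with no padding.
    """
    w3, b3 = conv3.weight, conv3.bias              # (mid, in, 3, 3), (mid,)
    w1, b1 = conv1.weight[:, :, 0, 0], conv1.bias  # (out, mid), (out,)
    weight = torch.einsum('om,mikl->oikl', w1, w3)  # (out, in, 3, 3)
    bias = w1 @ b3 + b1
    merged = nn.Conv2d(conv3.in_channels, conv1.out_channels, 3, padding=conv3.padding)
    with torch.no_grad():
        merged.weight.copy_(weight)
        merged.bias.copy_(bias)
    return merged
```

Because the expansion is purely linear, the contracted convolution computes exactly the same function as the expanded pair, so the extra fine-tuning parameters cost nothing at inference time.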
- Using UNet and PSPNet to explore the reusability principle of CNN parameters [5.623232537411766]
Reusability of parameters in each layer of a deep convolutional neural network is experimentally quantified.
The running mean and running variance play a more important role than the weight and bias in the BN layer.
The biases in convolution layers are not sensitive and can be reused directly.
arXiv Detail & Related papers (2020-08-08T01:51:08Z)
- Iterative Network for Image Super-Resolution [69.07361550998318]
Single image super-resolution (SISR) has been greatly revitalized by the recent development of convolutional neural networks (CNNs).
This paper provides a new insight into the conventional SISR algorithm and proposes a substantially different approach relying on iterative optimization.
A novel iterative super-resolution network (ISRN) is proposed on top of the iterative optimization.
arXiv Detail & Related papers (2020-05-20T11:11:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.