Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution
- URL: http://arxiv.org/abs/2303.09650v2
- Date: Fri, 18 Aug 2023 01:52:34 GMT
- Title: Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution
- Authors: Jiamian Wang, Huan Wang, Yulun Zhang, Yun Fu, Zhiqiang Tao
- Abstract summary: Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and tweaks unimportant weights on-the-fly by a small amount proportional to their magnitude.
- Score: 91.3781512926942
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image super-resolution (SR) has witnessed extensive neural network designs
from CNN to transformer architectures. However, prevailing SR models suffer
from prohibitive memory footprint and intensive computations, which limits
further deployment on edge devices. This work investigates the potential of
network pruning for super-resolution to take advantage of off-the-shelf network
designs and reduce the underlying computational overhead. Two main challenges
remain in applying pruning methods for SR. First, the widely-used filter
pruning technique reflects limited granularity and restricted adaptability to
diverse network structures. Second, existing pruning methods generally operate
upon a pre-trained network for the sparse structure determination, making it hard
to get rid of dense model training in the traditional SR paradigm. To address these
challenges, we adopt unstructured pruning with sparse models directly trained
from scratch. Specifically, we propose a novel Iterative Soft
Shrinkage-Percentage (ISS-P) method by optimizing the sparse structure of a
randomly initialized network at each iteration and tweaking unimportant weights
with a small amount proportional to the magnitude scale on-the-fly. We observe
that the proposed ISS-P can dynamically learn sparse structures adapting to the
optimization process and preserve the sparse model's trainability by yielding a
more regularized gradient throughput. Experiments on benchmark datasets
demonstrate the effectiveness of the proposed ISS-P over diverse network
architectures. Code is available at
https://github.com/Jiamian-Wang/Iterative-Soft-Shrinkage-SR
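The core mechanism of ISS-P can be summarized as a small modification to the training loop: at every iteration, the lowest-magnitude percentage of weights is not zeroed out but shrunk by an amount proportional to its own magnitude, so the sparse structure can keep adapting to the optimization. The PyTorch sketch below is our own minimal illustration of that idea (single layer, plain SGD, and a hypothetical `shrink_rate` hyperparameter); the authors' actual implementation lives in the repository linked above.
```python
import torch
import torch.nn as nn

def iss_p_step(layer, optimizer, loss_fn, x, y,
               sparsity=0.9, shrink_rate=0.02):
    """One training iteration with iterative soft shrinkage.

    Instead of hard-zeroing the bottom `sparsity` fraction of weights
    after the gradient step, each unimportant weight is multiplied by
    (1 - shrink_rate), i.e. shrunk by an amount proportional to its own
    magnitude, so it can still recover later in training.
    """
    optimizer.zero_grad()
    loss = loss_fn(layer(x), y)
    loss.backward()
    optimizer.step()

    with torch.no_grad():
        w = layer.weight
        # Magnitude threshold below which weights count as unimportant.
        threshold = torch.quantile(w.abs().flatten(), sparsity)
        unimportant = w.abs() <= threshold
        # Soft shrinkage: scale the unimportant weights, don't zero them.
        w[unimportant] *= 1.0 - shrink_rate
    return loss.item()

# Usage: a toy layer trained from scratch (random initialization).
layer = nn.Conv2d(3, 3, kernel_size=3, padding=1)
opt = torch.optim.SGD(layer.parameters(), lr=1e-3)
x, y = torch.randn(4, 3, 32, 32), torch.randn(4, 3, 32, 32)
for _ in range(10):
    iss_p_step(layer, opt, nn.MSELoss(), x, y)
```
To obtain the final sparse model, the bottom-percentage weights would be hard-zeroed after training; the precise shrinkage schedule and sparsity targets used in the paper may differ from this sketch.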
Related papers
- Concurrent Training and Layer Pruning of Deep Neural Networks [0.0]
We propose an algorithm capable of identifying and eliminating irrelevant layers of a neural network during the early stages of training.
We employ a structure using residual connections around nonlinear network sections, allowing information to keep flowing through the network once a nonlinear section is pruned (see the sketch after this entry).
arXiv Detail & Related papers (2024-06-06T23:19:57Z)
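The residual-bypass idea above can be made concrete with a toy module (our own illustration, not the authors' code): wrapping each prunable nonlinear section in a skip connection means that removing the section simply leaves the identity path, so the rest of the network still receives the signal.
```python
import torch
import torch.nn as nn

class PrunableResidualSection(nn.Module):
    """A nonlinear section wrapped in a residual connection.

    If the section is pruned during training, the block reduces to the
    identity, so information still flows through the network.
    """
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.pruned = False  # flipped once the section is deemed irrelevant

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.pruned:
            return x          # skip the nonlinear section entirely
        return x + self.body(x)
```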
- RDRN: Recursively Defined Residual Network for Image Super-Resolution [58.64907136562178]
Deep convolutional neural networks (CNNs) have obtained remarkable performance in single image super-resolution.
We propose a novel network architecture which utilizes attention blocks efficiently.
arXiv Detail & Related papers (2022-11-17T11:06:29Z)
- Pushing the Efficiency Limit Using Structured Sparse Convolutions [82.31130122200578]
We propose Structured Sparse Convolution (SSC), which leverages the inherent structure in images to reduce the parameters in the convolutional filter.
We show that SSC is a generalization of commonly used layers (depthwise, groupwise, and pointwise convolution) in efficient architectures (see the sketch after this entry).
Architectures based on SSC achieve state-of-the-art performance compared to baselines on CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet classification benchmarks.
arXiv Detail & Related papers (2022-10-23T18:37:22Z)
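The generalization claim in the SSC entry above can be illustrated with a toy masked convolution (our own sketch, not the SSC layer itself): a dense kernel multiplied by a fixed structured binary mask reduces to depthwise convolution for a channel-identity pattern and to pointwise (1x1) convolution for a center-tap-only pattern; a block-diagonal channel pattern would likewise give group convolution.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv2d(nn.Module):
    """Dense convolution with a fixed structured binary mask."""
    def __init__(self, channels, k=3, pattern="depthwise"):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(channels, channels, k, k) * 0.01)
        mask = torch.zeros(channels, channels, k, k)
        if pattern == "depthwise":
            for c in range(channels):
                mask[c, c] = 1.0  # each output channel sees one input channel
        elif pattern == "pointwise":
            mask[:, :, k // 2, k // 2] = 1.0  # central spatial tap only
        self.register_buffer("mask", mask)
        self.padding = k // 2

    def forward(self, x):
        return F.conv2d(x, self.weight * self.mask, padding=self.padding)

# Usage: both patterns preserve the input's spatial size.
x = torch.randn(1, 8, 16, 16)
print(MaskedConv2d(8, pattern="depthwise")(x).shape)  # torch.Size([1, 8, 16, 16])
print(MaskedConv2d(8, pattern="pointwise")(x).shape)  # torch.Size([1, 8, 16, 16])
```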
- Effective Invertible Arbitrary Image Rescaling [77.46732646918936]
Invertible Neural Networks (INNs) are able to increase upscaling accuracy significantly by optimizing the downscaling and upscaling cycle jointly.
In this work, a simple and effective invertible arbitrary rescaling network (IARN) is proposed to achieve arbitrary image rescaling by training only one model.
It is shown to achieve state-of-the-art (SOTA) performance in bidirectional arbitrary rescaling without compromising perceptual quality in LR outputs.
arXiv Detail & Related papers (2022-09-26T22:22:30Z)
- STN: Scalable Tensorizing Networks via Structure-Aware Training and Adaptive Compression [10.067082377396586]
We propose Scalable Tensorizing Networks (STN), which adaptively adjust the model size and decomposition structure without retraining.
STN is compatible with arbitrary network architectures and achieves higher compression performance and greater flexibility than other tensorizing methods.
arXiv Detail & Related papers (2022-05-30T15:50:48Z)
- DeepRLS: A Recurrent Network Architecture with Least Squares Implicit Layers for Non-blind Image Deconvolution [15.986942312624]
We study the problem of non-blind image deconvolution.
We propose a novel recurrent network architecture that leads to very competitive restoration results of high image quality.
arXiv Detail & Related papers (2021-12-10T13:16:51Z)
- OverNet: Lightweight Multi-Scale Super-Resolution with Overscaling Network [3.6683231417848283]
We introduce OverNet, a deep but lightweight convolutional network to solve SISR at arbitrary scale factors with a single model.
We show that our network outperforms previous state-of-the-art results in standard benchmarks while using fewer parameters than previous approaches.
arXiv Detail & Related papers (2020-08-05T22:10:29Z)
- Iterative Network for Image Super-Resolution [69.07361550998318]
Single image super-resolution (SISR) has been greatly revitalized by the recent development of convolutional neural networks (CNNs).
This paper provides a new insight into conventional SISR algorithms and proposes a substantially different approach relying on iterative optimization.
A novel iterative super-resolution network (ISRN) is proposed on top of the iterative optimization.
arXiv Detail & Related papers (2020-05-20T11:11:47Z)
- Deep Adaptive Inference Networks for Single Image Super-Resolution [72.7304455761067]
Single image super-resolution (SISR) has witnessed tremendous progress in recent years owing to the deployment of deep convolutional neural networks (CNNs).
In this paper, we take a step forward by leveraging adaptive inference networks for deep SISR (AdaDSR).
Our AdaDSR involves an SISR model as the backbone and a lightweight adapter module that takes image features and a resource constraint as input and predicts a map of local network depth (see the sketch after this entry).
arXiv Detail & Related papers (2020-04-08T10:08:20Z)
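The adapter described in the AdaDSR entry above can be sketched as follows (a purely illustrative toy with hypothetical layer sizes; the real module and its training are described in the paper): a small convolutional head consumes backbone features concatenated with a broadcast resource budget and predicts a continuous per-pixel depth map.
```python
import torch
import torch.nn as nn

class DepthAdapter(nn.Module):
    """Toy adapter in the spirit of AdaDSR: predicts a per-pixel map of
    how many backbone blocks to evaluate, conditioned on image features
    and a scalar resource constraint."""
    def __init__(self, feat_channels: int, max_depth: int):
        super().__init__()
        self.max_depth = max_depth
        self.head = nn.Sequential(
            nn.Conv2d(feat_channels + 1, 32, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1),
            nn.Sigmoid(),  # in [0, 1], scaled to [0, max_depth] below
        )

    def forward(self, feats: torch.Tensor, budget: float) -> torch.Tensor:
        b, _, h, w = feats.shape
        # Broadcast the scalar resource budget to a spatial map.
        budget_map = torch.full((b, 1, h, w), budget, device=feats.device)
        depth = self.head(torch.cat([feats, budget_map], dim=1))
        return depth * self.max_depth  # continuous local depth map

# Usage: a tighter budget should learn to yield shallower depth maps.
adapter = DepthAdapter(feat_channels=64, max_depth=16)
feats = torch.randn(2, 64, 24, 24)
print(adapter(feats, budget=0.5).shape)  # torch.Size([2, 1, 24, 24])
```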