Lightweight Image Super-Resolution with Hierarchical and Differentiable
Neural Architecture Search
- URL: http://arxiv.org/abs/2105.03939v1
- Date: Sun, 9 May 2021 13:30:16 GMT
- Title: Lightweight Image Super-Resolution with Hierarchical and Differentiable
Neural Architecture Search
- Authors: Han Huang, Li Shen, Chaoyang He, Weisheng Dong, Haozhi Huang,
Guangming Shi
- Abstract summary: Single Image Super-Resolution (SISR) tasks have achieved significant performance with deep neural networks.
We propose a novel differentiable Neural Architecture Search (NAS) approach on both the cell-level and network-level to search for lightweight SISR models.
- Score: 38.83764580480486
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Single Image Super-Resolution (SISR) tasks have achieved significant
performance with deep neural networks. However, the large number of parameters
in CNN-based methods for SISR tasks requires heavy computation. Although
several efficient SISR models have been recently proposed, most are handcrafted
and thus lack flexibility. In this work, we propose a novel differentiable
Neural Architecture Search (NAS) approach on both the cell-level and
network-level to search for lightweight SISR models. Specifically, the
cell-level search space is designed based on an information distillation
mechanism, focusing on the combinations of lightweight operations and aiming to
build a more lightweight and accurate SR structure. The network-level search
space is designed to consider the feature connections among the cells and aims
to find which information flow benefits each cell most, thereby boosting performance.
Unlike the existing Reinforcement Learning (RL) or Evolutionary Algorithm (EA)
based NAS methods for SISR tasks, our search pipeline is fully differentiable,
and the lightweight SISR models can be efficiently searched on both the
cell-level and network-level jointly on a single GPU. Experiments show that our
methods can achieve state-of-the-art performance on the benchmark datasets in
terms of PSNR, SSIM, and model complexity with merely 68G Multi-Adds for
$\times 2$ and 18G Multi-Adds for $\times 4$ SR tasks. Code will be available
at \url{https://github.com/DawnHH/DLSR-PyTorch}.
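
As context for how a fully differentiable search over lightweight operations can work, below is a minimal PyTorch sketch of a DARTS-style mixed operation: every candidate op's output is weighted by a softmax over learnable architecture parameters, so network weights and architecture parameters can be optimized jointly by gradient descent on a single GPU. The candidate operations and names here are illustrative assumptions, not the paper's actual cell-level search space.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative candidate ops (assumed, not the paper's exact search space):
# lightweight convolutions plus a parameter-free skip connection.
def make_candidates(channels):
    return nn.ModuleList([
        nn.Conv2d(channels, channels, 3, padding=1),                   # 3x3 conv
        nn.Conv2d(channels, channels, 3, padding=1, groups=channels),  # depthwise 3x3
        nn.Conv2d(channels, channels, 1),                              # 1x1 conv
        nn.Identity(),                                                 # skip
    ])

class MixedOp(nn.Module):
    """DARTS-style mixed operation: softmax-weighted sum of candidate ops."""
    def __init__(self, channels):
        super().__init__()
        self.ops = make_candidates(channels)
        # One architecture parameter per candidate op, learned jointly
        # with the regular weights via gradient descent.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# After search, the op with the largest alpha would be kept and the rest pruned,
# yielding a discrete, lightweight cell.
if __name__ == "__main__":
    cell = MixedOp(channels=32)
    out = cell(torch.randn(1, 32, 48, 48))
    print(out.shape)  # torch.Size([1, 32, 48, 48])
```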
Related papers
- Multimodal Learned Sparse Retrieval with Probabilistic Expansion Control [66.78146440275093]
Learned sparse retrieval (LSR) is a family of neural methods that encode queries and documents into sparse lexical vectors.
We explore the application of LSR to the multi-modal domain, with a focus on text-image retrieval.
Current approaches like LexLIP and STAIR require complex multi-step training on massive datasets.
Our proposed approach efficiently transforms dense vectors from a frozen dense model into sparse lexical vectors.
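
For intuition, here is a hedged sketch of the dense-to-sparse idea behind learned sparse retrieval: a vector from a frozen dense encoder is projected onto a vocabulary-sized space and sparsified, so each query or document becomes a sparse lexical vector. The projection layer, activation, and top-k pruning are assumptions for illustration, not the specific method of this paper.

```python
import torch
import torch.nn as nn

class DenseToSparseLexical(nn.Module):
    """Hypothetical projection of a frozen dense vector into a sparse lexical vector."""
    def __init__(self, dense_dim=512, vocab_size=30522, top_k=64):
        super().__init__()
        self.proj = nn.Linear(dense_dim, vocab_size)  # learned projection to vocab space
        self.top_k = top_k

    def forward(self, dense_vec):
        # Non-negative term weights; log1p dampens large activations (a SPLADE-style choice).
        weights = torch.log1p(torch.relu(self.proj(dense_vec)))
        # Keep only the top-k terms to enforce sparsity (illustrative pruning rule).
        values, indices = weights.topk(self.top_k, dim=-1)
        sparse = torch.zeros_like(weights).scatter_(-1, indices, values)
        return sparse  # mostly zeros: usable with an inverted index

if __name__ == "__main__":
    frozen_dense = torch.randn(2, 512)  # e.g., output of a frozen text or image encoder
    sparse = DenseToSparseLexical()(frozen_dense)
    print((sparse > 0).sum(dim=-1))  # at most top_k nonzero terms per vector
```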
arXiv Detail & Related papers (2024-02-27T14:21:56Z) - Lightweight Stepless Super-Resolution of Remote Sensing Images via Saliency-Aware Dynamic Routing Strategy [15.587621728422414]
Deep learning algorithms have greatly improved the performance of remote sensing image (RSI) super-resolution (SR).
However, increasing network depth and parameters cause a huge burden of computing and storage.
We propose a saliency-aware dynamic routing network (SalDRN) for lightweight and stepless SR of RSIs.
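
The summary suggests routing computation by saliency; the sketch below shows one way such a router could look, sending high-saliency locations through a heavier branch and the rest through a cheap one. The saliency estimator, threshold, and branches are illustrative assumptions rather than SalDRN's actual design.

```python
import torch
import torch.nn as nn

class SaliencyRouter(nn.Module):
    """Toy saliency-aware routing: heavy branch only where saliency is high."""
    def __init__(self, channels=32, threshold=0.5):
        super().__init__()
        self.saliency = nn.Sequential(nn.Conv2d(channels, 1, 3, padding=1), nn.Sigmoid())
        self.light = nn.Conv2d(channels, channels, 1)                 # cheap path
        self.heavy = nn.Sequential(                                   # costly path
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.threshold = threshold

    def forward(self, x):
        s = self.saliency(x)                   # per-pixel saliency in [0, 1]
        mask = (s > self.threshold).float()    # hard routing decision
        # Blend outputs so flat regions avoid the heavy branch
        # (a real implementation would skip the computation, not just mask it).
        return mask * self.heavy(x) + (1.0 - mask) * self.light(x)

if __name__ == "__main__":
    y = SaliencyRouter()(torch.randn(1, 32, 64, 64))
    print(y.shape)
```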
arXiv Detail & Related papers (2022-10-14T07:49:03Z) - A Dynamic Residual Self-Attention Network for Lightweight Single Image Super-Resolution [17.094665593472214]
We propose a dynamic residual self-attention network (DRSAN) for lightweight single-image super-resolution (SISR).
DRSAN has dynamic residual connections based on dynamic residual attention (DRA), which adaptively changes its structure according to input statistics.
We also propose a residual self-attention (RSA) module to further boost the performance, which produces 3-dimensional attention maps without additional parameters.
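
As a rough illustration of the two ideas named above, the sketch below derives residual weights from pooled input statistics and forms a 3-D attention map directly from the features via a sigmoid, so the attention itself adds no parameters. All module shapes and the exact statistics used are assumptions, not the published DRSAN design.

```python
import torch
import torch.nn as nn

class DynamicResidualBlock(nn.Module):
    """Toy block: residual weights predicted from input statistics,
    plus a parameter-free 3-D attention map."""
    def __init__(self, channels=32, num_branches=3):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(num_branches)])
        # Tiny predictor: global-average-pooled statistics -> one weight per branch.
        self.predictor = nn.Sequential(
            nn.Linear(channels, num_branches), nn.Softmax(dim=-1))

    def forward(self, x):
        stats = x.mean(dim=(2, 3))                 # (B, C) input statistics
        w = self.predictor(stats)                  # (B, num_branches) dynamic weights
        res = sum(w[:, i, None, None, None] * b(x) for i, b in enumerate(self.branches))
        attn = torch.sigmoid(res)                  # 3-D (C, H, W) map, no extra params
        return x + attn * res

if __name__ == "__main__":
    out = DynamicResidualBlock()(torch.randn(2, 32, 48, 48))
    print(out.shape)
```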
arXiv Detail & Related papers (2021-12-08T06:41:21Z) - MPRNet: Multi-Path Residual Network for Lightweight Image Super Resolution [2.3576437999036473]
A novel lightweight super resolution network is proposed, which improves the SOTA performance in lightweight SR.
The proposed architecture also contains a new attention mechanism, Two-Fold Attention Module, to maximize the representation ability of the model.
arXiv Detail & Related papers (2020-11-09T17:11:15Z) - Accurate and Lightweight Image Super-Resolution with Model-Guided Deep Unfolding Network [63.69237156340457]
We present and advocate an explainable approach toward SISR named model-guided deep unfolding network (MoG-DUN).
MoG-DUN is accurate (producing fewer aliasing artifacts), computationally efficient (with reduced model parameters), and versatile (capable of handling multiple degradations).
The superiority of the proposed MoG-DUN over existing state-of-the-art image SR methods, including RCAN, SRDNF, and SRFBN, is substantiated by extensive experiments on several popular datasets and various degradation scenarios.
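
To make the "deep unfolding" idea concrete, here is a generic sketch of one unfolded stage: a data-fidelity gradient step with respect to an assumed degradation model, followed by a learned denoising prior. The degradation operator, step size, and denoiser below are illustrative assumptions, not MoG-DUN's actual modules.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnfoldingStage(nn.Module):
    """One generic model-guided unfolding step for x2 SR:
    gradient step on ||D(x) - y||^2 followed by a learned prior."""
    def __init__(self, channels=3, scale=2):
        super().__init__()
        self.scale = scale
        self.step = nn.Parameter(torch.tensor(0.5))   # learnable step size
        self.denoiser = nn.Sequential(                 # stand-in prior network
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1))

    def degrade(self, x):
        # Assumed degradation D: bicubic downsampling by the SR factor.
        return F.interpolate(x, scale_factor=1.0 / self.scale, mode="bicubic",
                             align_corners=False)

    def forward(self, x, y):
        # Data-fidelity gradient step: push D(x) toward the observed LR image y.
        residual = self.degrade(x) - y
        grad = F.interpolate(residual, scale_factor=self.scale, mode="bicubic",
                             align_corners=False)      # crude transpose of D
        x = x - self.step * grad
        # Prior step: a learned denoiser refines the estimate.
        return x + self.denoiser(x)

if __name__ == "__main__":
    y = torch.rand(1, 3, 32, 32)                       # observed LR image
    x = F.interpolate(y, scale_factor=2, mode="bicubic", align_corners=False)
    for _ in range(3):                                 # a few unfolded stages
        x = UnfoldingStage()(x, y)
    print(x.shape)                                     # (1, 3, 64, 64)
```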
arXiv Detail & Related papers (2020-09-14T08:23:37Z) - Real Image Super Resolution Via Heterogeneous Model Ensemble using GP-NAS [63.48801313087118]
We propose a new method for image super-resolution using a deep residual network with dense skip connections.
The proposed method won the first place in all three tracks of the AIM 2020 Real Image Super-Resolution Challenge.
arXiv Detail & Related papers (2020-09-02T22:33:23Z) - Lightweight image super-resolution with enhanced CNN [82.36883027158308]
Deep convolutional neural networks (CNNs) with strong expressive ability have achieved impressive performances on single image super-resolution (SISR).
We propose a lightweight enhanced SR CNN (LESRCNN) with three successive sub-blocks: an information extraction and enhancement block (IEEB), a reconstruction block (RB), and an information refinement block (IRB).
IEEB extracts hierarchical low-resolution (LR) features and aggregates the obtained features step-by-step to increase the memory ability of the shallow layers on deep layers for SISR.
RB converts low-frequency features into high-frequency features by fusing global and local features.
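
A minimal skeleton showing how the three described sub-blocks could be chained; the internal layers and the sub-pixel upsampling head are placeholders assumed for illustration, not the published LESRCNN layers.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out, n_layers=2):
    # Placeholder stack of 3x3 conv + ReLU layers.
    layers = []
    for i in range(n_layers):
        layers += [nn.Conv2d(c_in if i == 0 else c_out, c_out, 3, padding=1), nn.ReLU()]
    return nn.Sequential(*layers)

class LESRCNNSkeleton(nn.Module):
    """Skeleton of the IEEB -> RB -> IRB chain described in the summary."""
    def __init__(self, channels=64, scale=2):
        super().__init__()
        self.ieeb = conv_block(3, channels)        # information extraction & enhancement
        self.rb = conv_block(channels, channels)   # reconstruction block
        self.irb = conv_block(channels, channels)  # information refinement block
        self.upsample = nn.Sequential(             # placeholder sub-pixel upsampling head
            nn.Conv2d(channels, 3 * scale ** 2, 3, padding=1), nn.PixelShuffle(scale))

    def forward(self, lr):
        feats = self.ieeb(lr)
        feats = feats + self.rb(feats)             # fuse RB output via a residual path
        feats = self.irb(feats)
        return self.upsample(feats)

if __name__ == "__main__":
    print(LESRCNNSkeleton()(torch.rand(1, 3, 32, 32)).shape)  # (1, 3, 64, 64)
```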
arXiv Detail & Related papers (2020-07-08T18:03:40Z) - Deep Adaptive Inference Networks for Single Image Super-Resolution [72.7304455761067]
Single image super-resolution (SISR) has witnessed tremendous progress in recent years owing to the deployment of deep convolutional neural networks (CNNs).
In this paper, we take a step forward to address this issue by leveraging the adaptive inference networks for deep SISR (AdaDSR)
Our AdaDSR involves an SISR model as backbone and a lightweight adapter module which takes image features and resource constraint as input and predicts a map of local network depth.
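
The sketch below illustrates the adapter idea in simplified form: a small head maps features plus a scalar resource constraint to a per-location depth map, and each residual block is then applied only where the predicted depth reaches that block's index. Names, shapes, and the masking trick are illustrative assumptions, not the actual AdaDSR implementation.

```python
import torch
import torch.nn as nn

class DepthAdaptiveSR(nn.Module):
    """Toy adaptive-inference backbone: a depth map gates residual blocks per location."""
    def __init__(self, channels=32, max_depth=8):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
             for _ in range(max_depth)])
        # Adapter: features + broadcast resource constraint -> local depth in [0, max_depth].
        self.adapter = nn.Sequential(nn.Conv2d(channels + 1, 1, 3, padding=1), nn.Sigmoid())
        self.max_depth = max_depth

    def forward(self, feats, resource):
        # resource in [0, 1]: larger means more computation is allowed.
        r = torch.full_like(feats[:, :1], resource)
        depth = self.adapter(torch.cat([feats, r], dim=1)) * self.max_depth  # (B,1,H,W)
        x = feats
        for i, block in enumerate(self.blocks):
            mask = (depth > i).float()      # apply block i only where depth allows it
            x = x + mask * block(x)         # (a real system would skip masked regions)
        return x

if __name__ == "__main__":
    out = DepthAdaptiveSR()(torch.randn(1, 32, 48, 48), resource=0.5)
    print(out.shape)
```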
arXiv Detail & Related papers (2020-04-08T10:08:20Z) - Hierarchical Neural Architecture Search for Single Image Super-Resolution [18.624661846174412]
Deep neural networks have exhibited promising performance in image super-resolution (SR).
Most SR models follow a hierarchical architecture that contains both the cell-level design of computational blocks and the network-level design of the positions of upsampling blocks.
We propose a Hierarchical Neural Architecture Search (HNAS) method to automatically design promising architectures with different requirements of computation cost.
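
Complementing the cell-level sketch earlier, the toy example below shows how a network-level decision, such as where to place the upsampling block among several candidate positions, could also be relaxed into a softmax over learnable parameters. The candidate positions and blocks are illustrative assumptions, not the HNAS search space itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpsamplePositionSearch(nn.Module):
    """Toy network-level relaxation: softmax over where to place the x2 upsampler."""
    def __init__(self, channels=16, num_blocks=3):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
             for _ in range(num_blocks)])
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, channels * 4, 3, padding=1), nn.PixelShuffle(2))
        # One architecture parameter per candidate position of the upsampler.
        self.beta = nn.Parameter(torch.zeros(num_blocks))

    def forward(self, x):
        weights = F.softmax(self.beta, dim=0)
        outputs = []
        for pos in range(len(self.blocks)):
            h = x
            for i, block in enumerate(self.blocks):
                if i == pos:
                    h = self.upsample(h)   # candidate: upsample right before block `pos`
                h = block(h)
            outputs.append(h)
        # Softmax-weighted combination keeps the position choice differentiable;
        # after search the single best position would be kept.
        return sum(w * o for w, o in zip(weights, outputs))

if __name__ == "__main__":
    print(UpsamplePositionSearch()(torch.randn(1, 16, 24, 24)).shape)  # (1, 16, 48, 48)
```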
arXiv Detail & Related papers (2020-03-10T10:19:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.