Searching Efficient Model-guided Deep Network for Image Denoising
- URL: http://arxiv.org/abs/2104.02525v1
- Date: Tue, 6 Apr 2021 14:03:01 GMT
- Title: Searching Efficient Model-guided Deep Network for Image Denoising
- Authors: Qian Ning, Weisheng Dong, Xin Li, Jinjian Wu, Leida Li, Guangming Shi
- Abstract summary: We present a novel approach that connects model-guided design with NAS (MoD-NAS).
MoD-NAS employs a highly reusable width search strategy and a densely connected search block to automatically select the operations of each layer.
Experimental results on several popular datasets show that MoD-NAS achieves better PSNR performance than current state-of-the-art methods.
- Score: 61.65776576769698
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural architecture search (NAS) has recently reshaped our understanding of
various vision tasks. Similar to the success of NAS in high-level vision tasks,
it is possible to find a memory- and computation-efficient solution via NAS
with highly competent denoising performance. However, the optimization gap
between the super-network and the sub-architectures has remained an open issue
in both low-level and high-level vision. In this paper, we present a novel
approach to filling in this gap by connecting model-guided design with NAS
(MoD-NAS) and demonstrate its application to image denoising. Specifically,
we propose to construct a new search space under a model-guided framework and
to develop more stable and efficient differentiable search strategies. MoD-NAS
employs a highly reusable width search strategy and a densely connected search
block to automatically select the operations of each layer as well as the network
width and depth via gradient descent. During the search process, the proposed
MoD-NAS avoids mode collapse thanks to the smoother search space
designed under the model-guided framework. Experimental results on several
popular datasets show that MoD-NAS achieves better PSNR
performance than current state-of-the-art methods with fewer parameters, fewer
FLOPs, and less testing time.
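To make the search mechanics described above concrete, here is a minimal PyTorch sketch of differentiable operation and width selection: each layer mixes candidate operations with softmax-weighted architecture parameters, and a soft channel mask relaxes the discrete width choice, so operations, width, and (via an identity candidate) depth can all be selected by gradient descent. The candidate operations, width ratios, and class names are illustrative assumptions, not the authors' implementation.

    # Sketch only: op set, width ratios, and names are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MixedOp(nn.Module):
        """Softmax-weighted mixture of candidate operations for one layer."""
        def __init__(self, c):
            super().__init__()
            self.ops = nn.ModuleList([
                nn.Conv2d(c, c, 3, padding=1),              # plain 3x3 conv
                nn.Conv2d(c, c, 3, padding=2, dilation=2),  # dilated 3x3 conv
                nn.Identity(),                              # skip: lets depth shrink
            ])
            self.alpha = nn.Parameter(torch.zeros(len(self.ops)))  # op parameters

        def forward(self, x):
            w = F.softmax(self.alpha, dim=0)
            return sum(wi * op(x) for wi, op in zip(w, self.ops))

    class WidthSearch(nn.Module):
        """Relaxed width choice: a soft mask over candidate channel counts."""
        def __init__(self, c, ratios=(0.25, 0.5, 0.75, 1.0)):
            super().__init__()
            self.c, self.ratios = c, ratios
            self.beta = nn.Parameter(torch.zeros(len(ratios)))  # width parameters

        def forward(self, x):
            w = F.softmax(self.beta, dim=0)
            mask = torch.zeros(self.c, device=x.device)
            for wi, r in zip(w, self.ratios):
                mask[: int(self.c * r)] += wi  # wider choices cover more channels
            return x * mask.view(1, -1, 1, 1)

After the search converges, the top-weighted operation and width per layer would be kept and the derived network retrained from scratch, as is usual in differentiable NAS.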
Related papers
- Single Cell Training on Architecture Search for Image Denoising [16.72206392993489]
We reframe the optimal search problem by focusing at the component-block level.
In addition, we integrate an innovative dimension-matching module to deal with spatial and channel-wise mismatches (a sketch of this idea follows this entry).
Our proposed Denoising Prior Neural Architecture Search (DPNAS) completes an optimal architecture search for an image restoration task in just one day on a single GPU.
arXiv Detail & Related papers (2022-12-13T04:47:24Z)
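A minimal sketch of the dimension-matching idea from the DPNAS entry above: reconcile the spatial size and channel count of two feature maps before fusing them. The module name, bilinear resampling, and 1x1 projection are assumptions rather than the paper's exact design.

    # Sketch only: names and resampling choices are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DimensionMatch(nn.Module):
        """Project x to the spatial size and channel count of ref, then fuse."""
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.proj = nn.Conv2d(in_ch, out_ch, kernel_size=1)  # channel match

        def forward(self, x, ref):
            if x.shape[-2:] != ref.shape[-2:]:
                # spatial match via bilinear resampling
                x = F.interpolate(x, size=ref.shape[-2:], mode="bilinear",
                                  align_corners=False)
            return self.proj(x) + ref  # fuse once shapes agree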
- You Only Search Once: On Lightweight Differentiable Architecture Search for Resource-Constrained Embedded Platforms [10.11289927237036]
Differentiable neural architecture search (NAS) has emerged as the dominant approach to automatically designing competitive deep neural networks (DNNs).
We introduce a lightweight hardware-aware differentiable NAS framework dubbed LightNAS, striving to find the required architecture through a one-time search.
Extensive experiments show the superiority of LightNAS over previous state-of-the-art methods (a sketch of a hardware-aware search objective follows this entry).
arXiv Detail & Related papers (2022-08-30T02:23:23Z)
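A minimal sketch of a hardware-aware differentiable search objective in the spirit of the LightNAS entry above: the task loss is combined with an expected-latency penalty so that a single (one-time) search can target a latency budget. The latency table, budget, and weighting below are illustrative assumptions.

    # Sketch only: the latency model and hyperparameters are assumptions.
    import torch
    import torch.nn.functional as F

    def search_loss(logits, targets, arch_weights, op_latency_ms,
                    target_ms=5.0, lam=0.1):
        """arch_weights: (layers, ops) softmaxed architecture parameters;
        op_latency_ms: measured per-op latencies, broadcastable to that shape."""
        task = F.cross_entropy(logits, targets)
        expected = (arch_weights * op_latency_ms).sum()  # expected total latency
        penalty = F.relu(expected - target_ms)           # only above the budget
        return task + lam * penalty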
- Lightweight Monocular Depth with a Novel Neural Architecture Search Method [46.97673710849343]
This paper presents a novel neural architecture search method, called LiDNAS, for generating lightweight monocular depth estimation models.
We construct the search space on a pre-defined backbone network to balance layer diversity and search space size.
The LiDNAS-optimized models outperform state-of-the-art compact depth estimation models on NYU-Depth-v2, KITTI, and ScanNet.
arXiv Detail & Related papers (2021-08-25T08:06:28Z)
- Trilevel Neural Architecture Search for Efficient Single Image Super-Resolution [127.92235484598811]
This paper proposes a trilevel neural architecture search (NAS) method for efficient single image super-resolution (SR).
To model the discrete search space, we apply a new continuous relaxation that builds a hierarchical mixture of network paths, cell operations, and kernel widths (a sketch of the kernel-width relaxation follows this entry).
An efficient search algorithm is proposed to perform optimization in a hierarchical supernet manner.
arXiv Detail & Related papers (2021-01-17T12:19:49Z)
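A minimal sketch of the continuous relaxation described in the trilevel NAS entry above, shown for the kernel-width level only: the discrete kernel-size choice becomes a softmax mixture over convolutions, and the same relaxation would be applied at the network-path and cell-operation levels to form the hierarchy. Kernel sizes and names are assumptions.

    # Sketch only: kernel sizes and names are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class KernelWidthMix(nn.Module):
        """Softmax mixture over convolutions with different kernel sizes."""
        def __init__(self, c, sizes=(1, 3, 5)):
            super().__init__()
            self.convs = nn.ModuleList(
                nn.Conv2d(c, c, k, padding=k // 2) for k in sizes
            )
            self.gamma = nn.Parameter(torch.zeros(len(sizes)))

        def forward(self, x):
            w = F.softmax(self.gamma, dim=0)
            return sum(wi * conv(x) for wi, conv in zip(w, self.convs))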
- Memory-Efficient Hierarchical Neural Architecture Search for Image Restoration [68.6505473346005]
We propose HiNAS, a memory-efficient hierarchical NAS method for image denoising and image super-resolution.
With a single GTX 1080Ti GPU, searching takes only about 1 hour for the denoising network on BSD500 and 3.5 hours for the super-resolution structure on DIV2K.
arXiv Detail & Related papers (2020-12-24T12:06:17Z)
- Binarized Neural Architecture Search for Efficient Object Recognition [120.23378346337311]
Binarized neural architecture search (BNAS) produces extremely compressed models that reduce the huge computational cost of edge computing on embedded devices.
An accuracy of 96.53% vs. 97.22% is achieved on the CIFAR-10 dataset, but with a significantly compressed model and a 40% faster search than the state-of-the-art PC-DARTS (a sketch of the underlying binarization trick follows this entry).
arXiv Detail & Related papers (2020-09-08T15:51:23Z)
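A minimal sketch of weight binarization with a straight-through estimator (STE), the generic building block behind binarized architectures like those BNAS searches over; this is a standard formulation, not BNAS's exact scheme.

    # Sketch only: a generic STE binarizer, not BNAS's exact scheme.
    import torch

    class BinarizeSTE(torch.autograd.Function):
        @staticmethod
        def forward(ctx, w):
            ctx.save_for_backward(w)
            return torch.sign(w)  # forward pass uses sign(w) in {-1, 0, +1}

        @staticmethod
        def backward(ctx, grad_out):
            (w,) = ctx.saved_tensors
            # STE: pass gradients through where |w| <= 1, block them elsewhere
            return grad_out * (w.abs() <= 1).to(grad_out.dtype)

    def binarize(w):
        return BinarizeSTE.apply(w)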
- Neural Architecture Search as Sparse Supernet [78.09905626281046]
This paper enlarges Neural Architecture Search (NAS) from single-path and multi-path search to automated mixed-path search.
We model the NAS problem as a sparse supernet using a new continuous architecture representation with a mixture of sparsity constraints.
The sparse supernet enables us to automatically achieve sparsely mixed paths upon a compact set of nodes (a sketch of a sparsity-constrained objective follows this entry).
arXiv Detail & Related papers (2020-07-31T14:51:52Z)
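A minimal sketch of a sparsity-constrained supernet objective in the spirit of the entry above: an L1 penalty on continuous architecture parameters drives most candidate paths toward zero, leaving a sparse mixture of paths over a compact set of nodes. The penalty form and coefficient are assumptions.

    # Sketch only: the penalty form and coefficient are assumptions.
    import torch

    def supernet_loss(task_loss, arch_params, mu=1e-3):
        """task_loss: scalar tensor; arch_params: iterable of architecture tensors."""
        l1 = sum(p.abs().sum() for p in arch_params)  # encourages sparse paths
        return task_loss + mu * l1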
- Bonsai-Net: One-Shot Neural Architecture Search via Differentiable Pruners [1.4180331276028662]
One-shot Neural Architecture Search (NAS) aims to minimize the computational expense of discovering state-of-the-art models.
We present Bonsai-Net, an efficient one-shot NAS method to explore our relaxed search space.
arXiv Detail & Related papers (2020-06-12T14:44:00Z)
- Progressive Automatic Design of Search Space for One-Shot Neural Architecture Search [15.017964136568061]
It has been observed that a model with higher one-shot accuracy does not necessarily perform better when trained stand-alone.
We propose Progressive Automatic Design of the search space, named PAD-NAS.
In this way, PAD-NAS can automatically design the operations for each layer and achieve a trade-off between search space quality and model diversity.
arXiv Detail & Related papers (2020-05-15T14:21:07Z)