Memory-Efficient Hierarchical Neural Architecture Search for Image
Restoration
- URL: http://arxiv.org/abs/2012.13212v2
- Date: Mon, 28 Dec 2020 03:28:13 GMT
- Title: Memory-Efficient Hierarchical Neural Architecture Search for Image
Restoration
- Authors: Haokui Zhang, Ying Li, Chengrong Gong, Hao Chen, Zongwen Bai, Chunhua
Shen
- Abstract summary: We propose a memory-efficient hierarchical NAS, HiNAS, for image denoising and image super-resolution tasks.
With a single GTX 1080 Ti GPU, it takes only about 1 hour to search for the denoising network on BSD500 and 3.5 hours to search for the super-resolution structure on DIV2K.
- Score: 68.6505473346005
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, much attention has been devoted to neural architecture search (NAS)
approaches, which often outperform manually designed architectures on high-level
vision tasks. Inspired by this, we attempt to leverage NAS techniques to
automatically design efficient network architectures for low-level image
restoration tasks. In this paper, we propose a memory-efficient hierarchical
NAS, HiNAS, and apply it to two such tasks: image denoising and image
super-resolution. HiNAS adopts gradient-based search strategies and builds a
flexible hierarchical search space, comprising an inner search space and an outer
search space, which are in charge of designing cell architectures and deciding cell
widths, respectively. For the inner search space, we propose a layer-wise architecture
sharing strategy (LWAS), resulting in more flexible architectures and better
performance. For the outer search space, we propose a cell sharing strategy to save
memory and considerably accelerate the search. The proposed HiNAS is
both memory and computation efficient. With a single GTX 1080 Ti GPU, it takes
only about 1 hour to search for the denoising network on BSD500 and 3.5 hours
to search for the super-resolution structure on DIV2K. Experimental results
show that the architectures found by HiNAS have fewer parameters and enjoy
faster inference, while achieving highly competitive performance compared
with state-of-the-art methods.
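As a rough illustration of the gradient-based hierarchical search space described in the abstract, the sketch below relaxes an inner space (which candidate operation a cell uses) and an outer space (which width a cell uses) with softmax-weighted architecture parameters, in the style of differentiable NAS. The candidate operation set, the width choices, and all class and variable names are assumptions made for this example; it is not the authors' implementation of HiNAS (which additionally includes layer-wise architecture sharing and cell sharing).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Candidate operations for the inner search space (assumed set, for illustration).
CANDIDATE_OPS = {
    "conv3x3": lambda c: nn.Conv2d(c, c, 3, padding=1),
    "conv5x5": lambda c: nn.Conv2d(c, c, 5, padding=2),
    "skip":    lambda c: nn.Identity(),
}

class MixedOp(nn.Module):
    """Inner search space: softmax-weighted mixture of candidate operations."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([build(channels) for build in CANDIDATE_OPS.values()])
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(self.ops)))  # op architecture params

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

class WidthSearchCell(nn.Module):
    """Outer search space: softmax-weighted mixture over candidate cell widths."""
    def __init__(self, in_channels, candidate_widths=(16, 32, 64)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_channels, w, 1), MixedOp(w), nn.Conv2d(w, in_channels, 1))
            for w in candidate_widths
        ])
        self.beta = nn.Parameter(1e-3 * torch.randn(len(self.branches)))  # width architecture params

    def forward(self, x):
        weights = F.softmax(self.beta, dim=0)
        return sum(w * branch(x) for w, branch in zip(weights, self.branches))

# After gradient-based bi-level optimization of (alpha, beta) together with the
# network weights, the highest-weight op and width would be kept for the final cell.
x = torch.randn(1, 3, 32, 32)
print(WidthSearchCell(in_channels=3)(x).shape)  # torch.Size([1, 3, 32, 32])
```

In a full search, the architecture parameters would typically be updated on validation data while the convolution weights are updated on training data, and the final architecture is obtained by keeping the argmax choices in each mixture.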
Related papers
- Search-time Efficient Device Constraints-Aware Neural Architecture
Search [6.527454079441765]
Deep learning techniques like computer vision and natural language processing can be computationally expensive and memory-intensive.
We automate the construction of task-specific deep learning architectures optimized for device constraints through Neural Architecture Search (NAS).
We present DCA-NAS, a principled method of fast neural network architecture search that incorporates edge-device constraints.
arXiv Detail & Related papers (2023-07-10T09:52:28Z) - L$^{2}$NAS: Learning to Optimize Neural Architectures via
Continuous-Action Reinforcement Learning [23.25155249879658]
Differentiable neural architecture search (NAS) has achieved remarkable results in deep neural network design.
We show that L$^{2}$NAS achieves state-of-the-art results on the NAS-Bench-201 benchmark as well as the DARTS and Once-for-All search spaces.
arXiv Detail & Related papers (2021-09-25T19:26:30Z) - Searching Efficient Model-guided Deep Network for Image Denoising [61.65776576769698]
We present a novel approach by connecting model-guided design with NAS (MoD-NAS).
MoD-NAS employs a highly reusable width search strategy and a densely connected search block to automatically select the operations of each layer.
Experimental results on several popular datasets show that our MoD-NAS has achieved even better PSNR performance than current state-of-the-art methods.
arXiv Detail & Related papers (2021-04-06T14:03:01Z) - Binarized Neural Architecture Search for Efficient Object Recognition [120.23378346337311]
Binarized neural architecture search (BNAS) produces extremely compressed models to reduce huge computational cost on embedded devices for edge computing.
An accuracy of 96.53% vs. 97.22% is achieved on the CIFAR-10 dataset, but with a significantly compressed model, and a 40% faster search than the state-of-the-art PC-DARTS.
arXiv Detail & Related papers (2020-09-08T15:51:23Z) - DA-NAS: Data Adapted Pruning for Efficient Neural Architecture Search [76.9225014200746]
Efficient search is a core issue in Neural Architecture Search (NAS).
We present DA-NAS that can directly search the architecture for large-scale target tasks while allowing a large candidate set in a more efficient manner.
It is 2x faster than previous methods while the accuracy is currently state-of-the-art, at 76.2% under a small FLOPs constraint.
arXiv Detail & Related papers (2020-03-27T17:55:21Z) - BNAS:An Efficient Neural Architecture Search Approach Using Broad
Scalable Architecture [62.587982139871976]
We propose Broad Neural Architecture Search (BNAS), where we elaborately design a broad scalable architecture dubbed Broad Convolutional Neural Network (BCNN).
BNAS delivers a search cost of 0.19 days, which is 2.37x less expensive than ENAS, which ranks best among reinforcement learning-based NAS approaches.
arXiv Detail & Related papers (2020-01-18T15:07:55Z) - DDPNAS: Efficient Neural Architecture Search via Dynamic Distribution
Pruning [135.27931587381596]
We propose an efficient and unified NAS framework termed DDPNAS via dynamic distribution pruning.
In particular, we first sample architectures from a joint categorical distribution. Then the search space is dynamically pruned and its distribution is updated every few epochs.
With the proposed efficient network generation method, we directly obtain the optimal neural architectures under given constraints (a minimal sketch of this sample-and-prune loop is given after the list).
arXiv Detail & Related papers (2019-05-28T06:35:52Z)
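As a rough illustration of the dynamic distribution pruning idea summarized in the DDPNAS entry above, here is a minimal, self-contained Python sketch: architectures are sampled from a joint categorical distribution over per-layer operations, scored with a stand-in reward, and the least likely candidates are pruned every few epochs. The reward function, the probability update rule, and all constants and names are assumptions for demonstration, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
num_layers, num_ops = 4, 5
probs = np.full((num_layers, num_ops), 1.0 / num_ops)  # joint categorical distribution
alive = np.ones((num_layers, num_ops), dtype=bool)     # ops not yet pruned

def sample_architecture():
    """Draw one operation index per layer from the current distribution."""
    return [int(rng.choice(num_ops, p=probs[l])) for l in range(num_layers)]

def evaluate(arch):
    """Stand-in for training/validating the sampled architecture."""
    return rng.random()

for epoch in range(30):
    arch = sample_architecture()
    reward = evaluate(arch)
    # Reward-weighted update of the sampling distribution (illustrative rule).
    for layer, op in enumerate(arch):
        probs[layer, op] += 0.1 * reward
    probs = np.where(alive, probs, 0.0)
    probs /= probs.sum(axis=1, keepdims=True)
    # Every few epochs, prune the currently least likely surviving op per layer.
    if (epoch + 1) % 10 == 0:
        for layer in range(num_layers):
            if alive[layer].sum() > 1:
                candidates = np.flatnonzero(alive[layer])
                worst = candidates[np.argmin(probs[layer, candidates])]
                alive[layer, worst] = False
                probs[layer, worst] = 0.0
                probs[layer] /= probs[layer].sum()

print("selected op per layer:", probs.argmax(axis=1).tolist())
```

Pruning shrinks the sampling space over time, which is what makes this family of methods cheap: late epochs only spend compute on candidates the distribution still considers plausible.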