Trilevel Neural Architecture Search for Efficient Single Image
Super-Resolution
- URL: http://arxiv.org/abs/2101.06658v1
- Date: Sun, 17 Jan 2021 12:19:49 GMT
- Title: Trilevel Neural Architecture Search for Efficient Single Image
Super-Resolution
- Authors: Yan Wu, Zhiwu Huang, Suryansh Kumar, Rhea Sanjay Sukthanker, Radu
Timofte, Luc Van Gool
- Abstract summary: This paper proposes a trilevel neural architecture search (NAS) method for efficient single image super-resolution (SR).
To model the discrete search space, we apply a new continuous relaxation that builds a hierarchical mixture of network paths, cell operations, and kernel widths.
An efficient search algorithm is proposed to perform optimization in a hierarchical supernet manner.
- Score: 127.92235484598811
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes a trilevel neural architecture search (NAS)
method for efficient single image super-resolution (SR). For that, we first
define the discrete search space at three levels, i.e., the network level,
cell level, and kernel level (convolution kernel). To model the discrete
search space, we apply a new continuous relaxation on the discrete search
spaces to build a hierarchical mixture of network paths, cell operations, and
kernel widths. Later, an efficient search algorithm is proposed to perform
optimization in a hierarchical supernet manner, which provides a globally
optimized and compressed network via joint convolution kernel width pruning,
cell structure search, and network path optimization. Unlike current NAS
methods, we exploit a sorted sparsestmax activation to let the three-level
neural structures contribute sparsely. Consequently, our NAS optimization
progressively converges to the neural structures with dominant contributions
to the supernet. Additionally, our proposed optimization construction enables
simultaneous search and training in a single phase, which dramatically reduces
search and training time compared to traditional NAS algorithms. Experiments
on the standard benchmark datasets demonstrate that our NAS algorithm provides
SR models that are significantly lighter in terms of the number of parameters
and FLOPs, with PSNR values comparable to the current state-of-the-art.
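
The snippet below is a minimal, hypothetical sketch of the kind of continuous relaxation the abstract describes: a cell-level mixture over candidate convolution kernel widths, weighted by learnable architecture parameters. It assumes PyTorch, and the class and parameter names are illustrative; it uses a plain softmax where the paper uses a sorted sparsestmax to keep the mixture sparse, and it is not the authors' implementation.

```python
# Minimal sketch of a continuous relaxation over a discrete kernel-width /
# cell-operation search space (DARTS-style). Names and the use of a plain
# softmax are illustrative assumptions; the paper instead applies a sorted
# sparsestmax so that only a few candidates keep nonzero weight.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MixedOp(nn.Module):
    """Weighted mixture of candidate convolutions (one per kernel width)."""

    def __init__(self, channels, kernel_widths=(1, 3, 5)):
        super().__init__()
        # Kernel-level choices: one conv per candidate kernel width.
        self.ops = nn.ModuleList(
            [nn.Conv2d(channels, channels, k, padding=k // 2)
             for k in kernel_widths]
        )
        # One architecture parameter (logit) per candidate.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        # Continuous relaxation: a convex combination of all candidates.
        weights = F.softmax(self.alpha, dim=0)  # sparsestmax in the paper
        return sum(w * op(x) for w, op in zip(weights, self.ops))


if __name__ == "__main__":
    # Architecture parameters (alpha) and convolution weights can be trained
    # jointly in one phase; low-weight candidates are pruned afterwards.
    block = MixedOp(channels=16)
    y = block(torch.randn(2, 16, 32, 32))
    print(y.shape)  # torch.Size([2, 16, 32, 32])
```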
Related papers
- TopoNAS: Boosting Search Efficiency of Gradient-based NAS via Topological Simplification [11.08910129925713]
TopoNAS is a model-agnostic approach for gradient-based one-shot NAS.
It significantly reduces searching time and memory usage by topological simplification of searchable paths.
arXiv Detail & Related papers (2024-08-02T15:01:29Z)
- HKNAS: Classification of Hyperspectral Imagery Based on Hyper Kernel Neural Architecture Search [104.45426861115972]
We propose to directly generate structural parameters by utilizing the specifically designed hyper kernels.
We obtain three kinds of networks to separately conduct pixel-level or image-level classifications with 1-D or 3-D convolutions.
A series of experiments on six public datasets demonstrate that the proposed methods achieve state-of-the-art results.
arXiv Detail & Related papers (2023-04-23T17:27:40Z)
- Tiered Pruning for Efficient Differentiable Inference-Aware Neural Architecture Search [0.0]
First, we introduce a bi-path building block for DNAS, which can search over inner hidden dimensions at low memory and compute cost.
Second, we present an algorithm for pruning blocks within a layer of the SuperNet during the search.
Third, we describe a novel technique for pruning unnecessary layers during the search.
arXiv Detail & Related papers (2022-09-23T18:03:54Z)
- OPANAS: One-Shot Path Aggregation Network Architecture Search for Object Detection [82.04372532783931]
Recently, neural architecture search (NAS) has been exploited to design feature pyramid networks (FPNs).
We propose a novel One-Shot Path Aggregation Network Architecture Search (OPANAS) algorithm, which significantly improves both searching efficiency and detection accuracy.
arXiv Detail & Related papers (2021-03-08T01:48:53Z)
- Neural Architecture Search as Sparse Supernet [78.09905626281046]
This paper aims at enlarging the problem of Neural Architecture Search (NAS) from Single-Path and Multi-Path Search to automated Mixed-Path Search.
We model the NAS problem as a sparse supernet using a new continuous architecture representation with a mixture of sparsity constraints.
The sparse supernet enables us to automatically achieve sparsely-mixed paths upon a compact set of nodes.
arXiv Detail & Related papers (2020-07-31T14:51:52Z)
- Hyperparameter Optimization in Neural Networks via Structured Sparse Recovery [54.60327265077322]
We study two important problems in the automated design of neural networks through the lens of sparse recovery methods.
In the first part of this paper, we establish a novel connection between HPO and structured sparse recovery.
In the second part of this paper, we establish a connection between NAS and structured sparse recovery.
arXiv Detail & Related papers (2020-07-07T00:57:09Z)
- DC-NAS: Divide-and-Conquer Neural Architecture Search [108.57785531758076]
We present a divide-and-conquer (DC) approach to effectively and efficiently search deep neural architectures.
We achieve a 75.1% top-1 accuracy on the ImageNet dataset, which is higher than that of state-of-the-art methods using the same search space.
arXiv Detail & Related papers (2020-05-29T09:02:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.