Fast Neural Architecture Search for Lightweight Dense Prediction
Networks
- URL: http://arxiv.org/abs/2203.01994v2
- Date: Mon, 7 Mar 2022 05:13:24 GMT
- Title: Fast Neural Architecture Search for Lightweight Dense Prediction
Networks
- Authors: Lam Huynh, Esa Rahtu, Jiri Matas, Janne Heikkila
- Abstract summary: We present LDP, a lightweight dense prediction neural architecture search (NAS) framework.
Starting from a pre-defined generic backbone, LDP applies the novel Assisted Tabu Search for efficient architecture exploration.
Experiments show that the proposed framework yields consistent improvements on all tested dense prediction tasks.
- Score: 41.605107921584775
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present LDP, a lightweight dense prediction neural architecture search
(NAS) framework. Starting from a pre-defined generic backbone, LDP applies the
novel Assisted Tabu Search for efficient architecture exploration. LDP is fast
and suitable for various dense estimation problems, unlike previous NAS methods
that are either computationally demanding or deployed only for a single subtask.
The performance of LDP is evaluated on monocular depth estimation, semantic
segmentation, and image super-resolution tasks on diverse datasets, including
NYU-Depth-v2, KITTI, Cityscapes, COCO-stuff, DIV2K, Set5, Set14, BSD100,
Urban100. Experiments show that the proposed framework yields consistent
improvements on all tested dense prediction tasks, while being $5\%-315\%$ more
compact in terms of the number of model parameters than prior arts.
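For intuition, the following is a minimal, hypothetical Python sketch of a tabu-search loop over a fixed backbone, in the spirit of the Assisted Tabu Search described above. The operator set, widths, neighbour moves, and `evaluate` proxy are illustrative placeholders rather than the authors' implementation, and the "assistance" (a learned model ranking candidate moves) is only indicated in a comment.

```python
import random

# Hypothetical per-stage search space over a fixed backbone (not the paper's space).
OPS = ["mbconv3", "mbconv5", "dwconv3", "identity"]
WIDTHS = [16, 24, 32, 48]
NUM_STAGES = 5

def random_arch():
    return [(random.choice(OPS), random.choice(WIDTHS)) for _ in range(NUM_STAGES)]

def neighbors(arch):
    """Single-edit moves: change the op or the width of one stage."""
    moves = []
    for i, (op, w) in enumerate(arch):
        moves += [arch[:i] + [(o, w)] + arch[i + 1:] for o in OPS if o != op]
        moves += [arch[:i] + [(op, x)] + arch[i + 1:] for x in WIDTHS if x != w]
    return moves

def evaluate(arch):
    """Placeholder proxy score, e.g. validation accuracy minus a parameter penalty."""
    return random.random() - 1e-3 * sum(w for _, w in arch)

def tabu_search(iterations=50, tabu_size=20):
    current = random_arch()
    best, best_score = current, evaluate(current)
    tabu = [tuple(current)]
    for _ in range(iterations):
        # An "assistant" model could rank these moves instead of brute-force scoring.
        candidates = [a for a in neighbors(current) if tuple(a) not in tabu]
        if not candidates:
            break
        score, current = max(((evaluate(a), a) for a in candidates), key=lambda t: t[0])
        tabu = (tabu + [tuple(current)])[-tabu_size:]   # bounded tabu list
        if score > best_score:
            best, best_score = current, score
    return best, best_score

print(tabu_search())
```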
Related papers
- Toward Edge-Efficient Dense Predictions with Synergistic Multi-Task
Neural Architecture Search [22.62389136288258]
We propose a novel and scalable solution to address the challenges of developing efficient dense predictions on edge platforms.
Our first key insight is that Multi-Task Learning (MTL) and hardware-aware Neural Architecture Search (NAS) can work in synergy to greatly benefit on-device Dense Predictions (DP).
We propose JAReD, an improved, easy-to-adopt Joint Absolute-Relative Depth loss that reduces up to 88% of the undesired noise while simultaneously boosting accuracy.
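The summary above does not give the JAReD formulation; the following is a hedged sketch of one plausible joint absolute-relative depth loss, with `alpha` as a hypothetical weighting parameter, not the paper's exact definition.

```python
import numpy as np

def joint_abs_rel_depth_loss(pred, target, alpha=0.5, eps=1e-6):
    """Hypothetical joint absolute-relative depth loss.

    Combines an absolute error term with a relative (ratio-based) term;
    the actual JAReD formulation may differ from this sketch.
    """
    abs_term = np.mean(np.abs(pred - target))                   # absolute depth error
    rel_term = np.mean(np.abs(pred - target) / (target + eps))  # relative depth error
    return alpha * abs_term + (1.0 - alpha) * rel_term

# Example usage with dummy depth maps (in meters).
pred = np.random.uniform(0.5, 10.0, size=(240, 320))
target = np.random.uniform(0.5, 10.0, size=(240, 320))
print(joint_abs_rel_depth_loss(pred, target))
```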
arXiv Detail & Related papers (2022-10-04T04:49:08Z)
- Tiered Pruning for Efficient Differentiable Inference-Aware Neural Architecture Search [0.0]
First, we introduce a bi-path building block for DNAS that can search over inner hidden dimensions efficiently in both memory and compute.
Second, we present an algorithm for pruning blocks within a layer of the SuperNet during the search.
Third, we describe a novel technique for pruning unnecessary layers during the search.
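As a rough illustration of the tiered idea (prune candidate blocks inside a layer, then drop layers), here is a hypothetical toy loop; the candidate blocks, scoring, and pruning schedule are placeholders, not the paper's algorithm.

```python
# Each layer holds several candidate blocks with learned scores; low-scoring
# blocks are pruned first, and a layer is dropped once only "identity" remains.
import random

layers = [
    {"candidates": {"mbconv3": 0.0, "mbconv5": 0.0, "identity": 0.0}}
    for _ in range(6)
]

def search_step(layers):
    # Stand-in for a gradient update of the architecture scores.
    for layer in layers:
        for name in layer["candidates"]:
            layer["candidates"][name] += random.gauss(0.0, 0.1)

def prune(layers, keep_at_least=1):
    for layer in layers:
        cands = layer["candidates"]
        if len(cands) > keep_at_least:
            worst = min(cands, key=cands.get)   # prune the weakest block in this layer
            del cands[worst]
    # Drop layers whose only surviving candidate is the identity block.
    return [l for l in layers if list(l["candidates"]) != ["identity"]]

for epoch in range(12):
    search_step(layers)
    if epoch % 3 == 2:          # prune every few epochs
        layers = prune(layers)

print([list(l["candidates"]) for l in layers])
```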
arXiv Detail & Related papers (2022-09-23T18:03:54Z)
- Pruning-as-Search: Efficient Neural Architecture Search via Channel Pruning and Structural Reparameterization [50.50023451369742]
Pruning-as-Search (PaS) is an end-to-end channel pruning method to search out desired sub-network automatically and efficiently.
Our proposed architecture outperforms prior arts by around $1.0\%$ top-1 accuracy on the ImageNet-1000 classification task.
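A minimal sketch of the channel-pruning-with-reparameterization idea, assuming per-output-channel gates that are thresholded and then folded into the convolution weights; the gate variable, threshold, and tensor shapes are illustrative only.

```python
import numpy as np

# Each output channel of a conv layer gets a gate; channels with small gates are
# removed, and the surviving gates are folded back into the weights.
rng = np.random.default_rng(0)
weight = rng.normal(size=(64, 32, 3, 3))   # (out_ch, in_ch, kH, kW)
gate = rng.normal(scale=0.5, size=64)      # per-output-channel gate (learned in practice)

def prune_and_fold(weight, gate, threshold=0.3):
    keep = np.abs(gate) >= threshold                       # the "searched" sub-network
    pruned = weight[keep] * gate[keep][:, None, None, None]  # fold gates into weights
    return pruned, keep

pruned_weight, keep = prune_and_fold(weight, gate)
print("kept channels:", keep.sum(), "of", weight.shape[0])
```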
arXiv Detail & Related papers (2022-06-02T17:58:54Z)
- Lightweight Monocular Depth with a Novel Neural Architecture Search Method [46.97673710849343]
This paper presents a novel neural architecture search method, called LiDNAS, for generating lightweight monocular depth estimation models.
We construct the search space on a pre-defined backbone network to balance layer diversity and search space size.
The LiDNAS-optimized models achieve results superior to state-of-the-art compact depth estimation methods on NYU-Depth-v2, KITTI, and ScanNet.
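As an illustration of what a search space defined on a fixed backbone might look like, here is a hypothetical configuration; the stage names and candidate sets are invented for this sketch and are not the LiDNAS space.

```python
from math import prod

# A search space on top of a fixed backbone: each searchable stage may vary its
# block type and expansion ratio, while stem and head stay fixed.
backbone_stages = ["stem", "stage1", "stage2", "stage3", "stage4", "head"]
search_space = {
    stage: {"block": ["mbconv3", "mbconv5"], "expansion": [2, 4, 6]}
    for stage in backbone_stages[1:-1]
}

# Search-space size grows multiplicatively, so per-stage choices are kept small
# to balance layer diversity against tractability.
size = prod(len(v["block"]) * len(v["expansion"]) for v in search_space.values())
print("number of candidate architectures:", size)
```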
arXiv Detail & Related papers (2021-08-25T08:06:28Z)
- Weak NAS Predictors Are All You Need [91.11570424233709]
Recent predictor-based NAS approaches attempt to solve the problem with two key steps: sampling some architecture-performance pairs and fitting a proxy accuracy predictor.
We shift the paradigm from finding a complicated predictor that covers the whole architecture space to a set of weaker predictors that progressively move towards the high-performance sub-space.
Our method requires fewer samples to find the top-performing architectures on NAS-Bench-101 and NAS-Bench-201, and it achieves state-of-the-art ImageNet performance on the NASNet search space.
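A toy sketch of the progressive weak-predictor loop, assuming binary architecture encodings, a linear least-squares predictor, and a synthetic `true_accuracy` stand-in for real training; none of these choices come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_archs(n, dim=10):
    """Random binary architecture encodings (placeholder for a real search space)."""
    return rng.integers(0, 2, size=(n, dim)).astype(float)

def true_accuracy(archs):
    """Placeholder for an expensive training-and-evaluation step."""
    hidden = rng.normal(size=archs.shape[1])
    return archs @ hidden + 0.1 * rng.normal(size=len(archs))

# Progressively fit weak (linear) predictors and zoom into the promising region.
evaluated_x = sample_archs(20)
evaluated_y = true_accuracy(evaluated_x)
for it in range(4):
    # Fit a weak predictor on everything evaluated so far.
    w, *_ = np.linalg.lstsq(evaluated_x, evaluated_y, rcond=None)
    # Sample a large pool and keep only the architectures the predictor likes.
    pool = sample_archs(500)
    top = pool[np.argsort(pool @ w)[-20:]]
    # Evaluate the shortlisted architectures and grow the training set.
    evaluated_x = np.vstack([evaluated_x, top])
    evaluated_y = np.concatenate([evaluated_y, true_accuracy(top)])

best = evaluated_x[np.argmax(evaluated_y)]
print("best architecture encoding:", best, "score:", evaluated_y.max())
```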
arXiv Detail & Related papers (2021-02-21T01:58:43Z)
- Finding Non-Uniform Quantization Schemes using Multi-Task Gaussian Processes [12.798516310559375]
We show that with significantly lower precision in the last layers we achieve a minimal loss of accuracy with appreciable memory savings.
We test our findings on the CIFAR10 and ImageNet datasets using the VGG, ResNet and GoogLeNet architectures.
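For illustration, here is a simple per-layer uniform quantizer with a hand-picked bitwidth schedule that uses fewer bits in the last layers; the actual schemes in the paper are found with multi-task Gaussian processes, which this sketch does not implement.

```python
import numpy as np

def quantize(weights, bits):
    """Simple symmetric uniform quantizer, used only for illustration."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(weights)) / qmax
    return np.round(weights / scale) * scale

rng = np.random.default_rng(0)
layers = [rng.normal(size=(256, 256)) for _ in range(6)]

# Hypothetical non-uniform scheme: more bits early, fewer bits in the last layers.
bitwidths = [8, 8, 6, 6, 4, 2]
quantized = [quantize(w, b) for w, b in zip(layers, bitwidths)]

for i, (w, q, b) in enumerate(zip(layers, quantized, bitwidths)):
    err = np.mean((w - q) ** 2)
    print(f"layer {i}: {b}-bit, mean squared quantization error {err:.5f}")
```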
arXiv Detail & Related papers (2020-07-15T15:16:18Z)
- DrNAS: Dirichlet Neural Architecture Search [88.56953713817545]
We treat the continuously relaxed architecture mixing weights as random variables, modeled by a Dirichlet distribution.
With recently developed pathwise derivatives, the Dirichlet parameters can be easily optimized with gradient-based optimizers.
To alleviate the large memory consumption of differentiable NAS, we propose a simple yet effective progressive learning scheme.
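A minimal NumPy sketch of Dirichlet-sampled mixing weights over candidate operations; the concentration values and operations are placeholders, and the pathwise-gradient optimization of the Dirichlet parameters is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Candidate operations on an edge of the cell (placeholders).
ops = [
    lambda x: x,                      # identity
    lambda x: np.maximum(x, 0.0),     # relu-like op
    lambda x: 0.5 * x,                # scaled op
]

# Dirichlet concentration parameters over the candidate ops; in DrNAS these are
# the architecture parameters, learned via pathwise gradients (not shown here).
concentration = np.array([1.0, 2.0, 0.5])

def mixed_op(x):
    weights = rng.dirichlet(concentration)        # sample mixing weights
    return sum(w * op(x) for w, op in zip(weights, ops))

x = rng.normal(size=(4, 16))
print(mixed_op(x).shape)
```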
arXiv Detail & Related papers (2020-06-18T08:23:02Z)
- DDPNAS: Efficient Neural Architecture Search via Dynamic Distribution Pruning [135.27931587381596]
We propose an efficient and unified NAS framework termed DDPNAS via dynamic distribution pruning.
In particular, we first sample architectures from a joint categorical distribution. Then the search space is dynamically pruned and its distribution is updated every few epochs.
With the proposed efficient network generation method, we directly obtain the optimal neural architectures under given constraints.
arXiv Detail & Related papers (2019-05-28T06:35:52Z)
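A hypothetical sketch of the loop summarized in the DDPNAS entry above: sample architectures from per-layer categorical distributions, update the distributions from observed rewards, and prune low-probability candidates every few epochs. The operator set, reward proxy, and update rule are placeholders, not the paper's algorithm.

```python
import random
from collections import defaultdict

OPS = ["conv3", "conv5", "dwconv3", "identity", "maxpool"]
NUM_LAYERS = 4
probs = [{op: 1.0 / len(OPS) for op in OPS} for _ in range(NUM_LAYERS)]

def sample_arch():
    return [random.choices(list(p), weights=list(p.values()))[0] for p in probs]

def reward(arch):
    """Placeholder for a cheap proxy evaluation of the sampled architecture."""
    return sum(len(op) for op in arch) * 0.01 + random.random()

for epoch in range(15):
    # Accumulate rewards for the sampled per-layer choices.
    scores = [defaultdict(float) for _ in range(NUM_LAYERS)]
    for _ in range(32):
        arch = sample_arch()
        r = reward(arch)
        for layer, op in enumerate(arch):
            scores[layer][op] += r
    # Update each layer's categorical distribution toward higher-reward ops.
    for layer in range(NUM_LAYERS):
        total = sum(scores[layer].values()) or 1.0
        for op in probs[layer]:
            probs[layer][op] = 0.5 * probs[layer][op] + 0.5 * scores[layer][op] / total
    # Every few epochs, prune the lowest-probability op from layers with >2 options left.
    if epoch % 5 == 4:
        for layer in range(NUM_LAYERS):
            if len(probs[layer]) > 2:
                del probs[layer][min(probs[layer], key=probs[layer].get)]

print([max(p, key=p.get) for p in probs])  # most likely op per layer
```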