HR-NAS: Searching Efficient High-Resolution Neural Architectures with
Lightweight Transformers
- URL: http://arxiv.org/abs/2106.06560v1
- Date: Fri, 11 Jun 2021 18:11:36 GMT
- Title: HR-NAS: Searching Efficient High-Resolution Neural Architectures with
Lightweight Transformers
- Authors: Mingyu Ding, Xiaochen Lian, Linjie Yang, Peng Wang, Xiaojie Jin, Zhiwu
Lu, Ping Luo
- Abstract summary: High-resolution representations (HR) are essential for dense prediction tasks such as segmentation, detection, and pose estimation.
This work proposes a novel NAS method, called HR-NAS, which is able to find efficient and accurate networks for different tasks.
HR-NAS is capable of achieving state-of-the-art trade-offs between performance and FLOPs for three dense prediction tasks and an image classification task.
- Score: 48.74623838201632
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High-resolution representations (HR) are essential for dense prediction tasks
such as segmentation, detection, and pose estimation. Learning HR
representations is typically ignored in previous Neural Architecture Search
(NAS) methods that focus on image classification. This work proposes a novel
NAS method, called HR-NAS, which is able to find efficient and accurate
networks for different tasks, by effectively encoding multiscale contextual
information while maintaining high-resolution representations. In HR-NAS, we
renovate the NAS search space as well as its searching strategy. To better
encode multiscale image contexts in the search space of HR-NAS, we first
carefully design a lightweight transformer, whose computational complexity can
be dynamically changed with respect to different objective functions and
computation budgets. To maintain high-resolution representations of the learned
networks, HR-NAS adopts a multi-branch architecture that provides convolutional
encoding of multiple feature resolutions, inspired by HRNet. Last, we propose
an efficient fine-grained search strategy to train HR-NAS, which effectively
explores the search space, and finds optimal architectures given various tasks
and computation resources. HR-NAS is capable of achieving state-of-the-art
trade-offs between performance and FLOPs for three dense prediction tasks and
an image classification task, given only small computational budgets. For
example, HR-NAS surpasses SqueezeNAS, which is specially designed for semantic
segmentation, while improving efficiency by 45.9%. Code is available at
https://github.com/dingmyu/HR-NAS
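Below is a minimal, illustrative PyTorch sketch of the two ingredients described in the abstract: a lightweight transformer that attends from every spatial position to a small grid of pooled tokens (so its FLOPs can be reduced by shrinking the token grid or embedding dimension to fit a computation budget), combined with an HRNet-style block of parallel branches at multiple resolutions. This is not the authors' implementation (see the repository above); all module names, parameters, and default values are assumptions for illustration.

```python
# Illustrative sketch only -- module structure, names, and hyperparameters are
# assumptions, not the authors' implementation (see https://github.com/dingmyu/HR-NAS).
import torch
import torch.nn as nn


class LightweightTransformer(nn.Module):
    """Attends from every spatial position to a small grid of pooled tokens.
    Shrinking `token_grid` or `embed_dim` directly reduces FLOPs, which is one
    way such a module's cost could be adapted to a computation budget."""

    def __init__(self, channels: int, embed_dim: int = 32, token_grid: int = 4, num_heads: int = 2):
        super().__init__()
        self.proj_in = nn.Conv2d(channels, embed_dim, kernel_size=1)
        self.pool = nn.AdaptiveAvgPool2d(token_grid)      # token_grid**2 key/value tokens
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.proj_out = nn.Conv2d(embed_dim, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, _, h, w = x.shape
        q = self.proj_in(x).flatten(2).transpose(1, 2)               # (b, h*w, embed_dim)
        kv = self.proj_in(self.pool(x)).flatten(2).transpose(1, 2)   # (b, token_grid**2, embed_dim)
        ctx, _ = self.attn(q, kv, kv)                                # global context per position
        ctx = ctx.transpose(1, 2).reshape(b, -1, h, w)
        return x + self.proj_out(ctx)                                # residual fusion


class MultiResolutionBlock(nn.Module):
    """HRNet-style parallel branches: each resolution keeps a local conv path
    plus a lightweight transformer for global context."""

    def __init__(self, channels_per_branch=(32, 64)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(c, c, kernel_size=3, padding=1),
                nn.BatchNorm2d(c),
                nn.ReLU(inplace=True),
                LightweightTransformer(c),
            )
            for c in channels_per_branch
        )

    def forward(self, feats):
        return [branch(f) for branch, f in zip(self.branches, feats)]


if __name__ == "__main__":
    high_res = torch.randn(1, 32, 64, 64)   # high-resolution branch
    low_res = torch.randn(1, 64, 32, 32)    # low-resolution branch
    outs = MultiResolutionBlock()([high_res, low_res])
    print([tuple(o.shape) for o in outs])   # [(1, 32, 64, 64), (1, 64, 32, 32)]
```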
Related papers
- HyperSegNAS: Bridging One-Shot Neural Architecture Search with 3D
Medical Image Segmentation using HyperNet [51.60655410423093]
We introduce HyperSegNAS to enable one-shot Neural Architecture Search (NAS) for medical image segmentation.
We show that HyperSegNAS yields better performing and more intuitive architectures compared to the previous state-of-the-art (SOTA) segmentation networks.
Our method is evaluated on public datasets from the Medical Segmentation Decathlon (MSD) challenge, and achieves SOTA performance.
arXiv Detail & Related papers (2021-12-20T16:21:09Z)
- FBNetV5: Neural Architecture Search for Multiple Tasks in One Run [28.645664534127516]
We propose FBNetV5, a framework that can search for neural architectures for a variety of vision tasks with much reduced computational cost and human effort.
Specifically, we design 1) a search space that is simple yet inclusive and transferable; 2) a multitask search process that is disentangled from the target tasks' training pipelines; and 3) an algorithm to simultaneously search for architectures for multiple tasks with a computational cost agnostic to the number of tasks.
We evaluate the proposed FBNetV5 targeting three fundamental vision tasks -- image classification, object detection, and semantic segmentation.
arXiv Detail & Related papers (2021-11-19T02:07:34Z)
- NAS-Bench-360: Benchmarking Diverse Tasks for Neural Architecture Search [18.9676056830197]
Most existing neural architecture search (NAS) benchmarks and algorithms prioritize performance on well-studied tasks.
We present NAS-Bench-360, a benchmark suite for evaluating state-of-the-art NAS methods for convolutional neural networks (CNNs).
arXiv Detail & Related papers (2021-10-12T01:13:18Z)
- Generic Neural Architecture Search via Regression [27.78105839644199]
We propose a novel and generic neural architecture search (NAS) framework, termed Generic NAS (GenNAS).
GenNAS does not use task-specific labels but instead adopts regression on a set of manually designed synthetic signal bases for architecture evaluation.
We then propose an automatic task search to optimize the combination of synthetic signals using limited downstream-task-specific labels.
arXiv Detail & Related papers (2021-08-04T08:21:12Z)
- TransNAS-Bench-101: Improving Transferability and Generalizability of
Cross-Task Neural Architecture Search [98.22779489340869]
We propose TransNAS-Bench-101, a benchmark dataset containing network performance across seven vision tasks.
We explore two fundamentally different types of search space: cell-level search space and macro-level search space.
With 7,352 backbones evaluated on seven tasks, 51,464 trained models with detailed training information are provided.
arXiv Detail & Related papers (2021-05-25T12:15:21Z)
- Searching Efficient Model-guided Deep Network for Image Denoising [61.65776576769698]
We present a novel approach that connects model-guided design with NAS (MoD-NAS).
MoD-NAS employs a highly reusable width search strategy and a densely connected search block to automatically select the operations of each layer.
Experimental results on several popular datasets show that MoD-NAS achieves even better PSNR performance than current state-of-the-art methods.
arXiv Detail & Related papers (2021-04-06T14:03:01Z)
- Efficient Model Performance Estimation via Feature Histories [27.008927077173553]
An important step in the task of neural network design is the evaluation of a model's performance.
In this work, we use the evolution history of features of a network during the early stages of training to build a proxy classifier.
We show that our method can be combined with multiple search algorithms to find better solutions to a wide range of tasks.
arXiv Detail & Related papers (2021-03-07T20:41:57Z)
- MS-RANAS: Multi-Scale Resource-Aware Neural Architecture Search [94.80212602202518]
We propose Multi-Scale Resource-Aware Neural Architecture Search (MS-RANAS).
We employ a one-shot architecture search approach to reduce the search cost.
We achieve state-of-the-art results in terms of accuracy-speed trade-off.
arXiv Detail & Related papers (2020-09-29T11:56:01Z)
- MS-NAS: Multi-Scale Neural Architecture Search for Medical Image
Segmentation [16.206524842952636]
This paper presents a Multi-Scale NAS framework featuring a multi-scale search space that spans from the network backbone to cell operations.
On various segmentation datasets, MS-NAS outperforms state-of-the-art methods, achieving 0.6-5.4% mIoU and 0.4-3.5% DSC improvements.
arXiv Detail & Related papers (2020-07-13T02:02:00Z)
- DA-NAS: Data Adapted Pruning for Efficient Neural Architecture Search [76.9225014200746]
Efficient search is a core issue in Neural Architecture Search (NAS).
We present DA-NAS, which can directly search architectures for large-scale target tasks while allowing a large candidate set in a more efficient manner.
It is 2x faster than previous methods while achieving state-of-the-art accuracy of 76.2% under a small FLOPs constraint.
arXiv Detail & Related papers (2020-03-27T17:55:21Z)