TransNAS-Bench-101: Improving Transferability and Generalizability of
Cross-Task Neural Architecture Search
- URL: http://arxiv.org/abs/2105.11871v1
- Date: Tue, 25 May 2021 12:15:21 GMT
- Title: TransNAS-Bench-101: Improving Transferability and Generalizability of
Cross-Task Neural Architecture Search
- Authors: Yawen Duan, Xin Chen, Hang Xu, Zewei Chen, Xiaodan Liang, Tong Zhang,
Zhenguo Li
- Abstract summary: We propose TransNAS-Bench-101, a benchmark dataset containing network performance across seven vision tasks.
We explore two fundamentally different types of search space: cell-level search space and macro-level search space.
With 7,352 backbones evaluated on seven tasks, 51,464 trained models with detailed training information are provided.
- Score: 98.22779489340869
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent breakthroughs of Neural Architecture Search (NAS) extend the field's
research scope towards a broader range of vision tasks and more diversified
search spaces. While existing NAS methods mostly design architectures on a
single task, algorithms that look beyond single-task search are surging to
pursue a more efficient and universal solution across various tasks. Many of
them leverage transfer learning and seek to preserve, reuse, and refine network
design knowledge to achieve higher efficiency in future tasks. However, the
enormous computational cost and experiment complexity of cross-task NAS are
imposing barriers for valuable research in this direction. Existing NAS
benchmarks all focus on one type of vision task, i.e., classification. In this
work, we propose TransNAS-Bench-101, a benchmark dataset containing network
performance across seven tasks, covering classification, regression,
pixel-level prediction, and self-supervised tasks. This diversity provides
opportunities to transfer NAS methods among tasks and allows for more complex
transfer schemes to evolve. We explore two fundamentally different types of
search space: cell-level search space and macro-level search space. With 7,352
backbones evaluated on seven tasks, 51,464 trained models with detailed
training information are provided. With TransNAS-Bench-101, we hope to
encourage the advent of exceptional NAS algorithms that raise cross-task search
efficiency and generalizability to the next level. Our dataset file will be
available at Mindspore, VEGA.
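Conceptually, a tabular benchmark like this acts as a lookup table: each backbone encoding maps to precomputed training and evaluation metrics for every task, so cross-task search strategies can be compared without retraining anything. The released TransNAS-Bench-101 API, field names, and encodings may differ from what is shown here; the following is a minimal, self-contained sketch with made-up architecture strings and illustrative task keys, meant only to show the query-and-transfer workflow such a table enables.

```python
from dataclasses import dataclass
from typing import Dict

# Toy record mirroring what a tabular NAS benchmark stores per backbone:
# an architecture encoding plus a final metric for each task.
@dataclass
class BackboneRecord:
    encoding: str              # placeholder string; not the real encoding format
    metrics: Dict[str, float]  # task key -> validation metric (illustrative keys)

# Miniature stand-in for the benchmark table (the real one has 7,352 backbones).
TABLE = {
    "arch_0": BackboneRecord("cell-0132", {"class_object": 45.1, "segmentsemantic": 24.3}),
    "arch_1": BackboneRecord("cell-2101", {"class_object": 46.7, "segmentsemantic": 25.9}),
    "arch_2": BackboneRecord("macro-4121", {"class_object": 44.0, "segmentsemantic": 26.4}),
}

def best_on_task(task: str) -> str:
    """Return the architecture id with the highest stored metric on `task`."""
    return max(TABLE, key=lambda a: TABLE[a].metrics[task])

def transfer_rank(source: str, target: str, top_k: int = 2) -> float:
    """Cross-task check: take the top-k architectures on the source task and
    report the best target-task metric among them (a zero-cost 'transfer')."""
    top = sorted(TABLE, key=lambda a: TABLE[a].metrics[source], reverse=True)[:top_k]
    return max(TABLE[a].metrics[target] for a in top)

if __name__ == "__main__":
    print("best on classification:", best_on_task("class_object"))
    print("best segmentation metric among top-2 classification archs:",
          transfer_rank("class_object", "segmentsemantic"))
```

Because every metric is precomputed, a search algorithm can be ranked on a source task and immediately re-scored on a target task, which is exactly the kind of controlled cross-task comparison the benchmark is designed to make cheap.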
Related papers
- einspace: Searching for Neural Architectures from Fundamental Operations [28.346238250052455]
We introduce einspace, a search space based on a parameterised probabilistic context-free grammar.
We show that competitive architectures can be obtained by searching from scratch, and we consistently find large improvements when initialising the search with strong baselines.
arXiv Detail & Related papers (2024-05-31T14:25:45Z)
- How Much Is Hidden in the NAS Benchmarks? Few-Shot Adaptation of a NAS Predictor [22.87207410692821]
We borrow from the rich field of meta-learning for few-shot adaptation and study the applicability of those methods to NAS.
Our meta-learning approach not only shows superior (or matching) performance in the cross-validation experiments but also extrapolates successfully to a new search space and tasks.
arXiv Detail & Related papers (2023-11-30T10:51:46Z)
- Efficient Architecture Search for Diverse Tasks [29.83517145790238]
We study neural architecture search (NAS) for efficiently solving diverse problems.
We introduce DASH, a differentiable NAS algorithm that computes the mixture-of-operations using the Fourier diagonalization of convolution.
We evaluate DASH on NAS-Bench-360, a suite of ten tasks designed for NAS benchmarking in diverse domains; a toy sketch of the kernel-aggregation identity behind the mixture-of-operations appears after this list.
arXiv Detail & Related papers (2022-04-15T17:21:27Z)
- NAS-Bench-360: Benchmarking Diverse Tasks for Neural Architecture Search [18.9676056830197]
Most existing neural architecture search (NAS) benchmarks and algorithms prioritize performance on well-studied tasks.
We present NAS-Bench-360, a benchmark suite for evaluating state-of-the-art NAS methods for convolutional neural networks (CNNs).
arXiv Detail & Related papers (2021-10-12T01:13:18Z)
- Understanding and Accelerating Neural Architecture Search with Training-Free and Theory-Grounded Metrics [117.4281417428145]
This work targets designing a principled and unified training-free framework for Neural Architecture Search (NAS).
NAS has been explosively studied to automate the discovery of top-performer neural networks, but suffers from heavy resource consumption and often incurs search bias due to truncated training or approximations.
We present a unified framework to understand and accelerate NAS by disentangling the "TEG" (Trainability, Expressivity, Generalization) characteristics of searched networks.
arXiv Detail & Related papers (2021-08-26T17:52:07Z)
- CATCH: Context-based Meta Reinforcement Learning for Transferrable Architecture Search [102.67142711824748]
CATCH is a novel Context-bAsed meTa reinforcement learning algorithm for transferrable arChitecture searcH.
The combination of meta-learning and RL allows CATCH to efficiently adapt to new tasks while being agnostic to search spaces.
It also handles cross-domain architecture search, identifying competitive networks on ImageNet, COCO, and Cityscapes.
arXiv Detail & Related papers (2020-07-18T09:35:53Z)
- MTL-NAS: Task-Agnostic Neural Architecture Search towards General-Purpose Multi-Task Learning [71.90902837008278]
We propose to incorporate neural architecture search (NAS) into general-purpose multi-task learning (GP-MTL).
In order to adapt to different task combinations, we disentangle the GP-MTL networks into single-task backbones.
We also propose a novel single-shot gradient-based search algorithm that closes the performance gap between the searched architectures and the final evaluation architecture.
arXiv Detail & Related papers (2020-03-31T09:49:14Z)
- NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search [55.12928953187342]
We propose an extension to NAS-Bench-101: NAS-Bench-201 with a different search space, results on multiple datasets, and more diagnostic information.
NAS-Bench-201 has a fixed search space and provides a unified benchmark for almost any up-to-date NAS algorithms.
We provide additional diagnostic information such as fine-grained loss and accuracy, which can inspire new designs of NAS algorithms.
arXiv Detail & Related papers (2020-01-02T05:28:26Z)
- Scalable NAS with Factorizable Architectural Parameters [102.51428615447703]
Neural Architecture Search (NAS) is an emerging topic in machine learning and computer vision.
This paper presents a scalable algorithm by factorizing a large set of candidate operators into smaller subspaces.
With a small increase in search costs and no extra costs in re-training, we find interesting architectures that were not explored before.
arXiv Detail & Related papers (2019-12-31T10:26:56Z)
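One of the related entries above (DASH) builds its mixture-of-operations on a simple identity: by linearity of convolution, a weighted sum of convolutions with kernels of different sizes equals a single convolution with the weighted sum of those kernels, zero-padded to a common size, which can then be computed efficiently (e.g., in the Fourier domain). The snippet below is not the authors' code, only a NumPy check of that identity in 1-D.

```python
import numpy as np

# Sketch of the kernel-aggregation identity behind a mixture-of-operations:
# sum_i w_i * conv(x, k_i) == conv(x, sum_i w_i * pad(k_i)).

rng = np.random.default_rng(0)
x = rng.standard_normal(64)                                  # toy 1-D signal
kernels = [rng.standard_normal(3), rng.standard_normal(5), rng.standard_normal(7)]
w = np.array([0.5, 0.3, 0.2])                                # architecture weights (e.g. a softmax)

def pad_center(k, size):
    """Zero-pad kernel k symmetrically to the given odd size."""
    before = (size - len(k)) // 2
    return np.pad(k, (before, size - len(k) - before))

# Naive mixture: run every candidate convolution, then combine the outputs.
naive = sum(wi * np.convolve(x, ki, mode="same") for wi, ki in zip(w, kernels))

# Aggregated form: combine the kernels first, convolve once.
K = max(len(k) for k in kernels)
combined = sum(wi * pad_center(ki, K) for wi, ki in zip(w, kernels))
fast = np.convolve(x, combined, mode="same")

print(np.allclose(naive, fast))                              # True: one conv replaces the mixture
```

DASH additionally handles multiple dilations and uses FFT-based convolution for large effective kernels; the sketch only verifies the aggregation step it relies on.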
This list is automatically generated from the titles and abstracts of the papers on this site.