DC-NAS: Divide-and-Conquer Neural Architecture Search
- URL: http://arxiv.org/abs/2005.14456v1
- Date: Fri, 29 May 2020 09:02:16 GMT
- Title: DC-NAS: Divide-and-Conquer Neural Architecture Search
- Authors: Yunhe Wang, Yixing Xu, Dacheng Tao
- Abstract summary: We present a divide-and-conquer (DC) approach to effectively and efficiently search deep neural architectures.
We achieve a $75.1\%$ top-1 accuracy on the ImageNet dataset, which is higher than that of state-of-the-art methods using the same search space.
- Score: 108.57785531758076
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most applications demand high-performance deep neural architectures that consume only limited resources. Neural architecture search (NAS) is a way of automatically exploring a given, typically huge, search space for optimal deep neural networks. However, all sub-networks are usually evaluated using the same criterion, namely early stopping on a small proportion of the training dataset, which is an inaccurate and computationally complex approach. In contrast to conventional methods, here we
present a divide-and-conquer (DC) approach to effectively and efficiently
search deep neural architectures. Given an arbitrary search space, we first
extract feature representations of all sub-networks according to changes in
parameters or output features of each layer, and then calculate the similarity
between any two sampled networks based on these representations. k-means clustering is then applied to aggregate similar architectures into the same cluster, and sub-network evaluation is executed separately within each cluster. The best architectures from each cluster are finally merged to obtain the overall optimal neural architecture. Experimental results on several benchmarks illustrate
that DC-NAS can overcome the inaccurate evaluation problem, achieving a
$75.1\%$ top-1 accuracy on the ImageNet dataset, which is higher than that of
state-of-the-art methods using the same search space.
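The pipeline described in the abstract (represent, cluster, evaluate per cluster, merge) can be sketched as follows. This is a minimal illustration rather than the authors' implementation; `sample_architectures`, `extract_representation`, and `evaluate` are hypothetical callables standing in for the paper's components, and `extract_representation` is assumed to return a fixed-size vector per sub-network.

```python
# Minimal sketch of the divide-and-conquer NAS flow (illustrative only).
import numpy as np
from sklearn.cluster import KMeans

def dc_nas_search(sample_architectures, extract_representation, evaluate,
                  num_samples=1000, num_clusters=8):
    archs = sample_architectures(num_samples)
    # 1. Represent each sub-network by per-layer parameter/feature changes.
    feats = np.stack([extract_representation(a) for a in archs])
    # 2. Cluster similar architectures; similarity is implicit in the
    #    Euclidean distance k-means uses on the representations.
    labels = KMeans(n_clusters=num_clusters, n_init=10).fit_predict(feats)
    # 3. Evaluate each cluster separately and keep its best architecture.
    winners = []
    for c in range(num_clusters):
        members = [a for a, lab in zip(archs, labels) if lab == c]
        if members:
            winners.append(max(members, key=evaluate))
    # 4. "Merge": compare the per-cluster winners to pick the overall best.
    return max(winners, key=evaluate)
```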
Related papers
- Neural Architecture Search using Particle Swarm and Ant Colony Optimization [0.0]
OpenNAS, a system that integrates open-source tools for Neural Architecture Search, has been developed for image classification. This paper focuses on training and optimizing CNNs using the Swarm Intelligence (SI) components of OpenNAS.
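A toy particle-swarm search over a continuous architecture encoding illustrates the SI idea; this is a generic PSO sketch, not the OpenNAS API, and `fitness` is a hypothetical function mapping an encoding to validation accuracy.

```python
# Generic particle swarm optimization over architecture encodings in [0, 1]^d.
import numpy as np

def pso_search(fitness, dim=8, n_particles=20, iters=50,
               w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 1, (n_particles, dim))    # positions = encodings
    v = np.zeros_like(x)
    pbest = x.copy()                             # per-particle best position
    pbest_f = np.array([fitness(p) for p in x])
    gbest = pbest[pbest_f.argmax()]              # swarm-wide best position
    for _ in range(iters):
        r1, r2 = rng.uniform(size=x.shape), rng.uniform(size=x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 0, 1)
        f = np.array([fitness(p) for p in x])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmax()]
    return gbest
```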
arXiv Detail & Related papers (2024-03-06T15:23:26Z)
- SimQ-NAS: Simultaneous Quantization Policy and Neural Architecture Search [6.121126813817338]
Recent one-shot Neural Architecture Search algorithms rely on training a hardware-agnostic super-network tailored to a specific task and then extracting efficient sub-networks for different hardware platforms.
We show that by using multi-objective search algorithms paired with lightly trained predictors, we can efficiently search for both the sub-network architecture and the corresponding quantization policy.
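The joint search can be sketched as below; this is a single-objective random-search simplification of the multi-objective setup described above, with hypothetical names (`sample_layers`, `predictor`), not the SimQ-NAS code.

```python
# Random search over (sub-network, per-layer bit-width) pairs, scored by a
# lightly trained predictor instead of full training.
import random

def joint_search(sample_layers, bit_choices, predictor, n_trials=500):
    best, best_score = None, float("-inf")
    for _ in range(n_trials):
        layers = sample_layers()                          # sub-network spec
        policy = [random.choice(bit_choices) for _ in layers]
        score = predictor(layers, policy)                 # cheap proxy score
        if score > best_score:
            best, best_score = (layers, policy), score
    return best
```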
arXiv Detail & Related papers (2023-12-19T22:08:49Z)
- GeNAS: Neural Architecture Search with Better Generalization [14.92869716323226]
Recent neural architecture search (NAS) approaches rely on validation loss or accuracy to find the superior network for the target data.
In this paper, we investigate a new neural architecture search measure for discovering architectures with better generalization.
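One common generalization proxy in this line of work is the flatness of the loss landscape; the sketch below estimates flatness by perturbing weights with small noise. This is a generic illustration assuming PyTorch, not necessarily GeNAS's exact formulation.

```python
# Rough flatness proxy: a flatter minimum shows a smaller loss increase
# under random weight perturbations and is assumed to generalize better.
import copy
import torch

@torch.no_grad()
def flatness_score(model, loss_fn, batch, sigma=0.01, n_samples=8):
    x, y = batch
    base = loss_fn(model(x), y).item()
    deltas = []
    for _ in range(n_samples):
        noisy = copy.deepcopy(model)
        for p in noisy.parameters():
            p.add_(sigma * torch.randn_like(p))   # small Gaussian noise
        deltas.append(loss_fn(noisy(x), y).item() - base)
    return -sum(deltas) / n_samples   # higher score = flatter minimum
```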
arXiv Detail & Related papers (2023-05-15T12:44:54Z)
- OFA$^2$: A Multi-Objective Perspective for the Once-for-All Neural Architecture Search [79.36688444492405]
Once-for-All (OFA) is a Neural Architecture Search (NAS) framework designed to address the problem of searching for efficient architectures for devices with different resource constraints.
We aim to go one step further in the search for efficiency by explicitly conceiving the search stage as a multi-objective optimization problem.
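In a multi-objective view, the search keeps the set of Pareto-non-dominated sub-networks rather than a single winner. The helper below is a generic non-dominated filter under (error, latency); the names are illustrative and this is not the OFA$^2$ code.

```python
# Keep candidates that no other candidate strictly dominates on both
# objectives (lower error and lower latency are both better).
def pareto_front(candidates):
    """candidates: list of (arch, error, latency) tuples."""
    front = []
    for arch, err, lat in candidates:
        dominated = any(e2 <= err and l2 <= lat and (e2 < err or l2 < lat)
                        for _, e2, l2 in candidates)
        if not dominated:
            front.append((arch, err, lat))
    return front
```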
arXiv Detail & Related papers (2023-03-23T21:30:29Z)
- Efficient Search of Multiple Neural Architectures with Different Complexities via Importance Sampling [3.759936323189417]
This study focuses on architecture-complexity-aware one-shot NAS that optimizes an objective function composed of a weighted sum of two metrics. The proposed method is applied to the architecture search of convolutional neural networks on the CIFAR-10 and ImageNet datasets.
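In standard form, such a complexity-aware objective can be written as follows; this is a common formulation, with symbols chosen for illustration rather than taken from the paper:

$f(a; \lambda) = \lambda \, \mathcal{L}_{\mathrm{val}}(a) + (1 - \lambda) \, C(a), \quad \lambda \in [0, 1]$

where $\mathcal{L}_{\mathrm{val}}(a)$ is the validation loss of architecture $a$ and $C(a)$ is a normalized complexity metric such as FLOPs or parameter count; sweeping $\lambda$ trades accuracy against complexity.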
arXiv Detail & Related papers (2022-07-21T07:06:03Z)
- Pruning-as-Search: Efficient Neural Architecture Search via Channel Pruning and Structural Reparameterization [50.50023451369742]
Pruning-as-Search (PaS) is an end-to-end channel pruning method that automatically and efficiently searches for the desired sub-network.
Our proposed architecture outperforms prior art by around $1.0\%$ top-1 accuracy on the ImageNet-1000 classification task.
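As a simplified illustration of channel pruning, the sketch below ranks channels by the L1 norm of their filters and keeps the top fraction. PaS itself learns pruning decisions end-to-end rather than using a fixed magnitude heuristic; this generic version assumes PyTorch.

```python
# Magnitude-based channel pruning: keep the channels whose filters have the
# largest L1 norm (a simplified stand-in for learned pruning decisions).
import torch

def prune_conv_channels(conv_weight: torch.Tensor, keep_ratio: float = 0.5):
    """conv_weight: (out_channels, in_channels, kH, kW)."""
    importance = conv_weight.abs().sum(dim=(1, 2, 3))  # L1 norm per channel
    k = max(1, int(keep_ratio * conv_weight.shape[0]))
    keep = torch.topk(importance, k).indices.sort().values
    return conv_weight[keep], keep   # pruned weight + surviving indices
```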
arXiv Detail & Related papers (2022-06-02T17:58:54Z)
- Generalizing Few-Shot NAS with Gradient Matching [165.5690495295074]
One-Shot methods train one supernet to approximate the performance of every architecture in the search space via weight-sharing.
Few-Shot NAS reduces the level of weight-sharing by splitting the One-Shot supernet into multiple separated sub-supernets.
The proposed gradient-matching criterion for splitting the supernet significantly outperforms its Few-Shot counterparts while surpassing previous comparable methods in terms of the accuracy of derived architectures.
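The intuition behind gradient matching can be sketched as below: if two candidate operations on a supernet edge send dissimilar gradients to the shared weights, weight-sharing between them is harmful and the supernet should be split there. This is a hypothetical helper assuming PyTorch, not the paper's code.

```python
# Cosine similarity of the shared weights' gradients under two candidate
# operations; a low score suggests conflicting training signals and thus a
# good place to split the supernet.
import torch

def gradient_matching_score(grad_a: torch.Tensor, grad_b: torch.Tensor):
    return torch.nn.functional.cosine_similarity(
        grad_a.flatten(), grad_b.flatten(), dim=0)
```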
arXiv Detail & Related papers (2022-03-29T03:06:16Z)
- Trilevel Neural Architecture Search for Efficient Single Image Super-Resolution [127.92235484598811]
This paper proposes a trilevel neural architecture search (NAS) method for efficient single image super-resolution (SR).
To model the discrete search space, we apply a new continuous relaxation that builds a hierarchical mixture of network paths, cell operations, and kernel widths.
An efficient search algorithm is proposed to perform optimization in a hierarchical supernet manner.
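The standard instance of continuous relaxation is the DARTS-style softmax mixture sketched below, where a discrete choice among operations becomes a differentiable weighted sum; the trilevel method applies the same idea at the path, cell-operation, and kernel-width levels, though its exact formulation may differ.

```python
# DARTS-style continuous relaxation: the edge output is a softmax-weighted
# mixture of all candidate operations, making the choice differentiable.
import torch
import torch.nn as nn

class MixedOp(nn.Module):
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)                     # candidate ops
        self.alpha = nn.Parameter(torch.zeros(len(ops)))  # architecture params

    def forward(self, x):
        w = torch.softmax(self.alpha, dim=0)   # relaxed discrete choice
        return sum(wi * op(x) for wi, op in zip(w, self.ops))
```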
arXiv Detail & Related papers (2021-01-17T12:19:49Z)
- Hierarchical Neural Architecture Search for Deep Stereo Matching [131.94481111956853]
We propose the first end-to-end hierarchical NAS framework for deep stereo matching.
Our framework incorporates task-specific human knowledge into the neural architecture search framework.
It ranks first in accuracy on the KITTI stereo 2012 and 2015 benchmarks and the Middlebury benchmark, as well as first on the SceneFlow dataset.
arXiv Detail & Related papers (2020-10-26T11:57:37Z)
- Neural Inheritance Relation Guided One-Shot Layer Assignment Search [44.82474044430184]
We investigate the impact of different layer assignments on network performance by building an architecture dataset of layer assignments on CIFAR-100.
We find a neural inheritance relation among the networks with different layer assignments, that is, the optimal layer assignments for deeper networks always inherit from those for shallow networks.
Inspired by this neural inheritance relation, we propose an efficient one-shot layer assignment search approach via inherited sampling.
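The inheritance idea can be sketched as a greedy grow-by-one search: the assignment for a deeper budget is searched only among extensions of the shallower optimum, which the observed inheritance relation justifies. Names here are illustrative, and `evaluate` is a hypothetical scorer of a per-stage layer assignment.

```python
# Inherited sampling sketch: grow the optimal layer assignment one layer at
# a time, always extending the best assignment found at the previous depth.
def inherited_search(evaluate, num_stages, max_total_layers):
    best = [1] * num_stages                  # start: one layer per stage
    for _ in range(sum(best), max_total_layers):
        candidates = []
        for s in range(num_stages):          # try adding a layer to each stage
            cand = best.copy()
            cand[s] += 1
            candidates.append(cand)
        best = max(candidates, key=evaluate) # keep the best extension
    return best
```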
arXiv Detail & Related papers (2020-02-28T07:40:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences of its use.