ShiftNAS: Improving One-shot NAS via Probability Shift
- URL: http://arxiv.org/abs/2307.08300v1
- Date: Mon, 17 Jul 2023 07:53:23 GMT
- Title: ShiftNAS: Improving One-shot NAS via Probability Shift
- Authors: Mingyang Zhang, Xinyi Yu, Haodong Zhao, Linlin Ou
- Abstract summary: We propose ShiftNAS, a method that can adjust the sampling probability based on the complexity of subnets.
We evaluate our approach on multiple visual network models, including convolutional neural networks (CNNs) and vision transformers (ViTs).
Experimental results on ImageNet show that ShiftNAS can improve the performance of one-shot NAS without additional resource consumption.
- Score: 1.3537414663819973
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One-shot neural architecture search (one-shot NAS) has been proposed as a
time-efficient approach to obtain optimal subnet architectures and weights
under different complexity cases by training only once. However, the subnet
performance obtained by weight sharing is often inferior to the performance
achieved by retraining. In this paper, we investigate the performance gap and
attribute it to the use of uniform sampling, which is a common approach in
supernet training. Uniform sampling concentrates training resources on subnets
with intermediate computational resources, which are sampled with high
probability. However, subnets in different complexity regions require
different training strategies to reach their optimal performance. To address the
problem of uniform sampling, we propose ShiftNAS, a method that can adjust the
sampling probability based on the complexity of subnets. We achieve this by
evaluating the performance variation of subnets with different complexity and
designing an architecture generator that can accurately and efficiently provide
subnets with the desired complexity. Both the sampling probability and the
architecture generator can be trained end-to-end in a gradient-based manner.
With ShiftNAS, we can directly obtain the optimal model architecture and
parameters for a given computational complexity. We evaluate our approach on
multiple visual network models, including convolutional neural networks (CNNs)
and vision transformers (ViTs), and demonstrate that ShiftNAS is
model-agnostic. Experimental results on ImageNet show that ShiftNAS can improve
the performance of one-shot NAS without additional resource consumption. Source
code is available at https://github.com/bestfleer/ShiftNAS.
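As a rough illustration of the sampling issue described in the abstract, the Python sketch below (hypothetical width choices and complexity proxy, not the authors' implementation) contrasts uniform per-layer sampling, whose subnet complexities pile up around the middle of the range, with a complexity-targeted sampler that first draws a desired complexity and then builds a subnet to match it, which is the role ShiftNAS assigns to its architecture generator.

```python
import numpy as np

# Minimal sketch (not the authors' code): contrasts uniform per-layer sampling
# with complexity-targeted sampling in a toy supernet. The width choices, the
# FLOPs proxy, and the targeting heuristic are all hypothetical.
rng = np.random.default_rng(0)
width_choices = np.array([0.25, 0.5, 0.75, 1.0])   # per-layer width multipliers
num_layers, num_draws = 20, 20_000

def complexity(widths):
    """Crude complexity proxy: mean width multiplier across layers."""
    return float(np.mean(widths))

# (a) Uniform sampling: each layer's width is drawn independently, so the total
#     complexity concentrates around the middle of the achievable range.
uniform = rng.choice(width_choices, size=(num_draws, num_layers))
uniform_cplx = uniform.mean(axis=1)

# (b) Complexity-targeted sampling: first draw a target complexity from an
#     adjustable distribution (here simply uniform over the full range), then
#     build a subnet whose complexity roughly matches it.
def subnet_for_target(target):
    widths = np.full(num_layers, width_choices[0])
    for i in rng.permutation(num_layers):        # greedily widen layers toward the target
        for w in width_choices[1:]:
            trial = widths.copy()
            trial[i] = w
            if complexity(trial) <= target:
                widths[i] = w
    return widths

targets = rng.uniform(width_choices[0], width_choices[-1], size=2_000)
targeted_cplx = np.array([complexity(subnet_for_target(t)) for t in targets])

for name, c in [("uniform", uniform_cplx), ("targeted", targeted_cplx)]:
    print(f"{name:9s} complexity: 5th pct {np.percentile(c, 5):.2f}, "
          f"95th pct {np.percentile(c, 95):.2f}")
```

Under uniform sampling the subnet complexities stay clustered near the middle of the range, while the targeted sampler spreads training across low- and high-complexity subnets as well; ShiftNAS goes further by learning how to shape the target distribution end-to-end rather than keeping it fixed.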
Related papers
- Neural Architecture Search using Particle Swarm and Ant Colony Optimization [0.0]
A system integrating open-source tools for Neural Architecture Search (OpenNAS) has been developed for image classification.
This paper focuses on training and optimizing CNNs using the Swarm Intelligence (SI) components of OpenNAS.
arXiv Detail & Related papers (2024-03-06T15:23:26Z)
- DiffusionNAG: Predictor-guided Neural Architecture Generation with Diffusion Models [56.584561770857306]
We propose a novel conditional Neural Architecture Generation (NAG) framework based on diffusion models, dubbed DiffusionNAG.
Specifically, we consider the neural architectures as directed graphs and propose a graph diffusion model for generating them.
We validate the effectiveness of DiffusionNAG through extensive experiments in two predictor-based NAS scenarios: Transferable NAS and Bayesian Optimization (BO)-based NAS.
When integrated into a BO-based algorithm, DiffusionNAG outperforms existing BO-based NAS approaches, particularly in the large MobileNetV3 search space on the ImageNet 1K dataset.
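As a rough sketch of the graph view mentioned above (made-up operations and corruption step, not the DiffusionNAG code), an architecture can be encoded as a DAG with one-hot node operations and an upper-triangular adjacency matrix; a graph diffusion model would repeatedly corrupt such graphs and learn to reverse the corruption.

```python
import numpy as np

# Minimal sketch (hypothetical operations and noising step): a cell architecture
# encoded as a directed acyclic graph, the object a graph diffusion model would
# corrupt in the forward process and denoise in the reverse process.
ops = ["input", "conv3x3", "conv1x1", "maxpool", "output"]
num_nodes = len(ops)

# One-hot node features: node i performs operation ops[i].
node_features = np.eye(num_nodes)

# Upper-triangular adjacency (edges i -> j only for i < j) keeps the graph acyclic.
adjacency = np.array([
    [0, 1, 1, 0, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0],
])

def corrupt(adj, flip_prob, rng):
    """One toy forward-diffusion step: flip each possible edge with small probability."""
    mask = rng.random(adj.shape) < flip_prob
    noisy = np.where(mask, 1 - adj, adj)
    return np.triu(noisy, k=1)  # re-impose the DAG constraint

rng = np.random.default_rng(0)
print("noisy adjacency:\n", corrupt(adjacency, flip_prob=0.1, rng=rng))
```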
arXiv Detail & Related papers (2023-05-26T13:58:18Z)
- Pi-NAS: Improving Neural Architecture Search by Reducing Supernet Training Consistency Shift [128.32670289503025]
Recently proposed neural architecture search (NAS) methods co-train billions of architectures in a supernet and estimate their potential accuracy.
However, the ranking correlation between the architectures' predicted accuracy and their actual capability is unreliable, which is the dilemma of existing NAS methods.
We attribute this ranking correlation problem to the supernet training consistency shift, including feature shift and parameter shift.
We address these two shifts simultaneously using a nontrivial supernet-Pi model, called Pi-NAS.
arXiv Detail & Related papers (2021-08-22T09:08:48Z)
- Task-Adaptive Neural Network Retrieval with Meta-Contrastive Learning [34.27089256930098]
We propose a novel neural network retrieval method, which retrieves the optimal pre-trained network for a given task.
We train this framework by meta-learning a cross-modal latent space with contrastive loss, to maximize the similarity between a dataset and a network.
We validate the efficacy of our method on ten real-world datasets, against existing NAS baselines.
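A minimal sketch of the cross-modal contrastive idea (random stand-in embeddings, not the paper's encoders): matched dataset-network pairs act as positives and all other networks in the batch act as negatives.

```python
import numpy as np

# Minimal sketch (hypothetical encoders and shapes): an InfoNCE-style
# contrastive objective that pulls a dataset embedding toward the embedding of
# its matched network and pushes it away from the other networks in the batch.
rng = np.random.default_rng(0)
batch, dim = 8, 32
dataset_emb = rng.normal(size=(batch, dim))   # stand-in for a dataset encoder output
network_emb = rng.normal(size=(batch, dim))   # stand-in for a network encoder output

def contrastive_loss(a, b, temperature=0.1):
    """InfoNCE over matched (dataset, network) pairs along the diagonal."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = a @ b.T / temperature                          # cosine similarities
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))                   # matched pairs are positives

print(f"contrastive loss on random embeddings: {contrastive_loss(dataset_emb, network_emb):.3f}")
```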
arXiv Detail & Related papers (2021-03-02T06:30:51Z)
- Trilevel Neural Architecture Search for Efficient Single Image Super-Resolution [127.92235484598811]
This paper proposes a trilevel neural architecture search (NAS) method for efficient single image super-resolution (SR).
To model the discrete search space, we apply a new continuous relaxation to build a hierarchical mixture of network paths, cell operations, and kernel widths.
An efficient search algorithm is proposed to perform optimization in a hierarchical supernet manner.
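The continuous relaxation can be pictured with a DARTS-style mixed operation; the sketch below (hypothetical values, not the paper's code) relaxes a single kernel-width choice, and the same construction is applied hierarchically to network paths, cells, and kernels.

```python
import numpy as np

# Minimal sketch (hypothetical values): a discrete choice among kernel widths is
# replaced by a softmax-weighted mixture of the candidate operations' outputs,
# so the architecture logits become differentiable and can be learned by gradient.
def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

kernel_widths = [3, 5, 7]
alpha = np.array([0.2, 1.5, -0.3])            # learnable architecture logits
weights = softmax(alpha)

# Outputs of the candidate operations on the same input (stand-ins here).
candidate_outputs = {3: np.array([1.0, 2.0]),
                     5: np.array([0.5, 1.5]),
                     7: np.array([2.0, 0.0])}

# The relaxed "mixed" operation: a weighted sum over all candidates.
mixed = sum(w * candidate_outputs[k] for w, k in zip(weights, kernel_widths))
print("mixture weights:", np.round(weights, 3), "mixed output:", np.round(mixed, 3))
```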
arXiv Detail & Related papers (2021-01-17T12:19:49Z)
- DC-NAS: Divide-and-Conquer Neural Architecture Search [108.57785531758076]
We present a divide-and-conquer (DC) approach to effectively and efficiently search deep neural architectures.
We achieve a 75.1% top-1 accuracy on the ImageNet dataset, which is higher than that of state-of-the-art methods using the same search space.
arXiv Detail & Related papers (2020-05-29T09:02:16Z)
- Neural Architecture Transfer [20.86857986471351]
Existing approaches require one complete search for each deployment specification of hardware or objective.
We propose Neural Architecture Transfer (NAT) to overcome this limitation.
NAT is designed to efficiently generate task-specific custom models that are competitive under multiple conflicting objectives.
arXiv Detail & Related papers (2020-05-12T15:30:36Z)
- MTL-NAS: Task-Agnostic Neural Architecture Search towards General-Purpose Multi-Task Learning [71.90902837008278]
We propose to incorporate neural architecture search (NAS) into general-purpose multi-task learning (GP-MTL).
In order to adapt to different task combinations, we disentangle the GP-MTL networks into single-task backbones.
We also propose a novel single-shot gradient-based search algorithm that closes the performance gap between the searched architectures and the final evaluation architecture.
arXiv Detail & Related papers (2020-03-31T09:49:14Z)
- DDPNAS: Efficient Neural Architecture Search via Dynamic Distribution Pruning [135.27931587381596]
We propose an efficient and unified NAS framework termed DDPNAS via dynamic distribution pruning.
In particular, we first sample architectures from a joint categorical distribution. Then the search space is dynamically pruned and its distribution is updated every few epochs.
With the proposed efficient network generation method, we directly obtain the optimal neural architectures on given constraints.
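A minimal sketch of dynamic distribution pruning under assumed rewards and an assumed update rule (not the DDPNAS implementation): keep a categorical distribution over candidate operations, sharpen it with observed performance, and drop the least probable operation every few epochs.

```python
import numpy as np

# Minimal sketch (hypothetical rewards and update rule): a categorical
# distribution over candidate operations is sampled, updated from observed
# performance, and periodically pruned; one layer of the joint distribution shown.
rng = np.random.default_rng(0)
ops = ["conv3x3", "conv5x5", "maxpool", "skip", "sep3x3"]
probs = np.full(len(ops), 1.0 / len(ops))
true_quality = np.array([0.9, 0.6, 0.3, 0.4, 0.8])  # hidden ground truth for the demo

for epoch in range(1, 16):
    # Sample a few architectures (operations) and observe noisy rewards.
    idx = rng.choice(len(ops), size=8, p=probs)
    rewards = true_quality[idx] + rng.normal(scale=0.05, size=8)

    # Multiplicative update toward operations that performed well.
    scores = np.zeros(len(ops))
    for i, r in zip(idx, rewards):
        scores[i] += r
    probs = probs * np.exp(0.5 * scores)
    probs /= probs.sum()

    # Every few epochs, prune the lowest-probability operation that remains.
    if epoch % 5 == 0 and np.count_nonzero(probs) > 1:
        alive = np.flatnonzero(probs)
        probs[alive[np.argmin(probs[alive])]] = 0.0
        probs /= probs.sum()
        print(f"epoch {epoch}: remaining ops:", [ops[i] for i in np.flatnonzero(probs)])

print("final choice:", ops[int(np.argmax(probs))])
```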
arXiv Detail & Related papers (2019-05-28T06:35:52Z)