NAS-Navigator: Visual Steering for Explainable One-Shot Deep Neural
Network Synthesis
- URL: http://arxiv.org/abs/2009.13008v3
- Date: Sat, 6 Aug 2022 02:22:25 GMT
- Title: NAS-Navigator: Visual Steering for Explainable One-Shot Deep Neural
Network Synthesis
- Authors: Anjul Tyagi, Cong Xie, Klaus Mueller
- Abstract summary: We present a framework that allows analysts to effectively build the solution sub-graph space and guide the network search by injecting their domain knowledge.
Applied iteratively, this technique allows analysts to converge to the best-performing neural network architecture for a given application.
- Score: 53.106414896248246
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advancements in the area of deep learning have shown the effectiveness
of very large neural networks in several applications. However, as these deep
neural networks continue to grow in size, it becomes more and more difficult to
configure their many parameters to obtain good results. Presently, analysts
must experiment with many different configurations and parameter settings,
which is labor-intensive and time-consuming. On the other hand, the capacity of
fully automated techniques for neural network architecture search is limited
without the domain knowledge of human experts. To address this problem, we
formulate the task of neural network architecture optimization as a graph-space
exploration, based on the one-shot architecture search technique. In this
approach, a super-graph of all candidate architectures is trained in one shot
and the optimal neural network is identified as a sub-graph. In this paper, we
present a framework that allows analysts to effectively build the solution
sub-graph space and guide the network search by injecting their domain
knowledge. Starting with the network architecture space composed of basic
neural network components, analysts are empowered to effectively select the
most promising components via our one-shot search scheme. Applying this
technique iteratively allows analysts to converge to the best-performing
neural network architecture for a given application. During the
exploration, analysts can use their domain knowledge aided by cues provided
from a scatterplot visualization of the search space to edit different
components and guide the search for faster convergence. We designed our
interface in collaboration with several deep learning researchers and
evaluated its effectiveness with a user study and two case studies.
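To make the one-shot formulation concrete, here is a minimal sketch assuming a PyTorch-style chain of mixed operations; MixedOp, SuperGraph, extract_subgraph, and score_op are illustrative names, not the authors' implementation:

```python
import random
import torch
import torch.nn as nn

class MixedOp(nn.Module):
    """One edge of the super-graph: holds every candidate operation and is
    trained with single-path sampling, so all candidates share one training
    run (the one-shot idea)."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.Identity(),  # skip-connection candidate
        ])

    def forward(self, x):
        return self.ops[random.randrange(len(self.ops))](x)

class SuperGraph(nn.Module):
    """A chain of mixed edges; a concrete architecture is one op per edge."""
    def __init__(self, channels=16, depth=4, num_classes=10):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, padding=1)
        self.edges = nn.ModuleList([MixedOp(channels) for _ in range(depth)])
        self.head = nn.Linear(channels, num_classes)

    def forward(self, x):
        x = self.stem(x)
        for edge in self.edges:
            x = edge(x)
        return self.head(x.mean(dim=(2, 3)))  # global average pooling

def extract_subgraph(supernet, score_op):
    """Identify the optimal sub-graph: per edge, keep the candidate that a
    placeholder scoring function (e.g., validation accuracy with the shared
    weights) rates highest."""
    return [max(range(len(edge.ops)), key=lambda i: score_op(edge, i))
            for edge in supernet.edges]
```

In the framework described above, the analyst's domain knowledge would steer which candidate operations remain in each edge's list between such one-shot rounds, rather than relying on the scoring function alone.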
Related papers
- OFA$^2$: A Multi-Objective Perspective for the Once-for-All Neural
Architecture Search [79.36688444492405]
Once-for-All (OFA) is a Neural Architecture Search (NAS) framework designed to address the problem of searching for efficient architectures for devices with different resource constraints.
We go one step further in the search for efficiency by explicitly conceiving the search stage as a multi-objective optimization problem.
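In a multi-objective search stage, a candidate survives only if no other architecture beats it on every objective at once. A minimal Pareto-front filter over hypothetical (accuracy, latency) pairs, purely illustrative:

```python
def pareto_front(candidates):
    """Keep (accuracy, latency) pairs not dominated by any other candidate:
    no one is at least as accurate AND at least as fast while being strictly
    better in one of the two objectives."""
    front = []
    for i, (acc_i, lat_i) in enumerate(candidates):
        dominated = any(
            acc_j >= acc_i and lat_j <= lat_i and (acc_j > acc_i or lat_j < lat_i)
            for j, (acc_j, lat_j) in enumerate(candidates)
            if j != i
        )
        if not dominated:
            front.append((acc_i, lat_i))
    return front

# Example: (0.92, 30ms) dominates (0.90, 35ms), which is dropped.
print(pareto_front([(0.92, 30.0), (0.90, 35.0), (0.95, 80.0)]))
```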
arXiv Detail & Related papers (2023-03-23T21:30:29Z)
- DQNAS: Neural Architecture Search using Reinforcement Learning [6.33280703577189]
Convolutional Neural Networks have been used in a variety of image-related applications.
In this paper, we propose an automated Neural Architecture Search framework, guided by the principles of Reinforcement Learning.
arXiv Detail & Related papers (2023-01-17T04:01:47Z)
- SuperNet in Neural Architecture Search: A Taxonomic Survey [14.037182039950505]
This survey focuses on supernet optimization, which uses weight sharing to build a single neural network that assembles all candidate architectures as its sub-models.
We organize supernet approaches as solutions to the common challenges found in the literature: data-side optimization, alleviating poor rank correlation, and transferable NAS for a range of deployment scenarios.
arXiv Detail & Related papers (2022-04-08T08:29:52Z)
- Neural Architecture Search for Dense Prediction Tasks in Computer Vision [74.9839082859151]
Deep learning has led to a rising demand for neural network architecture engineering.
Neural architecture search (NAS) aims to design neural network architectures automatically, in a data-driven manner rather than manually.
NAS has become applicable to a much wider range of problems in computer vision.
arXiv Detail & Related papers (2022-02-15T08:06:50Z)
- M-FasterSeg: An Efficient Semantic Segmentation Network Based on Neural Architecture Search [0.0]
This paper proposes an improved semantic segmentation network structure based on deep learning.
First, neural architecture search (NAS) is used to find a semantic segmentation network with multiple resolution branches.
During the search, a self-attention module is combined with the candidate structures to refine them, and the networks found for the different branches are then combined into a fast semantic segmentation network.
arXiv Detail & Related papers (2021-12-15T06:46:55Z)
- Improving the sample-efficiency of neural architecture search with reinforcement learning [0.0]
In this work, we would like to contribute to the area of Automated Machine Learning (AutoML).
Our focus is on one of the most promising research directions, reinforcement learning.
The validation accuracies of the child networks serve as a reward signal for training the controller.
We propose replacing the controller's training algorithm with a more modern and complex one, PPO, which has been shown to be faster and more stable in other environments.
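To sketch this loop: the controller samples a child architecture and is rewarded with its validation accuracy. The sketch below uses plain REINFORCE for brevity rather than the PPO the paper proposes; the search-space sizes and the validate callable are placeholders:

```python
import torch
import torch.nn as nn

NUM_LAYERS, NUM_OPS = 4, 3  # hypothetical search space: one op choice per layer

# A minimal "controller": independent categorical logits per layer.
logits = nn.Parameter(torch.zeros(NUM_LAYERS, NUM_OPS))
optimizer = torch.optim.Adam([logits], lr=0.05)

def controller_step(validate):
    """One policy-gradient (REINFORCE) update; `validate` is a placeholder
    that trains/evaluates the child network and returns its validation
    accuracy, which serves as the reward signal."""
    dist = torch.distributions.Categorical(logits=logits)
    actions = dist.sample()                        # sampled architecture
    reward = validate(actions)                     # child validation accuracy
    loss = -dist.log_prob(actions).sum() * reward  # maximize expected reward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return actions, reward
```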
arXiv Detail & Related papers (2021-10-13T14:30:09Z)
- Efficient Neural Architecture Search with Performance Prediction [0.0]
We use neural architecture search to find the best network architecture for the task at hand.
Existing NAS algorithms generally evaluate the fitness of a new architecture by fully training it from scratch.
An end-to-end offline performance predictor is proposed to accelerate the evaluation of sampled architectures.
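A hypothetical predictor in this spirit: a small regressor that maps a fixed-length architecture encoding to a predicted validation accuracy, trained offline on (architecture, accuracy) pairs so that sampled architectures can be scored without full training. All names and sizes here are illustrative:

```python
import torch
import torch.nn as nn

class AccuracyPredictor(nn.Module):
    """Maps an architecture encoding (e.g., one-hot op choices per layer,
    flattened) to a predicted validation accuracy in [0, 1]."""
    def __init__(self, encoding_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(encoding_dim, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, encoding):
        return self.net(encoding).squeeze(-1)

def fit_predictor(predictor, encodings, accuracies, epochs=200):
    """Offline training on previously gathered (encoding, accuracy) pairs."""
    opt = torch.optim.Adam(predictor.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(predictor(encodings), accuracies).backward()
        opt.step()
    return predictor
```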
arXiv Detail & Related papers (2021-08-04T05:44:16Z)
- Joint Learning of Neural Transfer and Architecture Adaptation for Image Recognition [77.95361323613147]
Current state-of-the-art visual recognition systems rely on pretraining a neural network on a large-scale dataset and finetuning the network weights on a smaller dataset.
In this work, we show that dynamically adapting the network architecture to each domain task, together with weight finetuning, improves both efficiency and effectiveness.
Our method can be easily generalized to an unsupervised paradigm by replacing supernet training with self-supervised learning in the source domain tasks and performing linear evaluation in the downstream tasks.
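The "linear evaluation" mentioned here is a standard protocol: freeze the pretrained backbone and fit only a linear head on the downstream task. A generic sketch, with backbone, feature_dim, and loader as placeholders:

```python
import torch
import torch.nn as nn

def linear_evaluation(backbone, feature_dim, num_classes, loader, epochs=5):
    """Freeze the backbone's weights and train only a linear classifier
    on top of its features."""
    for p in backbone.parameters():
        p.requires_grad = False
    head = nn.Linear(feature_dim, num_classes)
    opt = torch.optim.SGD(head.parameters(), lr=0.1, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():
                feats = backbone(x)  # frozen feature extractor
            loss = loss_fn(head(feats), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head
```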
arXiv Detail & Related papers (2021-03-31T08:15:17Z)
- Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks [50.684661759340145]
Firefly neural architecture descent is a general framework for progressively and dynamically growing neural networks.
We show that firefly descent can flexibly grow networks both wider and deeper, and can be applied to learn accurate but resource-efficient neural architectures.
In particular, it learns networks that are smaller in size but have higher average accuracy than those learned by the state-of-the-art methods.
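To make "growing wider" concrete: one common way to widen a network without disturbing its current function is to add new hidden units with near-zero weights, which later gradient steps can recruit. An illustrative sketch for a pair of linear layers, not the authors' firefly algorithm itself:

```python
import torch
import torch.nn as nn

def widen_hidden(layer_in: nn.Linear, layer_out: nn.Linear, extra: int, eps=1e-3):
    """Grow the hidden layer between `layer_in` and `layer_out` by `extra`
    units whose weights are scaled near zero, so the network's output is
    (almost) unchanged at the moment of growth."""
    new_in = nn.Linear(layer_in.in_features, layer_in.out_features + extra)
    new_out = nn.Linear(layer_out.in_features + extra, layer_out.out_features)
    with torch.no_grad():
        # Copy the old units verbatim.
        new_in.weight[: layer_in.out_features] = layer_in.weight
        new_in.bias[: layer_in.out_features] = layer_in.bias
        new_out.weight[:, : layer_out.in_features] = layer_out.weight
        new_out.bias.copy_(layer_out.bias)
        # Shrink the new units so they barely perturb the function.
        new_in.weight[layer_in.out_features:].mul_(eps)
        new_in.bias[layer_in.out_features:].zero_()
        new_out.weight[:, layer_out.in_features:].mul_(eps)
    return new_in, new_out
```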
arXiv Detail & Related papers (2021-02-17T04:47:18Z)
- MS-RANAS: Multi-Scale Resource-Aware Neural Architecture Search [94.80212602202518]
We propose Multi-Scale Resource-Aware Neural Architecture Search (MS-RANAS).
We employ a one-shot architecture search approach in order to obtain a reduced search cost.
We achieve state-of-the-art results in terms of accuracy-speed trade-off.
arXiv Detail & Related papers (2020-09-29T11:56:01Z)