RATs-NAS: Redirection of Adjacent Trails on GCN for Neural Architecture Search
- URL: http://arxiv.org/abs/2305.04206v2
- Date: Tue, 9 May 2023 01:12:25 GMT
- Title: RATs-NAS: Redirection of Adjacent Trails on GCN for Neural Architecture Search
- Authors: Yu-Ming Zhang, Jun-Wei Hsieh, Chun-Chieh Lee, Kuo-Chin Fan
- Abstract summary: We propose the Redirected Adjacent Trails NAS (RATs-NAS) to quickly search for the desired neural network architecture.
RATs-NAS consists of two components: the Redirected Adjacent Trails GCN (RATs-GCN) and the Predictor-based Search Space Sampling (P3S) module.
- Score: 6.117917355232904
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Various hand-designed CNN architectures have been developed, such as VGG,
ResNet, and DenseNet, and achieve state-of-the-art (SoTA) performance on different
tasks. Neural Architecture Search (NAS) now focuses on automatically finding the
best CNN architecture for such tasks. However, verifying a searched architecture is
very time-consuming, which makes predictor-based methods an essential branch of NAS.
Two commonly used techniques for building predictors are graph convolutional
networks (GCNs) and multilayer perceptrons (MLPs). In this paper, we consider the
difference between GCNs and MLPs on adjacent operation trails and propose the
Redirected Adjacent Trails NAS (RATs-NAS) to quickly search for the desired neural
network architecture. RATs-NAS consists of two components: the Redirected Adjacent
Trails GCN (RATs-GCN) and the Predictor-based Search Space Sampling (P3S) module.
RATs-GCN can change trails and their strengths to search for a better neural
network architecture. P3S can rapidly focus on tighter FLOPs intervals in the
search space. Based on our observations of cell-based NAS, we believe that
architectures with similar FLOPs will perform similarly. Finally, RATs-NAS,
consisting of RATs-GCN and P3S, beats WeakNAS, Arch-Graph, and other methods by a
significant margin on three sub-datasets of NAS-Bench-201.
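For intuition only, here is a minimal, hypothetical sketch in PyTorch of what a predictor with redirected adjacent trails and a FLOPs-interval sampling step could look like. It is not the authors' implementation: the layer widths, the sigmoid-gated learnable adjacency offset, the dict-based candidate format (tensors under "adj" and "ops"), and the top-k FLOPs-interval rule are all illustrative assumptions; `flops_of` is a user-supplied lookup.

```python
import torch
import torch.nn as nn


class RedirectedGCNLayer(nn.Module):
    """GCN-style layer whose adjacency (the 'trails' between adjacent
    operations) is reweighted by a learnable offset, so the predictor can
    strengthen, weaken, or redirect connections instead of relying on the
    fixed cell adjacency alone (illustrative assumption)."""

    def __init__(self, num_nodes: int, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        # Learnable offset added to the fixed 0/1 adjacency of the cell DAG.
        self.trail_offset = nn.Parameter(torch.zeros(num_nodes, num_nodes))

    def forward(self, adj: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # adj: (num_nodes, num_nodes) adjacency of the cell DAG
        # x:   (num_nodes, in_dim)    one-hot operation features per node
        redirected = adj + torch.sigmoid(self.trail_offset)  # reweighted trails
        return torch.relu(redirected @ self.linear(x))


class AccuracyPredictor(nn.Module):
    """Two redirected-GCN layers followed by an MLP head that regresses a
    (normalized) accuracy score for one cell."""

    def __init__(self, num_nodes: int, num_ops: int, hidden: int = 64):
        super().__init__()
        self.gcn1 = RedirectedGCNLayer(num_nodes, num_ops, hidden)
        self.gcn2 = RedirectedGCNLayer(num_nodes, hidden, hidden)
        self.head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, adj: torch.Tensor, ops: torch.Tensor) -> torch.Tensor:
        h = self.gcn2(adj, self.gcn1(adj, ops))
        return self.head(h.mean(dim=0))  # pool over nodes, predict accuracy


def flops_interval_sampling(candidates, predictor, flops_of, top_k=10):
    """P3S-style step (illustrative): rank candidates with the predictor,
    then keep only architectures whose FLOPs fall inside the interval
    spanned by the current top-k, following the assumption stated in the
    abstract that architectures with similar FLOPs perform similarly."""
    with torch.no_grad():
        scored = sorted(
            candidates,
            key=lambda c: predictor(c["adj"], c["ops"]).item(),
            reverse=True,
        )
    top = scored[:top_k]
    lo = min(flops_of(c) for c in top)
    hi = max(flops_of(c) for c in top)
    return [c for c in candidates if lo <= flops_of(c) <= hi]
```

In this sketch, repeatedly calling flops_interval_sampling with a retrained predictor would progressively narrow the candidate pool to a tighter FLOPs interval, which is the behavior the abstract attributes to P3S.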
Related papers
- NASiam: Efficient Representation Learning using Neural Architecture Search for Siamese Networks [76.8112416450677]
Siamese networks are among the most popular methods for self-supervised visual representation learning (SSL).
NASiam is a novel approach that, for the first time, uses differentiable NAS to improve the multilayer perceptron projector and predictor (encoder/predictor pair).
NASiam reaches competitive performance in both small-scale (i.e., CIFAR-10/CIFAR-100) and large-scale (i.e., ImageNet) image classification datasets while costing only a few GPU hours.
arXiv Detail & Related papers (2023-01-31T19:48:37Z)
- PRE-NAS: Predictor-assisted Evolutionary Neural Architecture Search [34.06028035262884]
We propose a novel evolutionary-based NAS strategy, Predictor-assisted E-NAS (PRE-NAS).
PRE-NAS leverages new evolutionary search strategies and integrates high-fidelity weight inheritance over generations.
Experiments on NAS-Bench-201 and DARTS search spaces show that PRE-NAS can outperform state-of-the-art NAS methods.
arXiv Detail & Related papers (2022-04-27T06:40:39Z)
- When NAS Meets Trees: An Efficient Algorithm for Neural Architecture Search [117.89827740405694]
A key challenge in neural architecture search (NAS) is how to explore the huge search space wisely.
We propose a new NAS method called TNAS (NAS with trees), which improves search efficiency by exploring only a small number of architectures.
TNAS finds the globally optimal architecture on CIFAR-10 in NAS-Bench-201, reaching a test accuracy of 94.37% in four GPU hours.
arXiv Detail & Related papers (2022-04-11T07:34:21Z)
- HyperSegNAS: Bridging One-Shot Neural Architecture Search with 3D Medical Image Segmentation using HyperNet [51.60655410423093]
We introduce HyperSegNAS to enable one-shot Neural Architecture Search (NAS) for medical image segmentation.
We show that HyperSegNAS yields better-performing and more intuitive architectures than previous state-of-the-art (SOTA) segmentation networks.
Our method is evaluated on public datasets from the Medical Decathlon (MSD) challenge, and achieves SOTA performances.
arXiv Detail & Related papers (2021-12-20T16:21:09Z)
- NAS-TC: Neural Architecture Search on Temporal Convolutions for Complex Action Recognition [45.168746142597946]
We propose a new processing framework called Neural Architecture Search-Temporal Convolutional (NAS-TC).
In the first phase, the classical CNN network is used as the backbone network to complete the computationally intensive feature extraction task.
In the second stage, a simple stitching search over the cell is used to complete the relatively lightweight extraction of long-range temporal-dependent information.
arXiv Detail & Related papers (2021-03-17T02:02:11Z)
- OPANAS: One-Shot Path Aggregation Network Architecture Search for Object Detection [82.04372532783931]
Recently, neural architecture search (NAS) has been exploited to design feature pyramid networks (FPNs).
We propose a novel One-Shot Path Aggregation Network Architecture Search (OPANAS) algorithm, which significantly improves both searching efficiency and detection accuracy.
arXiv Detail & Related papers (2021-03-08T01:48:53Z)
- Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective [88.39981851247727]
We propose a novel framework called training-free neural architecture search (TE-NAS).
TE-NAS ranks architectures by analyzing the spectrum of the neural tangent kernel (NTK) and the number of linear regions in the input space.
We show that: (1) these two measurements imply the trainability and expressivity of a neural network; (2) they strongly correlate with the network's test accuracy.
arXiv Detail & Related papers (2021-02-23T07:50:44Z)
- Revisiting Neural Architecture Search [0.0]
We propose a novel approach that searches for the complete neural network without much human effort and is a step closer to AutoML nirvana.
Our method starts from a complete graph mapped to a neural network and searches for the connections and operations by balancing the exploration and exploitation of the search space.
arXiv Detail & Related papers (2020-10-12T13:57:30Z)
- S3NAS: Fast NPU-aware Neural Architecture Search Methodology [2.607400740040335]
We present a fast NPU-aware NAS methodology, called S3NAS, to find a CNN architecture with higher accuracy than the existing ones.
We are able to find a network in 3 hours using TPUv3, which shows 82.72% top-1 accuracy on ImageNet with 11.66 ms latency.
arXiv Detail & Related papers (2020-09-04T04:45:50Z)
- BATS: Binary ArchitecTure Search [56.87581500474093]
We show that directly applying Neural Architecture Search to the binary domain provides very poor results.
Specifically, we introduce and design a novel binary-oriented search space.
We also set a new state-of-the-art for binary neural networks on CIFAR10, CIFAR100 and ImageNet datasets.
arXiv Detail & Related papers (2020-03-03T18:57:02Z)
- Fast Neural Network Adaptation via Parameter Remapping and Architecture Search [35.61441231491448]
Deep neural networks achieve remarkable performance in many computer vision tasks.
Most state-of-the-art (SOTA) semantic segmentation and object detection approaches reuse neural network architectures designed for image classification as the backbone.
One major challenge though, is that ImageNet pre-training of the search space representation incurs huge computational cost.
In this paper, we propose a Fast Neural Network Adaptation (FNA) method, which can adapt both the architecture and parameters of a seed network.
arXiv Detail & Related papers (2020-01-08T13:45:15Z)