Neural Architecture Search as Program Transformation Exploration
- URL: http://arxiv.org/abs/2102.06599v1
- Date: Fri, 12 Feb 2021 16:11:05 GMT
- Title: Neural Architecture Search as Program Transformation Exploration
- Authors: Jack Turner, Elliot J. Crowley, Michael O'Boyle
- Abstract summary: Compilers apply program transformations in order to exploit hardware parallelism and memory hierarchy.
Neural architecture search (NAS) techniques mutate networks by operations such as the grouping or bottlenecking of convolutions.
In this work, we express such neural architecture operations as program transformations whose legality depends on a notion of representational capacity.
- Score: 7.090165638014331
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Improving the performance of deep neural networks (DNNs) is important to both
the compiler and neural architecture search (NAS) communities. Compilers apply
program transformations in order to exploit hardware parallelism and memory
hierarchy. However, legality concerns mean they fail to exploit the natural
robustness of neural networks. In contrast, NAS techniques mutate networks by
operations such as the grouping or bottlenecking of convolutions, exploiting
the resilience of DNNs. In this work, we express such neural architecture
operations as program transformations whose legality depends on a notion of
representational capacity. This allows them to be combined with existing
transformations into a unified optimization framework. This unification allows
us to express existing NAS operations as combinations of simpler
transformations. Crucially, it allows us to generate and explore new tensor
convolutions. We prototyped the combined framework in TVM and were able to find
optimizations across different DNNs that reduce inference time by over
3$\times$ in the majority of cases.
Furthermore, our scheme dramatically reduces NAS search time. Code is
available at https://github.com/jack-willturner/nas-as-program-transformation-exploration.
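For orientation, here is a minimal PyTorch sketch (not the paper's TVM prototype, which lives in the repository above) of what the two NAS operations named in the abstract, grouping and bottlenecking, do to a single convolution. The helper names and the `groups`/`ratio` values are illustrative assumptions.

```python
# Minimal PyTorch sketch (not the paper's TVM prototype) of the two NAS
# operations named above: grouping and bottlenecking a convolution. The
# `groups` and `ratio` values are illustrative choices.
import torch
import torch.nn as nn

def group_conv(conv, groups):
    """Replace a dense convolution with a grouped one (fewer weights and FLOPs)."""
    return nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                     stride=conv.stride, padding=conv.padding,
                     groups=groups, bias=conv.bias is not None)

def bottleneck_conv(conv, ratio=4):
    """Replace a KxK convolution with a 1x1 reduce -> KxK -> 1x1 expand stack."""
    mid = max(conv.out_channels // ratio, 1)
    return nn.Sequential(
        nn.Conv2d(conv.in_channels, mid, 1, bias=False),
        nn.Conv2d(mid, mid, conv.kernel_size, stride=conv.stride,
                  padding=conv.padding, bias=False),
        nn.Conv2d(mid, conv.out_channels, 1, bias=conv.bias is not None),
    )

x = torch.randn(1, 64, 56, 56)
base = nn.Conv2d(64, 64, 3, padding=1)
print(group_conv(base, groups=4)(x).shape)          # torch.Size([1, 64, 56, 56])
print(bottleneck_conv(base, ratio=4)(x).shape)      # torch.Size([1, 64, 56, 56])
```

In the paper these operations are expressed at the program-transformation level, with legality judged by representational capacity, so they can be composed with conventional compiler transformations; the layer-level view above only shows their effect on a single convolution.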
Related papers
- FlashRNN: Optimizing Traditional RNNs on Modern Hardware [6.749483762719583]
State-tracking capabilities are important for time-series tasks and logical reasoning.
Traditional RNNs like LSTMs and GRUs have these capabilities, but at the cost of strictly sequential processing.
We show how fast these networks can get with our hardware optimization FlashRNN, implemented in Triton with kernels optimized down to the register level.
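To make the "strictly sequential processing" point concrete, here is a plain PyTorch sketch of the recurrence that serializes RNN computation over time; it is an illustration only, not the FlashRNN Triton kernels.

```python
# Illustration only (not the FlashRNN Triton kernels): the hidden-state
# recurrence below is what makes LSTM/GRU-style RNNs strictly sequential
# in time, since step t needs the hidden state produced at step t-1.
import torch

def simple_rnn(x, W_x, W_h, b):
    """x: (T, batch, n_in); W_x: (n_hidden, n_in); W_h: (n_hidden, n_hidden)."""
    T, B, _ = x.shape
    h = torch.zeros(B, W_h.shape[0])
    outputs = []
    for t in range(T):                              # cannot be parallelized over t
        h = torch.tanh(x[t] @ W_x.T + h @ W_h.T + b)
        outputs.append(h)
    return torch.stack(outputs)                     # (T, batch, n_hidden)
```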
arXiv Detail & Related papers (2024-12-10T18:50:37Z)
- DNA Family: Boosting Weight-Sharing NAS with Block-Wise Supervisions [121.05720140641189]
We develop a family of models with distilling neural architecture (DNA) techniques.
Our proposed DNA models can rate all architecture candidates, as opposed to previous works that can only access a sub-search space using algorithms.
Our models achieve state-of-the-art top-1 accuracy of 78.9% and 83.6% on ImageNet for a mobile convolutional network and a small vision transformer, respectively.
arXiv Detail & Related papers (2024-03-02T22:16:47Z)
- Memory-Efficient Reversible Spiking Neural Networks [8.05761813203348]
Spiking neural networks (SNNs) are potential competitors to artificial neural networks (ANNs).
SNNs require much more memory than ANNs, which impedes the training of deeper SNN models.
We propose the reversible spiking neural network to reduce the memory cost of intermediate activations and membrane potentials during training.
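The memory saving rests on reversibility: if a block's inputs can be recomputed exactly from its outputs, intermediate activations need not be cached for backpropagation. Below is a hedged sketch of the generic additive reversible-block pattern (RevNet-style), not the paper's specific spiking formulation; `F` and `G` are arbitrary sub-modules chosen for illustration.

```python
# Generic additive reversible-block pattern (RevNet-style), not the paper's
# specific spiking formulation: inputs are recomputed from outputs, so the
# intermediate activations do not have to be stored for the backward pass.
import torch

def rev_forward(x1, x2, F, G):
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def rev_inverse(y1, y2, F, G):
    x2 = y2 - G(y1)                                 # recompute instead of caching
    x1 = y1 - F(x2)
    return x1, x2

F, G = torch.nn.Linear(8, 8), torch.nn.Linear(8, 8)
x1, x2 = torch.randn(4, 8), torch.randn(4, 8)
y1, y2 = rev_forward(x1, x2, F, G)
r1, r2 = rev_inverse(y1, y2, F, G)
assert torch.allclose(r1, x1, atol=1e-5) and torch.allclose(r2, x2, atol=1e-5)
```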
arXiv Detail & Related papers (2023-12-13T06:39:49Z)
- ShiftNAS: Towards Automatic Generation of Advanced Multiplication-Less Neural Networks [30.14665696695582]
ShiftNAS is the first framework tailoring Neural Architecture Search (NAS) to substantially reduce the accuracy gap between bit-shift neural networks and their real-valued counterparts.
We show that ShiftNAS sets a new state of the art for bit-shift neural networks, where accuracy increases by (1.69-8.07)% on CIFAR10, (5.71-18.09)% on CIFAR100, and (4.36-67.07)% on ImageNet.
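A bit-shift neural network constrains weights to signed powers of two, so that multiplying an integer activation by a weight reduces to a shift plus a sign flip. The sketch below illustrates that idea only; it is not the ShiftNAS search procedure, and the helper names are hypothetical.

```python
# Hedged illustration of the bit-shift idea (not the ShiftNAS search itself):
# weights are constrained to signed powers of two, so multiplying an integer
# activation by a weight becomes a shift plus a sign flip.
import math

def quantize_to_power_of_two(w):
    """Return (sign, exponent) such that w is approximately sign * 2**exponent."""
    if w == 0:
        return 1, None
    sign = 1 if w > 0 else -1
    return sign, round(math.log2(abs(w)))

def shift_multiply(x, sign, exponent):
    """Multiply integer activation x by a power-of-two weight using shifts."""
    if exponent is None:                            # zero weight
        return 0
    shifted = x << exponent if exponent >= 0 else x >> -exponent
    return sign * shifted

sign, exp = quantize_to_power_of_two(0.26)          # roughly +2**-2 = 0.25
print(shift_multiply(16, sign, exp))                # 16 * 0.25 -> 4
```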
arXiv Detail & Related papers (2022-04-07T12:15:03Z)
- Sub-bit Neural Networks: Learning to Compress and Accelerate Binary Neural Networks [72.81092567651395]
Sub-bit Neural Networks (SNNs) are a new type of binary quantization design tailored to compress and accelerate BNNs.
SNNs are trained with a kernel-aware optimization framework, which exploits binary quantization in the fine-grained convolutional kernel space.
Experiments on visual recognition benchmarks and hardware deployment on an FPGA validate the great potential of SNNs.
arXiv Detail & Related papers (2021-10-18T11:30:29Z)
- A Spiking Neural Network for Image Segmentation [3.4998703934432682]
We convert the deep Artificial Neural Network (ANN) architecture U-Net to a Spiking Neural Network (SNN) architecture using the Nengo framework.
Both rate-based and spike-based models are trained and optimized for benchmarking performance and power.
The implementation on the Intel Loihi neuromorphic chip is over 2x more energy-efficient than conventional hardware.
arXiv Detail & Related papers (2021-06-16T16:23:18Z)
- Container: Context Aggregation Network [83.12004501984043]
Recent findings show that a simple solution without any traditional convolutional or Transformer components can produce effective visual representations.
We present Container (Context Aggregation Network), a general-purpose building block for multi-head context aggregation.
In contrast to Transformer-based methods that do not scale well to downstream tasks relying on larger input image resolutions, our efficient variant, Container-Light, can be employed in object detection and instance segmentation networks.
arXiv Detail & Related papers (2021-06-02T18:09:11Z)
- Optimal Conversion of Conventional Artificial Neural Networks to Spiking Neural Networks [0.0]
Spiking neural networks (SNNs) are biology-inspired artificial neural networks (ANNs).
We propose a novel strategic pipeline that transfers the weights to the target SNN by combining threshold balance and soft-reset mechanisms.
Our method is promising for deployment on embedded platforms with limited energy and memory that offer better support for SNNs.
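For reference, here is a minimal sketch of the soft-reset mechanism named in the summary: when a neuron fires, the threshold is subtracted from its membrane potential instead of the potential being zeroed, which preserves residual charge across steps. The threshold value is assumed to come from a separate threshold-balancing step; this is an illustration, not the paper's pipeline.

```python
# Minimal soft-reset integrate-and-fire neuron: on firing, subtract the
# threshold from the membrane potential instead of zeroing it. The threshold
# is assumed to have been set by a separate threshold-balancing step.
import torch

def soft_reset_if(currents, threshold=1.0):
    """currents: (T, batch, features) input currents; returns binary spike trains."""
    v = torch.zeros_like(currents[0])               # membrane potential
    spikes = []
    for x_t in currents:
        v = v + x_t                                 # integrate
        s = (v >= threshold).float()                # fire where threshold is crossed
        v = v - s * threshold                       # soft reset keeps residual charge
        spikes.append(s)
    return torch.stack(spikes)
```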
arXiv Detail & Related papers (2021-02-28T12:04:22Z)
- Towards Accurate and Compact Architectures via Neural Architecture Transformer [95.4514639013144]
It is necessary to optimize the operations inside an architecture to improve the performance without introducing extra computational cost.
We have proposed a Neural Architecture Transformer (NAT) method which casts the optimization problem into a Markov Decision Process (MDP).
We propose a Neural Architecture Transformer++ (NAT++) method which further enlarges the set of candidate transitions to improve the performance of architecture optimization.
arXiv Detail & Related papers (2021-02-20T09:38:10Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference to ANN-Level Accuracy [51.861168222799186]
Spiking Neural Networks (SNNs) are a type of neuromorphic, or brain-inspired, network.
SNNs are sparse, accessing very few weights, and typically only use addition operations instead of the more power-intensive multiply-and-accumulate operations.
In this work, we aim to overcome the limitations of time-to-first-spike (TTFS) encoded neuromorphic systems.
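The "addition only" claim can be seen in a toy event-driven update: with binary spikes, a layer accumulates the weight columns of the neurons that fired rather than performing a dense multiply-and-accumulate. This sketch is generic SNN bookkeeping, not the paper's TTFS scheme, and the helper name is hypothetical.

```python
# Toy event-driven layer update: with binary spikes, the membrane update is a
# sum of the weight columns of neurons that fired (additions only, no dense
# multiply-and-accumulate). Generic SNN bookkeeping, not the paper's TTFS scheme.
import torch

def event_driven_update(v, W, spikes):
    """v: (n_out,) membrane potentials; W: (n_out, n_in); spikes: (n_in,) in {0, 1}."""
    for j in spikes.nonzero(as_tuple=True)[0].tolist():
        v = v + W[:, j]                             # accumulate one column per spike
    return v
```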
arXiv Detail & Related papers (2020-06-03T15:55:53Z)