ShiftNAS: Towards Automatic Generation of Advanced Multiplication-Less
Neural Networks
- URL: http://arxiv.org/abs/2204.05113v1
- Date: Thu, 7 Apr 2022 12:15:03 GMT
- Title: ShiftNAS: Towards Automatic Generation of Advanced Multiplication-Less
Neural Networks
- Authors: Xiaoxuan Lou, Guowen Xu, Kangjie Chen, Guanlin Li, Jiwei Li, Tianwei
Zhang
- Abstract summary: ShiftNAS is the first framework tailoring Neural Architecture Search (NAS) to substantially reduce the accuracy gap between bit-shift neural networks and their real-valued counterparts.
We show that ShiftNAS sets a new state-of-the-art for bit-shift neural networks, improving accuracy by (1.69-8.07)% on CIFAR10, (5.71-18.09)% on CIFAR100, and (4.36-67.07)% on ImageNet.
- Score: 30.14665696695582
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multiplication-less neural networks significantly reduce the time and energy
cost on hardware platforms, as the compute-intensive multiplications are
replaced with lightweight bit-shift operations. However, existing bit-shift
networks are all directly transferred from state-of-the-art convolutional
neural networks (CNNs), which leads to a non-negligible accuracy drop or even
failure of model convergence. To combat this, we propose ShiftNAS, the first
framework tailoring Neural Architecture Search (NAS) to substantially reduce
the accuracy gap between bit-shift neural networks and their real-valued
counterparts. Specifically, we pioneer the adaptation of NAS to a shift-oriented
search space and endow it with a robust topology-related search strategy and
custom regularization and stabilization techniques. As a result, ShiftNAS
overcomes the incompatibility of traditional NAS methods with bit-shift neural
networks and achieves more desirable performance in terms of accuracy and
convergence. Extensive experiments demonstrate that ShiftNAS sets a new
state-of-the-art for bit-shift neural networks, improving accuracy by
(1.69-8.07)% on CIFAR10, (5.71-18.09)% on CIFAR100, and (4.36-67.07)% on
ImageNet, notably in a setting where many conventional CNNs fail to converge on
ImageNet with bit-shift weights.
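To make the core mechanism concrete, below is a minimal, hedged sketch of how a multiplication-less (bit-shift) layer can be emulated: each weight is rounded to a signed power of two, so that multiplying an activation by that weight reduces to a bit-shift on integer hardware. The function names and the NumPy emulation are illustrative assumptions, not the ShiftNAS implementation.

```python
import numpy as np

def quantize_to_power_of_two(w, max_shift=7):
    """Round each weight to a signed power of two: w ~ sign * 2**(-shift).

    A shift-based accelerator only needs the (sign, shift) pair, because
    multiplying an activation by 2**(-shift) is a right bit-shift.
    """
    sign = np.sign(w)
    sign[sign == 0] = 1.0
    # Choose the exponent whose power of two is closest to |w| in log space,
    # restricted to the representable shift range [0, max_shift].
    shift = np.clip(np.round(-np.log2(np.abs(w) + 1e-12)), 0, max_shift).astype(int)
    return sign, shift

def shift_linear(x, sign, shift):
    """Emulated multiplication-less linear layer: y = x @ W, W = sign * 2**(-shift).

    np.ldexp(1.0, -shift) builds 2**(-shift) by exponent manipulation; real
    bit-shift hardware would apply the shift to integer activations directly.
    """
    w = sign * np.ldexp(1.0, -shift)
    return x @ w

# Toy comparison between a full-precision layer and its bit-shift version.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.2, size=(8, 4))
x = rng.normal(size=(2, 8))
sign, shift = quantize_to_power_of_two(W)
print("full precision:", (x @ W)[0])
print("bit-shift     :", shift_linear(x, sign, shift)[0])
```

The gap between the two printed rows illustrates the accuracy loss that direct power-of-two quantization introduces, which is exactly what ShiftNAS targets by searching architectures natively in a shift-oriented space.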
Related papers
- Twin Network Augmentation: A Novel Training Strategy for Improved Spiking Neural Networks and Efficient Weight Quantization [1.2513527311793347]
Spiking Neural Networks (SNNs) operate using sparse, event-driven spikes to communicate information between neurons.
An alternative technique for reducing a neural network's footprint is quantization.
We present Twin Network Augmentation (TNA), a novel training framework aimed at improving the performance of SNNs.
arXiv Detail & Related papers (2024-09-24T08:20:56Z) - LC-TTFS: Towards Lossless Network Conversion for Spiking Neural Networks
with TTFS Coding [55.64533786293656]
We show that our algorithm can achieve a near-perfect mapping between the activation values of an ANN and the spike times of an SNN on a number of challenging AI tasks.
The study paves the way for deploying ultra-low-power TTFS-based SNNs on power-constrained edge computing platforms.
arXiv Detail & Related papers (2023-10-23T14:26:16Z) - A Faster Approach to Spiking Deep Convolutional Neural Networks [0.0]
Spiking neural networks (SNNs) have dynamics closer to those of the brain than current deep neural networks do.
We propose a network structure based on previous work to improve network runtime and accuracy.
arXiv Detail & Related papers (2022-10-31T16:13:15Z) - NAS-PRNet: Neural Architecture Search generated Phase Retrieval Net for
Off-axis Quantitative Phase Imaging [5.943105097884823]
We propose Neural Architecture Search (NAS) generated Phase Retrieval Net (NAS-PRNet)
NAS-PRNet is an encoder-decoder style neural network, automatically found from a large neural network architecture search space.
NAS-PRNet has achieved a Peak Signal-to-Noise Ratio (PSNR) of 36.1 dB, outperforming the widely used U-Net and original SparseMask-generated neural network.
arXiv Detail & Related papers (2022-10-25T16:16:41Z) - Evolutionary Neural Cascade Search across Supernetworks [68.8204255655161]
We introduce ENCAS - Evolutionary Neural Cascade Search.
ENCAS can be used to search over multiple pretrained supernetworks.
We test ENCAS on common computer vision benchmarks.
arXiv Detail & Related papers (2022-03-08T11:06:01Z) - Hybrid SNN-ANN: Energy-Efficient Classification and Object Detection for
Event-Based Vision [64.71260357476602]
Event-based vision sensors encode local pixel-wise brightness changes in streams of events rather than image frames.
Recent progress in object recognition from event-based sensors has come from conversions of deep neural networks.
We propose a hybrid architecture for end-to-end training of deep neural networks for event-based pattern recognition and object detection.
arXiv Detail & Related papers (2021-12-06T23:45:58Z) - A Spiking Neural Network for Image Segmentation [3.4998703934432682]
We convert the deep Artificial Neural Network (ANN) architecture U-Net to a Spiking Neural Network (SNN) architecture using the Nengo framework.
Both rate-based and spike-based models are trained and optimized for benchmarking performance and power.
The neuromorphic implementation on the Intel Loihi neuromorphic chip is over 2x more energy-efficient than conventional hardware.
arXiv Detail & Related papers (2021-06-16T16:23:18Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) in low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z) - You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference
to ANN-Level Accuracy [51.861168222799186]
Spiking Neural Networks (SNNs) are a type of neuromorphic, or brain-inspired network.
SNNs are sparse, accessing very few weights, and typically only use addition operations instead of the more power-intensive multiply-and-accumulate operations.
In this work, we aim to overcome the limitations of TTFS-encoded neuromorphic systems.
arXiv Detail & Related papers (2020-06-03T15:55:53Z) - A Light-Weighted Convolutional Neural Network for Bitemporal SAR Image
Change Detection [40.58864817923371]
We propose a lightweight neural network to reduce the computational and spatial complexity.
In the proposed network, we replace normal convolutional layers with bottleneck layers that keep the same number of channels between input and output (a generic bottleneck block is sketched after this list).
We verify our lightweight neural network on four sets of bitemporal SAR images.
arXiv Detail & Related papers (2020-05-29T04:01:32Z) - Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantization neural networks (QNNs) are very attractive to industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features in original full-precision networks to high-dimensional quantization features.
arXiv Detail & Related papers (2020-02-03T04:11:13Z)
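As a side note on the bottleneck idea referenced in the SAR change-detection entry above, the sketch below shows a generic bottleneck block in PyTorch: the expensive 3x3 convolution runs on a reduced channel count while the block's input and output keep the same number of channels. The block structure and the reduction factor are assumptions for illustration, not the cited paper's exact architecture.

```python
import torch
import torch.nn as nn

class BottleneckBlock(nn.Module):
    """Generic bottleneck block (illustrative): same channel count in and out,
    but the 3x3 convolution operates on channels // reduction channels, which
    cuts parameters and multiply-accumulates roughly by the reduction factor."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        mid = max(channels // reduction, 1)
        self.block = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1, bias=False),        # squeeze
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, kernel_size=3, padding=1, bias=False),  # cheap 3x3
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1, bias=False),        # expand back
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection is valid because input and output shapes match.
        return torch.relu(self.block(x) + x)

# Toy usage: a 32-channel feature map keeps its shape through the block.
x = torch.randn(1, 32, 64, 64)
print(BottleneckBlock(32)(x).shape)  # torch.Size([1, 32, 64, 64])
```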