AdderNet and its Minimalist Hardware Design for Energy-Efficient
Artificial Intelligence
- URL: http://arxiv.org/abs/2101.10015v2
- Date: Wed, 3 Feb 2021 06:48:54 GMT
- Title: AdderNet and its Minimalist Hardware Design for Energy-Efficient
Artificial Intelligence
- Authors: Yunhe Wang, Mingqiang Huang, Kai Han, Hanting Chen, Wei Zhang,
Chunjing Xu, Dacheng Tao
- Abstract summary: We present a novel minimalist hardware architecture using the adder convolutional neural network (AdderNet).
The whole AdderNet can practically achieve a 16% enhancement in speed.
We conclude that AdderNet surpasses all the other competitors.
- Score: 111.09105910265154
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional neural networks (CNN) have been widely used for boosting the
performance of many machine intelligence tasks. However, CNN models are
usually computationally intensive and energy-consuming, since they are often
designed with numerous multiply operations and a large number of parameters
for the sake of accuracy. It is therefore difficult to deploy them directly in
resource-constrained environments such as Internet of Things (IoT) devices
and smartphones. To reduce the computational complexity and energy burden,
here we present a novel minimalist hardware architecture using the adder
convolutional neural network (AdderNet), in which the original convolution is
replaced by an adder kernel that uses only additions. To maximize the
potential energy savings, we explore a low-bit quantization algorithm for
AdderNet with a shared-scaling-factor method, and we design both specific and
general-purpose hardware accelerators for AdderNet. Experimental results show
that the adder kernel with int8/int16 quantization still exhibits high
performance while consuming far fewer resources (a theoretical reduction of
about 81%). In addition, we deploy the quantized AdderNet on an FPGA (Field
Programmable Gate Array) platform. In practice, the whole AdderNet achieves a
16% enhancement in speed, a 67.6%-71.4% decrease in logic resource
utilization, and a 47.85%-77.9% decrease in power consumption compared with a
CNN under the same circuit architecture. Based on a comprehensive comparison
of performance, power consumption, hardware resource consumption, and network
generalization capability, we conclude that AdderNet surpasses all the other
competitors, including the classical CNN, the novel memristor network,
XNOR-Net, and the shift-kernel-based network, indicating its great potential
for future high-performance and energy-efficient artificial intelligence
applications.
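Since the abstract describes the adder kernel and the shared-scaling-factor quantization only at a high level, a minimal NumPy sketch may help make them concrete. Everything below is an illustrative assumption, not the authors' implementation: the function names (`adder_conv2d`, `quantize_shared_scale`) are ours, the kernel follows the L1-distance formulation Y = -Σ|X - F| from the original AdderNet paper, and the quantizer simply shares one scale between inputs and weights so that their integer difference remains meaningful.

```python
# Minimal sketch (not the authors' code) of an adder convolution and a
# shared-scaling-factor int8 quantizer, assuming the L1-distance kernel
# Y = -sum |X - F| from the original AdderNet paper.
import numpy as np

def adder_conv2d(x, w, stride=1):
    """Adder 'convolution': x is (H, W, C_in), w is (K, K, C_in, C_out).

    Each output response is the negative sum of absolute differences
    between an input patch and a filter, so the forward pass needs only
    additions/subtractions -- no multiplications.
    """
    kh, kw, _, cout = w.shape
    h_out = (x.shape[0] - kh) // stride + 1
    w_out = (x.shape[1] - kw) // stride + 1
    y = np.zeros((h_out, w_out, cout), dtype=np.float64)
    for i in range(h_out):
        for j in range(w_out):
            patch = x[i * stride:i * stride + kh,
                      j * stride:j * stride + kw, :]
            # Broadcast the patch against all filters at once.
            y[i, j, :] = -np.abs(patch[..., None] - w).sum(axis=(0, 1, 2))
    return y

def quantize_shared_scale(x, w, bits=8):
    """Quantize input and weights with ONE shared scaling factor.

    Because the adder kernel computes |x - w|, x and w must live on the
    same integer grid for the difference to stay meaningful; this is our
    reading of the 'shared-scaling-factor method' and details may differ.
    """
    qmax = 2 ** (bits - 1) - 1  # 127 for int8, 32767 for int16
    scale = max(np.abs(x).max(), np.abs(w).max()) / qmax
    xq = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int32)
    wq = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int32)
    return xq, wq, scale

# Toy check: quantized responses track the full-precision ones up to the
# shared scale (y_float ~= scale * y_int).
x = np.random.randn(8, 8, 3)
w = np.random.randn(3, 3, 3, 4)
xq, wq, scale = quantize_shared_scale(x, w, bits=8)
y_int = adder_conv2d(xq.astype(np.float64), wq.astype(np.float64))
print(np.max(np.abs(adder_conv2d(x, w) - scale * y_int)))  # small error
```

Note the design point this makes explicit: with a single shared scale, scale * |xq - wq| = |scale*xq - scale*wq| ≈ |x - w|, so the integer adder kernel approximates the full-precision one up to rounding error, which is consistent with the abstract's claim that int8/int16 quantization preserves performance.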
Related papers
- Energy Efficient Hardware Acceleration of Neural Networks with
Power-of-Two Quantisation [0.0]
We show that a hardware neural network accelerator with PoT (power-of-two) weights implemented on the Zynq UltraScale+ MPSoC ZCU104 FPGA can be at least $1.4\times$ more energy efficient than the uniformly quantised version.
arXiv Detail & Related papers (2022-09-30T06:33:40Z)
- ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient
Neural Networks [42.28659737268829]
ShiftAddNAS can automatically search for more accurate and more efficient NNs.
ShiftAddNAS integrates the first hybrid search space that incorporates both multiplication-based and multiplication-free operators.
Experiments and ablation studies consistently validate the efficacy of ShiftAddNAS.
arXiv Detail & Related papers (2022-05-17T06:40:13Z)
- Weightless Neural Networks for Efficient Edge Inference [1.7882696915798877]
Weightless Neural Networks (WNNs) are a class of machine learning models that use table lookups to perform inference.
We propose a novel WNN architecture, BTHOWeN, with key algorithmic and architectural improvements over prior work.
BTHOWeN targets the large and growing edge computing sector by providing superior latency and energy efficiency.
arXiv Detail & Related papers (2022-03-03T01:46:05Z)
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using around 40% of the available hardware resources in total.
It reduces the classification time by three orders of magnitude, with a small 4.5% impact on accuracy, compared to its full-precision software counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- An Empirical Study of Adder Neural Networks for Object Detection [67.64041181937624]
Adder neural networks (AdderNets) have shown impressive performance on image classification with only addition operations.
We present an empirical study of AdderNets for object detection.
arXiv Detail & Related papers (2021-12-27T11:03:13Z)
- Adder Neural Networks [75.54239599016535]
We present adder networks (AdderNets) to trade massive multiplications in deep neural networks for much cheaper additions.
In AdderNets, we take the $\ell_p$-norm distance between the filters and the input feature as the output response.
We show that the proposed AdderNets can achieve 75.7% Top-1 accuracy and 92.3% Top-5 accuracy using ResNet-50 on the ImageNet dataset.
arXiv Detail & Related papers (2021-05-29T04:02:51Z)
- ShiftAddNet: A Hardware-Inspired Deep Network [87.18216601210763]
ShiftAddNet is an energy-efficient multiplication-less deep neural network.
It leads to both energy-efficient inference and training, without compromising expressive capacity.
ShiftAddNet aggressively reduces the hardware-quantified energy cost of DNN training and inference by over 80%, while offering comparable or better accuracies.
arXiv Detail & Related papers (2020-10-24T05:09:14Z)
- AdderNet: Do We Really Need Multiplications in Deep Learning? [159.174891462064]
We present adder networks (AdderNets) to trade massive multiplications in deep neural networks for much cheaper additions to reduce computation costs.
We develop a special back-propagation approach for AdderNets by investigating the full-precision gradient.
As a result, the proposed AdderNets can achieve 74.9% Top-1 accuracy and 91.7% Top-5 accuracy using ResNet-50 on the ImageNet dataset.
arXiv Detail & Related papers (2019-12-31T06:56:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.