Towards Efficient Full 8-bit Integer DNN Online Training on
Resource-limited Devices without Batch Normalization
- URL: http://arxiv.org/abs/2105.13890v1
- Date: Thu, 27 May 2021 14:58:04 GMT
- Title: Towards Efficient Full 8-bit Integer DNN Online Training on
Resource-limited Devices without Batch Normalization
- Authors: Yukuan Yang, Xiaowei Chi, Lei Deng, Tianyi Yan, Feng Gao, Guoqi Li
- Score: 13.340254606150232
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The huge computational costs of convolution and batch normalization (BN)
pose great challenges for the online training and corresponding applications of
deep neural networks (DNNs), especially on resource-limited devices. Existing
works accelerate only convolution or only BN, and no single solution alleviates
both problems with satisfactory performance. Online training is gradually
becoming a trend on resource-limited devices such as mobile phones, yet there
is still no complete technical scheme with acceptable model performance,
processing speed, and computational cost. In this research, an efficient
online-training quantization framework termed EOQ is proposed by combining
Fixup initialization with a novel quantization scheme for DNN model compression
and acceleration. Based on the proposed framework, we realize full 8-bit
integer network training and remove BN from large-scale DNNs. In particular,
weight updates are quantized to 8-bit integers for the first time. Theoretical
analyses of EOQ's use of Fixup initialization to remove BN are further given
using a novel Block Dynamical Isometry theory with weaker assumptions.
Benefiting from rational quantization strategies and the absence of BN, full
8-bit networks based on EOQ achieve state-of-the-art accuracy together with
large advantages in computational cost and processing speed. Moreover, the
design of deep learning chips can be greatly simplified because BN's
hardware-unfriendly square root operations are eliminated. Beyond this, EOQ
proves even more advantageous in small-batch online training with fewer batch
samples. In summary, the EOQ framework is specially designed to reduce the high
cost of convolution and BN in network training, demonstrating broad application
prospects for online training on resource-limited devices.
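To make the quantization side of the abstract concrete, the following is a minimal, illustrative sketch of symmetric per-tensor 8-bit integer quantization, the basic mapping underlying full-integer training schemes of this kind. It is not EOQ's actual algorithm: EOQ applies its own scaling rules to weights, activations, gradients, and weight updates, and the function names here are hypothetical.

```python
import numpy as np

def quantize_int8(x, num_bits=8):
    """Symmetric per-tensor quantization of a float tensor to signed integers.

    Illustrative only: real integer-training frameworks (EOQ included)
    use more elaborate, tensor-specific scaling strategies.
    """
    qmax = 2 ** (num_bits - 1) - 1          # 127 for 8 bits
    scale = float(np.max(np.abs(x))) / qmax  # map the largest magnitude to qmax
    if scale == 0.0:
        return np.zeros_like(x, dtype=np.int8), 1.0
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from integers and their scale."""
    return q.astype(np.float32) * scale

# Example: quantize a weight tensor and bound the round-trip error,
# which cannot exceed half a quantization step.
w = np.random.randn(64, 64).astype(np.float32)
q, s = quantize_int8(w)
err = np.max(np.abs(dequantize(q, s) - w))
assert err <= s / 2 + 1e-6
```

In a full-integer training loop, the same mapping would be applied not just to weights but also to activations, gradients, and (as EOQ does for the first time) the weight updates themselves, so that all arithmetic stays in 8-bit integers.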
Related papers
- Batch Normalization-Free Fully Integer Quantized Neural Networks via Progressive Tandem Learning [16.532309126474843]
Quantised neural networks (QNNs) shrink models and reduce inference energy through low-bit arithmetic. We present a BN-free, fully integer QNN trained via a progressive, layer-wise distillation scheme. On ImageNet with AlexNet, the BN-free model attains competitive Top-1 accuracy under aggressive quantisation.
arXiv Detail & Related papers (2025-12-18T12:47:18Z) - Optimization Proxies using Limited Labeled Data and Training Time -- A Semi-Supervised Bayesian Neural Network Approach [3.26805553822503]
Constrained optimization problems arise in various engineering systems such as inventory management and power grids. Standard deep neural network (DNN) based machine learning proxies are ineffective in practical settings where labeled data is scarce and training times are limited.
arXiv Detail & Related papers (2024-10-04T02:10:20Z) - Constraint Guided Model Quantization of Neural Networks [0.0]
Constraint Guided Model Quantization (CGMQ) is a quantization aware training algorithm that uses an upper bound on the computational resources and reduces the bit-widths of the parameters of the neural network.
It is shown on MNIST that the performance of CGMQ is competitive with state-of-the-art quantization aware training algorithms.
arXiv Detail & Related papers (2024-09-30T09:41:16Z) - AdaQAT: Adaptive Bit-Width Quantization-Aware Training [0.873811641236639]
Large-scale deep neural networks (DNNs) have achieved remarkable success in many application scenarios.
Model quantization is a common approach to deal with deployment constraints, but searching for optimized bit-widths can be challenging.
We present Adaptive Bit-Width Quantization Aware Training (AdaQAT), a learning-based method that automatically optimizes bit-widths during training for more efficient inference.
arXiv Detail & Related papers (2024-04-22T09:23:56Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - Recurrent Bilinear Optimization for Binary Neural Networks [58.972212365275595]
BNNs neglect the intrinsic bilinear relationship of real-valued weights and scale factors.
Our work is the first attempt to optimize BNNs from the bilinear perspective.
We obtain robust RBONNs, which show impressive performance over state-of-the-art BNNs on various models and datasets.
arXiv Detail & Related papers (2022-09-04T06:45:33Z) - Neural Network Quantization with AI Model Efficiency Toolkit (AIMET) [15.439669159557253]
We present an overview of neural network quantization using the AI Model Efficiency Toolkit (AIMET).
AIMET is a library of state-of-the-art quantization and compression algorithms designed to ease the effort required for model optimization.
We provide a practical guide to quantization via AIMET by covering PTQ and QAT, code examples and practical tips.
arXiv Detail & Related papers (2022-01-20T20:35:37Z) - Low-Precision Training in Logarithmic Number System using Multiplicative
Weight Update [49.948082497688404]
Training large-scale deep neural networks (DNNs) currently requires a significant amount of energy, leading to serious environmental impacts.
One promising approach to reduce the energy costs is representing DNNs with low-precision numbers.
We jointly design a low-precision training framework involving a logarithmic number system (LNS) and a multiplicative weight update training method, termed LNS-Madam.
arXiv Detail & Related papers (2021-06-26T00:32:17Z) - A White Paper on Neural Network Quantization [20.542729144379223]
We introduce state-of-the-art algorithms for mitigating the impact of quantization noise on network performance. We consider two main classes of algorithms: Post-Training Quantization (PTQ) and Quantization-Aware Training (QAT).
arXiv Detail & Related papers (2021-06-15T17:12:42Z) - "BNN - BN = ?": Training Binary Neural Networks without Batch
Normalization [92.23297927690149]
Batch normalization (BN) is a key facilitator and is considered essential for state-of-the-art binary neural networks (BNNs). We extend their framework to training BNNs and demonstrate, for the first time, that BN can be completely removed from both BNN training and inference.
arXiv Detail & Related papers (2021-04-16T16:46:57Z) - FracTrain: Fractionally Squeezing Bit Savings Both Temporally and
Spatially for Efficient DNN Training [81.85361544720885]
We propose FracTrain, which integrates progressive fractional quantization to gradually increase the precision of activations, weights, and gradients. FracTrain reduces the computational cost and hardware-quantified energy/latency of DNN training while achieving comparable or better accuracy (-0.12% to +1.87%).
arXiv Detail & Related papers (2020-12-24T05:24:10Z) - Efficient Computation Reduction in Bayesian Neural Networks Through
Feature Decomposition and Memorization [10.182119276564643]
In this paper, an efficient BNN inference flow is proposed to reduce the computation cost.
About half of the computations could be eliminated compared to the traditional approach.
We implement our approach in Verilog and synthesise it with 45 nm FreePDK technology.
arXiv Detail & Related papers (2020-05-08T05:03:04Z) - Robust Pruning at Initialization [61.30574156442608]
There is a growing need for smaller, energy-efficient neural networks that make machine learning applications usable on devices with limited computational resources. For deep NNs, existing pruning-at-initialization procedures remain unsatisfactory: the resulting pruned networks can be difficult to train and, for instance, nothing prevents one layer from being fully pruned.
arXiv Detail & Related papers (2020-02-19T17:09:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.