Towards Cheaper Inference in Deep Networks with Lower Bit-Width
Accumulators
- URL: http://arxiv.org/abs/2401.14110v1
- Date: Thu, 25 Jan 2024 11:46:01 GMT
- Title: Towards Cheaper Inference in Deep Networks with Lower Bit-Width
Accumulators
- Authors: Yaniv Blumenfeld, Itay Hubara, Daniel Soudry
- Abstract summary: Current hardware still relies on high-accuracy core operations.
This is because, so far, the usage of low-precision accumulators led to a significant degradation in performance.
We present a simple method to train and fine-tune high-end DNNs, to allow, for the first time, utilization of cheaper, $12$-bit accumulators.
- Score: 25.100092698906437
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The majority of the research on the quantization of Deep Neural Networks
(DNNs) is focused on reducing the precision of tensors visible to high-level
frameworks (e.g., weights, activations, and gradients). However, current
hardware still relies on high-accuracy core operations. Most significant is the
operation of accumulating products. This high-precision accumulation operation
is gradually becoming the main computational bottleneck. This is because, so
far, the usage of low-precision accumulators led to a significant degradation
in performance. In this work, we present a simple method to train and fine-tune
high-end DNNs, to allow, for the first time, utilization of cheaper, $12$-bit
accumulators, with no significant degradation in accuracy. Lastly, we show that
as we decrease the accumulation precision further, using fine-grained gradient
approximations can improve the DNN accuracy.
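To make the role of the accumulator concrete, the sketch below simulates an integer matrix multiply whose running sums are confined to a narrow accumulator by saturating after every addition. The function names and the saturation policy are illustrative assumptions, not the paper's training method; real hardware may wrap around instead of saturating.

```python
import numpy as np

def saturate(x, acc_bits):
    """Clamp to the signed range of a narrow accumulator (wrap-around is the
    other common hardware behaviour; saturation is simpler to illustrate)."""
    lo, hi = -(2 ** (acc_bits - 1)), 2 ** (acc_bits - 1) - 1
    return np.clip(x, lo, hi)

def low_precision_matmul(a_q, b_q, acc_bits=12):
    """Integer matmul whose running sums never leave an `acc_bits`-wide accumulator."""
    m, k = a_q.shape
    _, n = b_q.shape
    out = np.zeros((m, n), dtype=np.int64)
    for t in range(k):                       # one multiply-accumulate step per iteration
        out = saturate(out + np.outer(a_q[:, t], b_q[t, :]), acc_bits)
    return out

# toy usage: 4-bit operands feeding a 12-bit accumulator
rng = np.random.default_rng(0)
a = rng.integers(-8, 8, size=(4, 256))
b = rng.integers(-8, 8, size=(256, 4))
print(low_precision_matmul(a, b, acc_bits=12))
print(a @ b)                                 # reference result with wide accumulation
```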
Related papers
- Using Half-Precision for GNN Training [1.7117325236320966]
We introduce HalfGNN, a half-precision-based GNN system for deep learning.
New vector operations improve data load and reduction performance, and discretized SpMM overcomes the value overflow.
HalfGNN achieves an average 2.30X training-time speedup over float-based DGL for GAT, GCN, and GIN, with similar accuracy and 2.67X memory savings.
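The overflow problem mentioned above can be seen with a plain half-precision reduction; the snippet below is only a minimal illustration, not HalfGNN's discretized SpMM.

```python
import numpy as np

# The true sum (1e8) is far above the fp16 maximum (~65504), so a half-precision
# reduction saturates to inf, while a wider accumulator returns the right value.
x = np.full(100_000, 1000.0, dtype=np.float16)
print(x.sum(dtype=np.float16))   # inf
print(x.sum(dtype=np.float32))   # 1e+08
```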
arXiv Detail & Related papers (2024-11-02T02:14:02Z)
- Gradient-based Automatic Mixed Precision Quantization for Neural Networks On-Chip [0.9187138676564589]
We present High Granularity Quantization (HGQ), an innovative quantization-aware training method.
HGQ fine-tunes the per-weight and per-activation precision by making them optimizable through gradient descent.
This approach enables ultra-low-latency, low-power neural networks on hardware that supports arithmetic at heterogeneous precisions.
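As a rough sketch of making precision trainable, the toy module below treats the bit-width as a real-valued parameter and passes gradients through the rounding with a straight-through estimator; the class and its parameterization are assumptions, not HGQ's actual per-weight formulation.

```python
import torch
import torch.nn as nn

def round_ste(v):
    """Round with a straight-through gradient (identity on the backward pass)."""
    return v + (torch.round(v) - v).detach()

class LearnableBitwidth(nn.Module):
    """Per-tensor toy: the bit-width is a real-valued trainable parameter that
    sets the quantization grid (illustrative only)."""
    def __init__(self, init_bits=4.0):
        super().__init__()
        self.bits = nn.Parameter(torch.tensor(init_bits))

    def forward(self, x):
        step = 2.0 ** (1.0 - self.bits)      # grid spacing for values in [-1, 1]
        return round_ste(x / step) * step    # clipping omitted for brevity

q = LearnableBitwidth()
x = torch.randn(16)
loss = ((q(x) - x) ** 2).mean() + 1e-2 * q.bits   # accuracy vs. bit-cost trade-off
loss.backward()
print(q.bits.grad)                                # the precision itself gets a gradient
```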
arXiv Detail & Related papers (2024-05-01T17:18:46Z)
- Guaranteed Approximation Bounds for Mixed-Precision Neural Operators [83.64404557466528]
We build on the intuition that neural operator learning inherently induces an approximation error.
We show that our approach reduces GPU memory usage by up to 50% and improves throughput by 58% with little or no reduction in accuracy.
arXiv Detail & Related papers (2023-07-27T17:42:06Z)
- LightNorm: Area and Energy-Efficient Batch Normalization Hardware for On-Device DNN Training [0.31806743741013654]
We present an extremely efficient batch normalization, named LightNorm, and its associated hardware module.
In more detail, we fuse three approximation techniques: i) low bit-precision, ii) range batch normalization, and iii) block floating point.
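A minimal sketch of the range batch normalization component follows; the scaling constant below is chosen for roughly Gaussian activations and is an assumption, and LightNorm's fused low-precision and block-floating-point details are omitted.

```python
import math
import torch

def range_batch_norm(x, gamma, beta, eps=1e-5):
    """Range-BN sketch: use the per-channel value range instead of the variance.
    x: (N, C) activations; gamma, beta: (C,) affine parameters."""
    n = x.shape[0]
    mu = x.mean(dim=0)
    rng = x.max(dim=0).values - x.min(dim=0).values
    scale = rng / (2.0 * math.sqrt(2.0 * math.log(n)))   # ~std for Gaussian activations
    return gamma * (x - mu) / (scale + eps) + beta

x = torch.randn(256, 16)
out = range_batch_norm(x, torch.ones(16), torch.zeros(16))
print(out.std(dim=0)[:4])   # roughly unit scale per channel
```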
arXiv Detail & Related papers (2022-11-04T18:08:57Z)
- Low-Precision Training in Logarithmic Number System using Multiplicative Weight Update [49.948082497688404]
Training large-scale deep neural networks (DNNs) currently requires a significant amount of energy, leading to serious environmental impacts.
One promising approach to reduce the energy costs is representing DNNs with low-precision numbers.
We jointly design a low-precision training framework involving a logarithmic number system (LNS) and a multiplicative weight update training method, termed LNS-Madam.
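A simplified sketch of a multiplicative weight update and its log-domain form is shown below; the exact LNS-Madam update rule and its quantization details differ.

```python
import numpy as np

def madam_step(w, grad, lr=0.01, eps=1e-12):
    """One multiplicative weight update (Madam-style, simplified)."""
    g = grad / (np.sqrt(np.mean(grad ** 2)) + eps)   # normalize the gradient scale
    return w * np.exp(-lr * np.sign(w) * g)          # multiply instead of subtract

def madam_step_lns(sign_w, log2_abs_w, grad, lr=0.01, eps=1e-12):
    """Same update when |w| is stored as a base-2 exponent (the LNS view):
    the multiplication becomes a plain addition to the stored exponent."""
    g = grad / (np.sqrt(np.mean(grad ** 2)) + eps)
    return sign_w, log2_abs_w - lr * sign_w * g / np.log(2.0)

w = np.array([0.5, -0.25, 1.0])
g = np.array([0.1, -0.2, 0.3])
s, e = madam_step_lns(np.sign(w), np.log2(np.abs(w)), g)
print(madam_step(w, g))
print(s * 2.0 ** e)   # identical: the two views are the same update
```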
arXiv Detail & Related papers (2021-06-26T00:32:17Z)
- FTBNN: Rethinking Non-linearity for 1-bit CNNs and Going Beyond [23.5996182207431]
We show that the binarized convolution process becomes increasingly linear as it is driven to minimize the quantization error, which in turn hampers the BNN's discriminative ability.
We re-investigate and tune proper non-linear modules to fix that contradiction, leading to a strong baseline which achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-10-19T08:11:48Z)
- AQD: Towards Accurate Fully-Quantized Object Detection [94.06347866374927]
We propose an Accurate Quantized object Detection solution, termed AQD, to get rid of floating-point computation.
Our AQD achieves comparable or even better performance compared with the full-precision counterpart under extremely low-bit schemes.
arXiv Detail & Related papers (2020-07-14T09:07:29Z)
- Learning Low-rank Deep Neural Networks via Singular Vector Orthogonality Regularization and Singular Value Sparsification [53.50708351813565]
We propose SVD training, the first method to explicitly achieve low-rank DNNs during training without applying SVD on every step.
We empirically show that SVD training can significantly reduce the rank of DNN layers and achieve a greater reduction in computation load at the same accuracy.
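A minimal sketch of the idea, assuming a linear layer kept in factored form with an orthogonality penalty on the singular vectors and an L1 penalty on the singular values; the hyperparameters and exact regularizers are placeholders, not the paper's.

```python
import torch
import torch.nn as nn

class SVDLinear(nn.Module):
    """Linear layer kept in factored form W = U diag(s) V^T throughout training."""
    def __init__(self, out_features, in_features, rank):
        super().__init__()
        self.U = nn.Parameter(torch.randn(out_features, rank) * 0.1)
        self.s = nn.Parameter(torch.ones(rank))
        self.V = nn.Parameter(torch.randn(in_features, rank) * 0.1)

    def forward(self, x):
        return ((x @ self.V) * self.s) @ self.U.t()

    def regularizer(self, ortho_w=1e-2, sparse_w=1e-3):
        eye = torch.eye(self.U.shape[1])
        ortho = ((self.U.t() @ self.U - eye) ** 2).sum() + \
                ((self.V.t() @ self.V - eye) ** 2).sum()   # keep singular vectors orthogonal
        sparse = self.s.abs().sum()                        # drive singular values toward zero
        return ortho_w * ortho + sparse_w * sparse

layer = SVDLinear(64, 128, rank=32)
x = torch.randn(8, 128)
loss = layer(x).pow(2).mean() + layer.regularizer()
loss.backward()                                            # trains the factors directly
```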
arXiv Detail & Related papers (2020-04-20T02:40:43Z)
- Gradient Centralization: A New Optimization Technique for Deep Neural Networks [74.935141515523]
Gradient centralization (GC) operates directly on gradients by centralizing the gradient vectors to have zero mean.
GC can be viewed as a projected gradient descent method with a constrained loss function.
GC is very simple to implement and can be easily embedded into existing gradient-based DNN optimizers with only one line of code.
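A sketch of that one-line centering step, applied here to a weight gradient outside of any optimizer:

```python
import torch

def centralize_gradient(grad):
    """Subtract the mean over all dimensions except the output channel."""
    if grad.dim() > 1:
        grad = grad - grad.mean(dim=tuple(range(1, grad.dim())), keepdim=True)
    return grad

# e.g. a conv weight gradient of shape (out_channels, in_channels, kH, kW)
g = torch.randn(16, 3, 3, 3)
print(centralize_gradient(g).view(16, -1).mean(dim=1))   # ~0 for every filter
```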
arXiv Detail & Related papers (2020-04-03T10:25:00Z)
- Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantized neural networks (QNNs) are very attractive to industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features of the original full-precision networks into high-dimensional quantization features.
arXiv Detail & Related papers (2020-02-03T04:11:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.