NITI: Training Integer Neural Networks Using Integer-only Arithmetic
- URL: http://arxiv.org/abs/2009.13108v2
- Date: Fri, 11 Feb 2022 11:08:39 GMT
- Title: NITI: Training Integer Neural Networks Using Integer-only Arithmetic
- Authors: Maolin Wang, Seyedramin Rasoulinezhad, Philip H.W. Leong, Hayden K.H. So
- Abstract summary: We present NITI, an efficient deep neural network training framework that computes exclusively with integer arithmetic.
A proof-of-concept open-source software implementation of NITI that utilizes native 8-bit integer operations is presented.
NITI achieves negligible accuracy degradation on the MNIST and CIFAR10 datasets using 8-bit integer storage and computation.
- Score: 4.361357921751159
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While integer arithmetic has been widely adopted for improved performance in
deep quantized neural network inference, training remains a task primarily
executed using floating point arithmetic. This is because both high dynamic
range and numerical accuracy are central to the success of most modern training
algorithms. However, due to its potential for computational, storage and energy
advantages in hardware accelerators, neural network training methods that can
be implemented with low precision integer-only arithmetic remain an active
research challenge. In this paper, we present NITI, an efficient deep neural
network training framework that stores all parameters and intermediate values
as integers, and computes exclusively with integer arithmetic. A pseudo
stochastic rounding scheme that eliminates the need for external random number
generation is proposed to facilitate conversion from wider intermediate results
to low precision storage. Furthermore, a cross-entropy loss backpropagation
scheme computed with integer-only arithmetic is proposed. A proof-of-concept
open-source software implementation of NITI that utilizes native 8-bit integer
operations in modern GPUs to achieve end-to-end training is presented. When
compared with an equivalent training setup implemented with floating point
storage and arithmetic, NITI achieves negligible accuracy degradation on the
MNIST and CIFAR10 datasets using 8-bit integer storage and computation. On
ImageNet, 16-bit integers are needed for weight accumulation with an 8-bit
datapath. This achieves training results comparable to all-floating-point
implementations.
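To make the two integer-only mechanisms described above more concrete, the sketches below show one plausible way they could be realized in plain NumPy. They are illustrations under stated assumptions, not the paper's reference implementation (the released code uses native 8-bit integer PyTorch/GPU kernels), and the helper names are invented for this sketch. First, pseudo stochastic rounding from a wide int32 accumulator to int8 storage; the assumption here is that the discarded low-order bits double as the random draw, so no external random number generator is needed:

```python
import numpy as np

def pseudo_stochastic_round_shift(acc: np.ndarray, shift: int) -> np.ndarray:
    """Round an int32 accumulator down to int8 by discarding `shift` bits,
    reusing the discarded bits themselves in place of an external RNG.

    Sketch only: the discarded bits are split into an upper "fraction" field
    and a lower "pseudo-random" field, and the magnitude is rounded up
    whenever fraction > pseudo-random. `shift` is assumed even so that both
    fields have the same width.
    """
    acc = acc.astype(np.int64)                 # headroom for the bit manipulation
    sign = np.sign(acc)
    mag = np.abs(acc)

    kept = mag >> shift                        # truncated magnitude
    dropped = mag & ((1 << shift) - 1)         # bits being thrown away

    half = shift // 2
    frac = dropped >> half                     # upper half: rounding fraction
    rand = dropped & ((1 << half) - 1)         # lower half: pseudo-random draw

    rounded = kept + (frac > rand)             # probabilistic round-up
    return np.clip(sign * rounded, -127, 127).astype(np.int8)  # saturate to int8


# Example: squeeze int32 partial sums into int8 storage with an 8-bit shift.
acc = np.array([23456, -7890, 512, 40], dtype=np.int32)
print(pseudo_stochastic_round_shift(acc, shift=8))
```

Second, a cross-entropy backpropagation step evaluated entirely with integer arithmetic, assuming exp() is approximated by a base-2 exponential so the softmax reduces to shifts and one integer division (the fixed-point scale of the returned gradient is an arbitrary choice for this sketch):

```python
def int_softmax_xent_grad(logits: np.ndarray, label: int, out_bits: int = 8) -> np.ndarray:
    """Integer-only approximation of d(cross-entropy)/d(logits).

    Returns roughly 2**out_bits * (softmax(logits) - onehot(label)) as integers.
    """
    z = logits.astype(np.int64)
    shift = np.minimum(z.max() - z, 24)        # 2**(z - max) as right shifts,
                                               # clamped so tiny terms become 0
    num = np.right_shift(1 << 24, shift)       # integer softmax numerators
    denom = num.sum()                          # common denominator
    grad = (num << out_bits) // denom          # fixed-point softmax(z)
    grad[label] -= 1 << out_bits               # subtract the one-hot target
    return grad                                # integer gradient, scale 2**out_bits


# Example: int8 logits of a 3-class problem, true class 0.
print(int_softmax_xent_grad(np.array([5, 2, -3], dtype=np.int8), label=0))
```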
Related papers
- NITRO-D: Native Integer-only Training of Deep Convolutional Neural Networks [2.6230959823681834]
This work introduces NITRO-D, a new framework for training arbitrarily deep integer-only Convolutional Neural Networks (CNNs).
NITRO-D is the first framework in the literature enabling the training of integer-only CNNs without the need to introduce a quantization scheme.
arXiv Detail & Related papers (2024-07-16T13:16:49Z)
- Guaranteed Approximation Bounds for Mixed-Precision Neural Operators [83.64404557466528]
We build on the intuition that neural operator learning inherently induces an approximation error.
We show that our approach reduces GPU memory usage by up to 50% and improves throughput by 58% with little or no reduction in accuracy.
arXiv Detail & Related papers (2023-07-27T17:42:06Z)
- Efficient Dataset Distillation Using Random Feature Approximation [109.07737733329019]
We propose a novel algorithm that uses a random feature approximation (RFA) of the Neural Network Gaussian Process (NNGP) kernel.
Our algorithm provides at least a 100-fold speedup over KIP and can run on a single GPU.
Our new method, termed RFA Distillation (RFAD), performs competitively with KIP and other dataset condensation algorithms in accuracy over a range of large-scale datasets.
arXiv Detail & Related papers (2022-10-21T15:56:13Z)
- Is Integer Arithmetic Enough for Deep Learning Training? [2.9136421025415205]
Replacing floating-point arithmetic with low-bit integer arithmetic is a promising approach to saving energy and reducing the memory footprint and latency of deep learning models.
We propose a fully functional integer training pipeline including forward pass, back-propagation, and gradient descent.
Our experimental results show that our proposed method is effective in a wide variety of tasks such as classification (including vision transformers), object detection, and semantic segmentation.
arXiv Detail & Related papers (2022-07-18T22:36:57Z)
- Efficient and Robust Mixed-Integer Optimization Methods for Training Binarized Deep Neural Networks [0.07614628596146598]
We study deep neural networks with binary activation functions and continuous or integer weights (BDNN).
We show that the BDNN can be reformulated as a mixed-integer linear program with bounded weight space which can be solved to global optimality by classical mixed-integer programming solvers.
For the first time, a robust model is presented that enforces robustness of the BDNN during training.
arXiv Detail & Related papers (2021-10-21T18:02:58Z)
- Quantized Neural Networks via {-1, +1} Encoding Decomposition and Acceleration [83.84684675841167]
We propose a novel encoding scheme using {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks.
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
arXiv Detail & Related papers (2021-06-18T03:11:15Z)
- A Survey of Quantization Methods for Efficient Neural Network Inference [75.55159744950859]
Quantization is the problem of distributing continuous real-valued numbers over a fixed discrete set of numbers to minimize the number of bits required (a generic sketch of this mapping appears after this list).
It has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas.
Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x.
arXiv Detail & Related papers (2021-03-25T06:57:11Z)
- HAWQV3: Dyadic Neural Network Quantization [73.11579145354801]
Current low-precision quantization algorithms often have the hidden cost of conversion back and forth from floating point to quantized integer values.
We present HAWQV3, a novel mixed-precision integer-only quantization framework.
arXiv Detail & Related papers (2020-11-20T23:51:43Z)
- Efficient Integer-Arithmetic-Only Convolutional Neural Networks [87.01739569518513]
We replace the conventional ReLU with a Bounded ReLU and find that the accuracy decline is due to activation quantization.
Our integer networks achieve performance equivalent to the corresponding FPN networks, but have only 1/4 the memory cost and run 2x faster on modern GPUs.
arXiv Detail & Related papers (2020-06-21T08:23:03Z)
- Shifted and Squeezed 8-bit Floating Point format for Low-Precision Training of Deep Neural Networks [13.929168096016957]
We introduce a novel methodology for training deep neural networks using 8-bit floating point (FP8) numbers.
Reduced bit precision allows for a larger effective memory and increased computational speed.
We show that, unlike previous 8-bit precision training methods, the proposed method works out-of-the-box for representative models.
arXiv Detail & Related papers (2020-01-16T06:38:27Z)
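As a generic illustration of the quantization mapping described in the survey entry above (not that survey's specific algorithm, and not NITI's training scheme), here is a minimal symmetric uniform quantizer in NumPy; the per-tensor scale, 4-bit signed range, and saturation behaviour are assumptions chosen for clarity:

```python
import numpy as np

def quantize_symmetric(x: np.ndarray, bits: int = 4):
    """Map real values onto a fixed set of 2**bits - 1 signed integer levels."""
    qmax = 2 ** (bits - 1) - 1                          # e.g. 7 for 4-bit signed
    scale = max(float(np.abs(x).max()) / qmax, 1e-12)   # per-tensor scale factor
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate real values from the integer codes."""
    return q.astype(np.float32) * scale

x = np.random.randn(8).astype(np.float32)
q, s = quantize_symmetric(x, bits=4)
print(q)                                    # integer codes in [-7, 7]
print(np.abs(dequantize(q, s) - x).max())   # small quantization error
```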
This list is automatically generated from the titles and abstracts of the papers on this site.