Enabling On-device Continual Learning with Binary Neural Networks
- URL: http://arxiv.org/abs/2401.09916v1
- Date: Thu, 18 Jan 2024 11:57:05 GMT
- Title: Enabling On-device Continual Learning with Binary Neural Networks
- Authors: Lorenzo Vorabbi, Davide Maltoni, Guido Borghi, Stefano Santi
- Abstract summary: We propose a solution that combines recent advancements in the field of Continual Learning (CL) and Binary Neural Networks (BNNs)
Specifically, our approach leverages binary latent replay activations and a novel quantization scheme that significantly reduces the number of bits required for gradient computation.
- Score: 3.180732240499359
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: On-device learning remains a formidable challenge, especially when dealing
with resource-constrained devices that have limited computational capabilities.
This challenge is primarily rooted in two key issues: first, the memory
available on embedded devices is typically insufficient to accommodate the
memory-intensive back-propagation algorithm, which often relies on
floating-point precision. Second, designing learning algorithms for models with
extreme quantization levels, such as Binary Neural Networks (BNNs), is
especially difficult due to the drastic reduction in bit representation. In this study,
we propose a solution that combines recent advancements in the field of
Continual Learning (CL) and Binary Neural Networks to enable on-device training
while maintaining competitive performance. Specifically, our approach leverages
binary latent replay (LR) activations and a novel quantization scheme that
significantly reduces the number of bits required for gradient computation. The
experimental validation demonstrates a significant accuracy improvement together
with a noticeable reduction in memory requirements, confirming the suitability of
our approach for extending the practical applications of deep learning to
real-world scenarios.
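As a rough illustration of the binary latent replay idea described above, the sketch below stores sign-binarized latent activations from a frozen backbone in a small replay buffer and mixes them with current-task latents when updating the classification head. It assumes PyTorch; the layer sizes, buffer policy, and the backbone/head split are illustrative placeholders rather than the paper's actual architecture, and the paper's reduced-bit gradient quantization scheme is not shown.

```python
# Minimal sketch of continual learning with a binary latent replay buffer,
# assuming PyTorch. Layer sizes, the backbone/head split, and the buffer
# policy are illustrative placeholders, not the paper's actual design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BinaryLatentReplayBuffer:
    """Stores sign-binarized latent activations (+1/-1) together with labels."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.latents, self.labels = [], []

    def add(self, z: torch.Tensor, y: torch.Tensor):
        # Binarize before storing: conceptually 1 bit per activation value.
        z_bin = torch.sign(z).to(torch.int8)
        for zi, yi in zip(z_bin, y):
            if len(self.latents) >= self.capacity:
                self.latents.pop(0)
                self.labels.pop(0)
            self.latents.append(zi)
            self.labels.append(yi)

    def sample(self, n: int):
        idx = torch.randperm(len(self.latents))[:n]
        z = torch.stack([self.latents[i] for i in idx]).float()
        y = torch.stack([self.labels[i] for i in idx])
        return z, y


# Frozen feature extractor (stand-in for the binary backbone) and a trainable head.
backbone = nn.Linear(64, 128).eval()
for p in backbone.parameters():
    p.requires_grad_(False)
head = nn.Linear(128, 10)
opt = torch.optim.SGD(head.parameters(), lr=0.01)
buffer = BinaryLatentReplayBuffer(capacity=512)


def train_step(x: torch.Tensor, y: torch.Tensor) -> float:
    with torch.no_grad():
        z = torch.sign(backbone(x))  # binary latent activations of the current batch
    buffer.add(z, y)
    z_rep, y_rep = buffer.sample(min(32, len(buffer.latents)))
    z_all, y_all = torch.cat([z, z_rep]), torch.cat([y, y_rep])
    loss = F.cross_entropy(head(z_all), y_all)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


# Toy usage on a fake batch from a hypothetical current task.
x = torch.randn(16, 64)
y = torch.randint(0, 10, (16,))
print(train_step(x, y))
```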
Related papers
- On-Device Learning with Binary Neural Networks [2.7040098749051635]
We propose a CL solution that embraces recent advancements in the CL field and the efficiency of Binary Neural Networks (BNNs).
The choice of a binary network as the backbone is essential to meet the constraints of low-power devices.
arXiv Detail & Related papers (2023-08-29T13:48:35Z) - Binary stochasticity enabled highly efficient neuromorphic deep learning achieves better-than-software accuracy [17.11946381948498]
Deep learning needs high-precision handling of forward-propagated signals, backpropagated errors, and weight updates.
It is challenging to implement deep learning in hardware systems that use noisy analog memristors as artificial synapses.
We propose a binary learning algorithm that modifies all elementary neural network operations.
arXiv Detail & Related papers (2023-04-25T14:38:36Z) - Training Integer-Only Deep Recurrent Neural Networks [3.1829446824051195]
We present a quantization-aware training method for obtaining a highly accurate integer-only recurrent neural network (iRNN).
Our approach supports layer normalization, attention, and an adaptive piecewise linear (PWL) approximation of activation functions (a generic PWL approximation is sketched after this list).
The proposed method enables RNN-based language models to run on edge devices with a $2\times$ improvement in runtime.
arXiv Detail & Related papers (2022-12-22T15:22:36Z) - Neural Networks with Quantization Constraints [111.42313650830248]
We present a constrained learning approach to quantization-aware training.
We show that the resulting problem is strongly dual and does away with gradient estimations.
We demonstrate that the proposed approach exhibits competitive performance in image classification tasks.
arXiv Detail & Related papers (2022-10-27T17:12:48Z) - Distribution-sensitive Information Retention for Accurate Binary Neural Network [49.971345958676196]
We present a novel Distribution-sensitive Information Retention Network (DIR-Net) to retain the information of the forward activations and backward gradients.
Our DIR-Net consistently outperforms SOTA binarization approaches on mainstream and compact architectures.
We deploy DIR-Net on real-world resource-limited devices, achieving an 11.1$\times$ storage saving and a 5.4$\times$ speedup.
arXiv Detail & Related papers (2021-09-25T10:59:39Z) - Quantized Neural Networks via {-1, +1} Encoding Decomposition and Acceleration [83.84684675841167]
We propose a novel encoding scheme using {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks (a toy version of this decomposition is sketched after this list).
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
arXiv Detail & Related papers (2021-06-18T03:11:15Z) - Enabling Binary Neural Network Training on the Edge [7.32770338248516]
Existing binary neural network training methods require concurrent storage of high-precision activations for all layers.
We introduce a low-cost binary neural network training strategy exhibiting sizable memory footprint reductions.
We also demonstrate from-scratch ImageNet training of binarized ResNet-18, achieving a 3.78$\times$ memory reduction.
arXiv Detail & Related papers (2021-02-08T15:06:41Z) - Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
arXiv Detail & Related papers (2020-12-31T18:48:58Z) - Binary Neural Networks: A Survey [126.67799882857656]
The binary neural network serves as a promising technique for deploying deep models on resource-limited devices.
The binarization inevitably causes severe information loss, and even worse, its discontinuity brings difficulty to the optimization of the deep network.
We present a survey of these algorithms, mainly categorized into the native solutions directly conducting binarization, and the optimized ones using techniques like minimizing the quantization error, improving the network loss function, and reducing the gradient error.
arXiv Detail & Related papers (2020-03-31T16:47:20Z) - Exploring the Connection Between Binary and Spiking Neural Networks [1.329054857829016]
We bridge the recent algorithmic progress in training Binary Neural Networks and Spiking Neural Networks.
We show that training Spiking Neural Networks in the extreme quantization regime results in near full precision accuracies on large-scale datasets.
arXiv Detail & Related papers (2020-02-24T03:46:51Z) - Towards Unified INT8 Training for Convolutional Neural Network [83.15673050981624]
We build a unified 8-bit (INT8) training framework for common convolutional neural networks.
First, we empirically identify four distinctive characteristics of gradients, which provide insightful clues for gradient quantization.
We then propose two universal techniques, including Direction Sensitive Gradient Clipping, which reduces the directional deviation of gradients.
arXiv Detail & Related papers (2019-12-29T08:37:53Z)
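For the integer-only RNN entry above, the sketch below approximates tanh with a fixed piecewise linear (PWL) interpolation between evenly spaced knots. The "adaptive" part of the cited method (learned breakpoints) is not reproduced; the knot range and count here are arbitrary choices for illustration only.

```python
# Hedged sketch of a piecewise linear (PWL) approximation of an activation
# function (tanh), assuming PyTorch. The cited iRNN method uses an *adaptive*
# PWL with learned breakpoints; here the knots are simply fixed and evenly spaced.
import torch


def pwl_tanh(x: torch.Tensor, knots: torch.Tensor) -> torch.Tensor:
    """Linearly interpolate tanh between the given (sorted) knot positions."""
    y_knots = torch.tanh(knots)
    x_c = x.clamp(float(knots[0]), float(knots[-1]))
    # Segment index for each input value.
    idx = torch.searchsorted(knots, x_c).clamp(1, len(knots) - 1)
    x0, x1 = knots[idx - 1], knots[idx]
    y0, y1 = y_knots[idx - 1], y_knots[idx]
    return y0 + (y1 - y0) * (x_c - x0) / (x1 - x0)


knots = torch.linspace(-4.0, 4.0, steps=17)  # 16 linear segments over [-4, 4]
x = torch.linspace(-6.0, 6.0, steps=1000)
err = (pwl_tanh(x, knots) - torch.tanh(x)).abs().max().item()
print(f"max |error| with 16 segments: {err:.4f}")  # a few hundredths for this setup
```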
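For the {-1, +1} encoding decomposition entry above, the following toy sketch shows one way a k-bit unsigned integer weight tensor can be split into k binary {-1, +1} planes plus a constant offset, so that a matrix product becomes a sum of binary matrix products. This is only a hedged illustration of the general multi-branch idea in PyTorch; the scaling and offset conventions in the cited paper may differ.

```python
# Hedged numeric sketch of a {-1, +1} encoding decomposition, assuming unsigned
# k-bit integer weights and PyTorch. The exact scaling/offset convention in the
# cited paper may differ; this only illustrates the multi-branch idea.
import torch


def decompose(w_int: torch.Tensor, k: int):
    """Split k-bit unsigned integers into k binary {-1, +1} planes plus an offset:
    w = 0.5 * sum_i 2^i * e_i + (2^k - 1) / 2, with e_i in {-1, +1}."""
    planes = [2.0 * ((w_int >> i) & 1) - 1.0 for i in range(k)]
    offset = (2 ** k - 1) / 2.0
    return planes, offset


k = 4
w_int = torch.randint(0, 2 ** k, (8, 16))  # fake 4-bit quantized weight matrix
planes, offset = decompose(w_int, k)

# The decomposition is exact.
recon = 0.5 * sum((2 ** i) * e for i, e in enumerate(planes)) + offset
assert torch.equal(recon, w_int.float())

# A product with a binary {-1, +1} input splits into k binary matrix products
# (each mappable to XNOR + popcount on suitable hardware) plus a correction term.
x = torch.sign(torch.randn(16, 5))
direct = w_int.float() @ x
branched = 0.5 * sum((2 ** i) * (e @ x) for i, e in enumerate(planes)) + offset * x.sum(dim=0)
assert torch.allclose(direct, branched)
print("decomposition and branched matmul verified")
```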