On-Device Learning with Binary Neural Networks
- URL: http://arxiv.org/abs/2308.15308v1
- Date: Tue, 29 Aug 2023 13:48:35 GMT
- Title: On-Device Learning with Binary Neural Networks
- Authors: Lorenzo Vorabbi, Davide Maltoni, Stefano Santi
- Abstract summary: We propose a CL solution that embraces recent advancements in the CL field and the efficiency of Binary Neural Networks (BNNs).
The choice of a binary network as backbone is essential to meet the constraints of low-power devices.
- Score: 2.7040098749051635
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing Continual Learning (CL) solutions only partially address the constraints on power, memory and computation of deep learning models deployed on low-power embedded CPUs. In this paper, we propose a CL solution that embraces recent advancements in the CL field and the efficiency of Binary Neural Networks (BNNs), which use 1-bit weights and activations to execute deep learning models efficiently. We propose a hybrid quantization of CWR* (an effective CL approach) that treats the forward and backward passes differently, so as to retain more precision during the gradient update step while minimizing the latency overhead. The choice of a binary network as backbone is essential to meet the constraints of low-power devices and, to the best of the authors' knowledge, this is the first attempt to demonstrate on-device learning with BNNs. The experimental validation confirms the validity and suitability of the proposed method.
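To make the idea concrete, here is a minimal, hypothetical sketch (in PyTorch, not the authors' code) of the general pattern behind such hybrid forward/backward quantization: the forward pass uses 1-bit weights obtained with sign(), while the gradient update is applied to a higher-precision latent copy of the weights via a straight-through estimator. Class and parameter names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through gradient estimator."""

    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)  # 1-bit weights used in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        (w,) = ctx.saved_tensors
        # Pass the gradient through where |w| <= 1, so the update is applied
        # to the higher-precision latent weights rather than the binary ones.
        return grad_output * (w.abs() <= 1).to(grad_output.dtype)

class BinaryLinear(nn.Module):
    """Linear layer with a binary forward pass and higher-precision latent weights."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.latent_weight = nn.Parameter(0.01 * torch.randn(out_features, in_features))

    def forward(self, x):
        return F.linear(x, BinarizeSTE.apply(self.latent_weight))
```

In CWR*-style continual learning, typically only the final classification layer is updated on-device while the backbone stays frozen; the sketch shows only the generic binarized-forward / higher-precision-update pattern, not the paper's exact scheme.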
Related papers
- Enabling On-device Continual Learning with Binary Neural Networks [3.180732240499359]
We propose a solution that combines recent advancements in the field of Continual Learning (CL) and Binary Neural Networks (BNNs).
Specifically, our approach leverages binary latent replay activations and a novel quantization scheme that significantly reduces the number of bits required for gradient computation.
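As an illustration of the latent-replay idea (a generic PyTorch sketch, not the paper's implementation and not its exact quantization scheme), past activations at an intermediate layer can be stored in binarized form, which shrinks the replay memory, and replayed alongside new data when updating the layers above the replay point. All names below are hypothetical.

```python
import torch

class BinaryLatentReplayBuffer:
    """Stores 1-bit intermediate activations (kept as int8 in {-1, +1}) for replay."""

    def __init__(self, capacity, feat_dim):
        self.acts = torch.empty(capacity, feat_dim, dtype=torch.int8)
        self.labels = torch.empty(capacity, dtype=torch.long)
        self.size = 0

    def add(self, latent, labels):
        # Binarize the incoming activations and append until the buffer is full.
        n = min(latent.shape[0], self.acts.shape[0] - self.size)
        self.acts[self.size:self.size + n] = torch.sign(latent[:n]).to(torch.int8)
        self.labels[self.size:self.size + n] = labels[:n]
        self.size += n

    def sample(self, batch_size):
        # Mix these with the current batch's latent activations during training.
        idx = torch.randint(0, self.size, (batch_size,))
        return self.acts[idx].to(torch.float32), self.labels[idx]
```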
arXiv Detail & Related papers (2024-01-18T11:57:05Z) - Fixing the NTK: From Neural Network Linearizations to Exact Convex Programs [63.768739279562105]
We show that for a particular choice of mask weights that do not depend on the learning targets, this kernel is equivalent to the NTK of the gated ReLU network on the training data.
A consequence of this lack of dependence on the targets is that the NTK cannot perform better than the optimal MKL kernel on the training set.
arXiv Detail & Related papers (2023-09-26T17:42:52Z) - Compacting Binary Neural Networks by Sparse Kernel Selection [58.84313343190488]
This paper is motivated by a previously revealed phenomenon that the binary kernels in successful BNNs are nearly power-law distributed.
We develop the Permutation Straight-Through Estimator (PSTE) that is able to not only optimize the selection process end-to-end but also maintain the non-repetitive occupancy of selected codewords.
Experiments verify that our method reduces both the model size and bit-wise computational costs, and achieves accuracy improvements compared with state-of-the-art BNNs under comparable budgets.
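A rough sketch of the underlying mechanism (generic codebook selection with a straight-through gradient, written in PyTorch): each convolution kernel is chosen from a small set of binary codewords via learnable logits. This is not PSTE itself; in particular, the permutation-based handling of non-repetitive codeword occupancy is not reproduced, and all names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CodebookKernelSelect(nn.Module):
    """Selects each binary kernel from a fixed codebook of {-1, +1} codewords."""

    def __init__(self, num_kernels, codebook_size, kernel_numel):
        super().__init__()
        # Fixed codebook of binary codewords in {-1, +1}
        self.register_buffer("codebook",
                             torch.sign(torch.randn(codebook_size, kernel_numel)))
        # One selection logit vector per kernel
        self.logits = nn.Parameter(torch.zeros(num_kernels, codebook_size))

    def forward(self):
        probs = torch.softmax(self.logits, dim=-1)
        hard = F.one_hot(probs.argmax(dim=-1), probs.shape[-1]).float()
        # Straight-through: hard selection in the forward pass, soft gradient backward
        select = hard + probs - probs.detach()
        return select @ self.codebook  # (num_kernels, kernel_numel) binary kernels
```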
arXiv Detail & Related papers (2023-03-25T13:53:02Z) - Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
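For context, the standard interval bound propagation step that QA-IBP builds on can be sketched as follows (a generic PyTorch snippet, not the paper's quantization-aware formulation): bounds on the inputs of a linear layer are propagated to bounds on its outputs.

```python
import torch

def linear_interval_bounds(lower, upper, weight, bias):
    """Propagate elementwise bounds [lower, upper] through x @ weight.T + bias."""
    mid = (upper + lower) / 2          # interval centers
    rad = (upper - lower) / 2          # interval radii (non-negative)
    mid_out = mid @ weight.t() + bias
    rad_out = rad @ weight.abs().t()   # worst-case growth of the radius
    return mid_out - rad_out, mid_out + rad_out
```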
arXiv Detail & Related papers (2022-11-29T13:32:38Z) - Recurrent Bilinear Optimization for Binary Neural Networks [58.972212365275595]
Existing BNNs neglect the intrinsic bilinear relationship between real-valued weights and scale factors.
Our work is the first attempt to optimize BNNs from the bilinear perspective.
We obtain robust RBONNs, which show impressive performance over state-of-the-art BNNs on various models and datasets.
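The bilinear coupling referred to above can be seen in the common scaled-binarization baseline, sketched below in PyTorch: the weights are approximated as alpha * sign(W), so the scale factors and the latent real-valued weights interact bilinearly. This is a generic illustration, not RBONN's recurrent bilinear optimization, and the names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaledBinaryConv2d(nn.Module):
    """Convolution whose weights are approximated as alpha * sign(W)."""

    def __init__(self, in_ch, out_ch, k, stride=1, padding=0):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(out_ch, in_ch, k, k))
        self.stride, self.padding = stride, padding

    def forward(self, x):
        w = self.weight
        # Per-output-channel scale factor: alpha = mean(|W|)
        alpha = w.abs().mean(dim=(1, 2, 3), keepdim=True)
        bin_w = torch.sign(w)
        # Straight-through estimator so gradients reach the latent real-valued weights
        w_hat = alpha * (bin_w + w - w.detach())
        return F.conv2d(x, w_hat, stride=self.stride, padding=self.padding)
```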
arXiv Detail & Related papers (2022-09-04T06:45:33Z) - Binary Early-Exit Network for Adaptive Inference on Low-Resource Devices [3.591566487849146]
Binary neural networks (BNNs) address the resource constraints of low-resource devices with extreme compression and speed-up gains compared to real-valued models.
We propose a simple but effective method to accelerate inference through unifying BNNs with an early-exiting strategy.
Our approach allows simple instances to exit early based on a decision threshold and utilizes output layers added to different intermediate layers to avoid executing the entire binary model.
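A minimal sketch of such an early-exit wrapper (hypothetical PyTorch code, simplified to a per-batch decision rather than a per-sample one): auxiliary classifiers are attached to intermediate blocks, and inference stops as soon as the softmax confidence passes a threshold.

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    def __init__(self, blocks, exits, threshold=0.9):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)  # e.g. binary convolutional blocks
        self.exits = nn.ModuleList(exits)    # one lightweight classifier per block
        self.threshold = threshold

    @torch.no_grad()
    def forward(self, x):
        logits = None
        for block, head in zip(self.blocks, self.exits):
            x = block(x)
            logits = head(x)
            confidence = torch.softmax(logits, dim=-1).max(dim=-1).values
            if bool((confidence >= self.threshold).all()):
                return logits  # confident enough: skip the remaining (deeper) blocks
        return logits          # fell through: the last exit gives the final prediction
```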
arXiv Detail & Related papers (2022-06-17T22:11:11Z) - S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration [74.5509794733707]
We present a novel guided learning paradigm that distills binary networks from real-valued networks based on the final prediction distribution.
Our proposed method can boost the simple contrastive learning baseline by an absolute gain of 5.515% on BNNs.
Our method achieves substantial improvement over the simple contrastive learning baseline, and is even comparable to many mainstream supervised BNN methods.
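The general mechanism of distilling on the final prediction distribution can be sketched as follows (a standard knowledge-distillation loss in PyTorch, not the specific S2-BNN calibration procedure; the temperature value is an arbitrary example).

```python
import torch.nn.functional as F

def distribution_distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL divergence between softened teacher and student prediction distributions."""
    t = temperature
    teacher_prob = F.softmax(teacher_logits / t, dim=-1)
    student_logprob = F.log_softmax(student_logits / t, dim=-1)
    # KL(teacher || student), scaled by t^2 as is conventional for distillation
    return F.kl_div(student_logprob, teacher_prob, reduction="batchmean") * (t * t)
```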
arXiv Detail & Related papers (2021-02-17T18:59:28Z) - BiSNN: Training Spiking Neural Networks with Binary Weights via Bayesian Learning [37.376989855065545]
Spiking Neural Networks (SNNs) are biologically inspired, dynamic, event-driven models that enhance energy efficiency.
An SNN model is introduced that combines the benefits of temporally sparse binary activations and of binary weights.
Experiments quantify the performance loss with respect to full-precision implementations.
arXiv Detail & Related papers (2020-12-15T14:06:36Z) - FTBNN: Rethinking Non-linearity for 1-bit CNNs and Going Beyond [23.5996182207431]
We show that the binarized convolution process exhibits increasing linearity towards the target of minimizing the quantization error, which in turn hampers the BNN's discriminative ability.
We re-investigate and tune proper non-linear modules to resolve this contradiction, leading to a strong baseline that achieves state-of-the-art performance.
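As a purely illustrative example of placing a tunable non-linear module after a (binarized) convolution, the block below uses a standard learnable PReLU; the paper studies and proposes its own non-linear modules, which are not reproduced here.

```python
import torch.nn as nn

def block_with_tuned_nonlinearity(in_ch, out_ch):
    # In a real BNN the Conv2d below would be a binarized (sign-based) convolution;
    # a plain Conv2d stands in here only to keep the sketch self-contained.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.PReLU(out_ch),  # learnable non-linear module placed after the convolution
    )
```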
arXiv Detail & Related papers (2020-10-19T08:11:48Z) - Continual Learning in Recurrent Neural Networks [67.05499844830231]
We evaluate the effectiveness of continual learning methods for processing sequential data with recurrent neural networks (RNNs).
We shed light on the particularities that arise when applying weight-importance methods, such as elastic weight consolidation, to RNNs.
We show that the performance of weight-importance methods is not directly affected by the length of the processed sequences, but rather by high working memory requirements.
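For reference, the elastic weight consolidation penalty mentioned above has the following generic form (a PyTorch sketch; `old_params` and `fisher` are assumed to be dictionaries saved after the previous task, and the regularization strength `lam` is arbitrary).

```python
import torch

def ewc_penalty(model, old_params, fisher, lam=1.0):
    """Quadratic penalty anchoring important parameters to their previous-task values."""
    loss = torch.zeros(())
    for name, p in model.named_parameters():
        if name in fisher:
            loss = loss + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return lam * loss
```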
arXiv Detail & Related papers (2020-06-22T10:05:12Z)