Efficient Fine-Tuning of BERT Models on the Edge
- URL: http://arxiv.org/abs/2205.01541v1
- Date: Tue, 3 May 2022 14:51:53 GMT
- Title: Efficient Fine-Tuning of BERT Models on the Edge
- Authors: Danilo Vucetic, Mohammadreza Tayaranian, Maryam Ziaeefard, James J.
Clark, Brett H. Meyer and Warren J. Gross
- Abstract summary: We propose Freeze And Reconfigure (FAR), a memory-efficient training regime for BERT-like models.
FAR reduces fine-tuning time on the DistilBERT model and CoLA dataset by 30%, and time spent on memory operations by 47%.
More broadly, reductions in metric performance on the GLUE and SQuAD datasets are around 1% on average.
- Score: 12.768368718187428
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Resource-constrained devices are increasingly the deployment targets of
machine learning applications. Static models, however, do not always suffice
for dynamic environments. On-device training of models allows for quick
adaptability to new scenarios. With the increasing size of deep neural
networks, as noted with the likes of BERT and other natural language processing
models, come increased resource requirements, namely memory, computation,
energy, and time. Furthermore, training is far more resource intensive than
inference. Resource-constrained on-device learning is thus doubly difficult,
especially with large BERT-like models. By reducing the memory usage of
fine-tuning, pre-trained BERT models can become efficient enough to fine-tune
on resource-constrained devices. We propose Freeze And Reconfigure (FAR), a
memory-efficient training regime for BERT-like models that reduces the memory
usage of activation maps during fine-tuning by avoiding unnecessary parameter
updates. FAR reduces fine-tuning time on the DistilBERT model and CoLA dataset
by 30%, and time spent on memory operations by 47%. More broadly, reductions in
metric performance on the GLUE and SQuAD datasets are around 1% on average.
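The method described above boils down to updating only a subset of the network during fine-tuning, so that gradients, optimizer state, and some saved activations for the frozen part are never materialized. The following is a minimal PyTorch sketch of that general idea on DistilBERT, assuming the Hugging Face transformers API; it freezes whole feed-forward sub-layers rather than reproducing FAR's finer-grained node selection and reconfiguration, and the FREEZE_FRACTION knob is purely illustrative.

```python
# Minimal sketch of freeze-during-fine-tuning on DistilBERT (not the authors' FAR code).
# A frozen parameter gets no gradient from autograd, and because the optimizer is built
# only over trainable parameters, AdamW keeps no moment buffers for it either.
import torch
from transformers import DistilBertForSequenceClassification

FREEZE_FRACTION = 0.6  # illustrative knob: freeze the FFN sub-layer in 60% of the blocks

model = DistilBertForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

blocks = model.distilbert.transformer.layer      # DistilBERT has 6 transformer blocks
n_frozen = int(len(blocks) * FREEZE_FRACTION)

for block in blocks[:n_frozen]:
    # Freeze the feed-forward sub-layer; attention and the classifier head stay trainable.
    for p in block.ffn.parameters():
        p.requires_grad = False

# Only trainable parameters are handed to the optimizer, so no optimizer state is
# allocated for the frozen weights.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=2e-5
)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {trainable}/{total} ({100 * trainable / total:.1f}%)")
```

In principle, a linear layer's input activation is only needed to compute its weight gradient, so freezing weights can also shrink the activation maps that must be kept for backpropagation, the memory the abstract targets, though the exact savings depend on the framework.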
Related papers
- Edge Unlearning is Not "on Edge"! An Adaptive Exact Unlearning System on Resource-Constrained Devices [26.939025828011196]
The right to be forgotten mandates that machine learning models enable the erasure of a data owner's data and information from a trained model.
We propose a Constraint-aware Adaptive Exact Unlearning System at the network Edge (CAUSE) to enable exact unlearning on resource-constrained devices.
arXiv Detail & Related papers (2024-10-14T03:28:09Z)
- Fine-Tuning and Deploying Large Language Models Over Edges: Issues and Approaches [64.42735183056062]
Large language models (LLMs) have transitioned from specialized models to versatile foundation models.
LLMs exhibit impressive zero-shot ability; however, they require fine-tuning on local datasets and significant resources for deployment.
arXiv Detail & Related papers (2024-08-20T09:42:17Z)
- Memory-efficient Energy-adaptive Inference of Pre-Trained Models on Batteryless Embedded Systems [0.0]
Batteryless systems often face power failures, requiring extra runtime buffers to maintain progress and leaving only limited memory space for storing ultra-tiny deep neural networks (DNNs).
We propose FreeML, a framework to optimize pre-trained DNN models for memory-efficient and energy-adaptive inference on batteryless systems.
Our experiments showed that FreeML reduces model sizes by up to $95\times$, supports adaptive inference with $2.03$-$19.65\times$ less memory overhead, and provides significant time and energy benefits with only a negligible accuracy drop compared to the state of the art.
arXiv Detail & Related papers (2024-05-16T20:16:45Z)
- Block Selective Reprogramming for On-device Training of Vision Transformers [12.118303034660531]
We present block selective reprogramming (BSR), in which we fine-tune only a fraction of the total blocks of a pre-trained model.
Compared to existing alternatives, our approach simultaneously reduces training memory by up to 1.4x and compute cost by up to 2x.
arXiv Detail & Related papers (2024-03-25T08:41:01Z)
- Winner-Take-All Column Row Sampling for Memory Efficient Adaptation of Language Model [89.8764435351222]
We propose WTA-CRS, a new family of unbiased estimators for approximating matrix multiplication with reduced variance (a generic column-row sampling sketch appears after this list).
Our work provides both theoretical and experimental evidence that, in the context of tuning transformers, the proposed estimators exhibit lower variance than existing ones.
arXiv Detail & Related papers (2023-05-24T15:52:08Z)
- MTrainS: Improving DLRM training efficiency using heterogeneous memories [5.195887979684162]
In Deep Learning Recommendation Models (DLRM), sparse features capturing categorical inputs through embedding tables are the major contributors to model size and require high memory bandwidth.
In this paper, we study the bandwidth requirement and locality of embedding tables in real-world deployed models.
We then design MTrainS, which hierarchically leverages heterogeneous memory, including byte- and block-addressable Storage Class Memory, for DLRM.
arXiv Detail & Related papers (2023-04-19T06:06:06Z)
- Incremental Online Learning Algorithms Comparison for Gesture and Visual Smart Sensors [68.8204255655161]
This paper compares four state-of-the-art algorithms in two real applications: gesture recognition based on accelerometer data and image classification.
Our results confirm these systems' reliability and the feasibility of deploying them in tiny-memory MCUs.
arXiv Detail & Related papers (2022-09-01T17:05:20Z)
- MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation [68.30497162547768]
We propose MoEBERT, which uses a Mixture-of-Experts structure to increase model capacity and inference speed.
We validate the efficiency and effectiveness of MoEBERT on natural language understanding and question answering tasks.
arXiv Detail & Related papers (2022-04-15T23:19:37Z)
- bert2BERT: Towards Reusable Pretrained Language Models [51.078081486422896]
We propose bert2BERT, which can effectively transfer the knowledge of an existing smaller pre-trained model to a large model.
bert2BERT saves about 45% and 47% of the computational cost of pre-training BERT_BASE and GPT_BASE, respectively, by reusing pre-trained models of almost half their sizes.
arXiv Detail & Related papers (2021-10-14T04:05:25Z)
- RCT: Resource Constrained Training for Edge AI [35.11160947555767]
Existing training methods for compact models are designed to run on powerful servers with abundant memory and energy budget.
We propose Resource Constrained Training (RCT) to mitigate these issues.
RCT keeps only a quantised model, adjusted throughout training, so that the memory requirement for model parameters during training is reduced.
arXiv Detail & Related papers (2021-03-26T14:33:31Z)
- Learning to Augment for Data-Scarce Domain BERT Knowledge Distillation [55.34995029082051]
We propose a method that learns data augmentation for BERT knowledge distillation in data-scarce domains.
We show that the proposed method significantly outperforms state-of-the-art baselines on four different tasks.
arXiv Detail & Related papers (2021-01-20T13:07:39Z)
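As referenced in the Winner-Take-All Column Row Sampling entry above, that line of work builds on column-row sampling (CRS) estimators of a matrix product. The snippet below is a generic NumPy illustration of the classic unbiased CRS estimator (sample k column-row pairs with probability proportional to the product of their norms, then rescale); it is only the basic building block, not the winner-take-all variant from that paper, and the function name crs_matmul is made up for this sketch.

```python
# Generic column-row sampling (CRS) estimate of A @ B: pick k column/row index pairs
# with probability proportional to ||A[:, i]|| * ||B[i, :]|| and rescale each sampled
# outer product by 1 / (k * p_i) so the expectation equals the exact product.
import numpy as np

def crs_matmul(A: np.ndarray, B: np.ndarray, k: int, seed=None) -> np.ndarray:
    """Unbiased k-sample column-row estimate of A @ B."""
    rng = np.random.default_rng(seed)
    scores = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    probs = scores / scores.sum()
    idx = rng.choice(A.shape[1], size=k, replace=True, p=probs)
    scale = 1.0 / (k * probs[idx])          # shape (k,), one weight per sampled pair
    return (A[:, idx] * scale) @ B[idx, :]

# The estimate converges to the exact product as k grows.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))
B = rng.standard_normal((256, 32))
err = np.linalg.norm(crs_matmul(A, B, k=128, seed=0) - A @ B) / np.linalg.norm(A @ B)
print(f"relative error with k=128 of 256 pairs: {err:.3f}")
```

Per its summary, the related paper's contribution is a lower-variance ("winner-take-all") member of this estimator family for use when tuning transformers; the plain estimator above only illustrates the sampling-and-rescaling mechanics.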
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.