RCT: Resource Constrained Training for Edge AI
- URL: http://arxiv.org/abs/2103.14493v1
- Date: Fri, 26 Mar 2021 14:33:31 GMT
- Title: RCT: Resource Constrained Training for Edge AI
- Authors: Tian Huang, Tao Luo, Ming Yan, Joey Tianyi Zhou, Rick Goh
- Abstract summary: Existing training methods for compact models are designed to run on powerful servers with abundant memory and energy budget.
We propose Resource Constrained Training (RCT) to mitigate these issues.
RCT keeps only a quantised model throughout training, so that the memory requirement for model parameters during training is reduced.
- Score: 35.11160947555767
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Training neural networks on edge terminals is essential for edge AI
computing, which needs to adapt to evolving environments. Quantised models can
run efficiently on edge devices, but existing training methods for these
compact models are designed for powerful servers with abundant memory and
energy budgets. For example, quantisation-aware training (QAT) keeps two copies
of the model parameters, which usually exceeds the capacity of on-chip memory
in edge devices. Data movement between off-chip and on-chip memory is also
energy-demanding. These resource requirements are trivial for powerful servers
but critical for edge devices. To mitigate these issues, we propose Resource
Constrained Training (RCT). RCT keeps only a quantised model throughout
training, so that the memory requirement for model parameters during training
is reduced. It dynamically adjusts the per-layer bitwidth in order to save
energy when a model can learn effectively with lower precision. We carry out
experiments with representative models and tasks in image applications and
natural language processing. Experiments show that RCT saves more than 86% of
the energy for General Matrix Multiply (GEMM) and more than 46% of the memory
for model parameters, with limited accuracy loss. Compared with the QAT-based
method, RCT saves about half of the energy spent on moving model parameters.
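To make the single-copy idea concrete, below is a minimal, hypothetical PyTorch sketch of training with only a quantised weight copy plus a toy per-layer bitwidth heuristic. The `QuantLinear` module, its `apply_grad` update, and the `maybe_adjust_bits` rule are illustrative assumptions, not the RCT algorithm from the paper.

```python
# Illustrative sketch of "quantised-only" training with dynamic per-layer
# bitwidth, in the spirit of RCT. NOT the authors' implementation: the
# quantiser, update rule, and bitwidth heuristic are simplified assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class QuantLinear(nn.Module):
    """Linear layer that stores only an integer weight tensor plus a scale."""

    def __init__(self, in_f, out_f, bits=8):
        super().__init__()
        self.bits = bits
        w = torch.randn(out_f, in_f) * 0.05
        qmax = 2 ** (bits - 1) - 1
        self.scale = w.abs().max() / qmax
        # Only the quantised copy is kept; no FP32 master weight persists.
        self.register_buffer("w_int", torch.round(w / self.scale).clamp(-qmax, qmax))
        self.bias = nn.Parameter(torch.zeros(out_f))

    def dequant(self):
        return self.w_int * self.scale

    def forward(self, x):
        # Dequantise on the fly; this temporary FP copy lives for one step only.
        self._w_fp = self.dequant().detach().requires_grad_(True)
        return F.linear(x, self._w_fp, self.bias)

    def apply_grad(self, grad_w, lr):
        # Fold the update straight back into the integer representation.
        qmax = 2 ** (self.bits - 1) - 1
        new_w = self.dequant() - lr * grad_w
        self.scale = new_w.abs().max().clamp(min=1e-8) / qmax
        self.w_int = torch.round(new_w / self.scale).clamp(-qmax, qmax)

    def maybe_adjust_bits(self, grad_w, thresh=1e-3):
        # Toy heuristic for dynamic per-layer bitwidth: small gradients ->
        # try lower precision; large gradients -> move back up. Takes effect
        # at the next apply_grad call.
        g = grad_w.abs().mean()
        if g < thresh and self.bits > 4:
            self.bits -= 1
        elif g > 10 * thresh and self.bits < 8:
            self.bits += 1


# Toy usage on random data.
torch.manual_seed(0)
layer = QuantLinear(16, 4, bits=8)
bias_opt = torch.optim.SGD([layer.bias], lr=0.1)
for step in range(100):
    x = torch.randn(32, 16)
    y = torch.randint(0, 4, (32,))
    loss = F.cross_entropy(layer(x), y)
    bias_opt.zero_grad()
    loss.backward()
    bias_opt.step()
    layer.apply_grad(layer._w_fp.grad, lr=0.1)      # update the integer weights
    layer.maybe_adjust_bits(layer._w_fp.grad)       # pick next step's bitwidth
print("final bitwidth:", layer.bits, "loss:", round(loss.item(), 3))
```

The point carried over from the abstract is that no full-precision master copy of the weights persists between steps, and each layer's precision can drop when a lower bitwidth appears sufficient; the actual RCT method uses its own quantiser, update rule, and bitwidth controller.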
Related papers
- Block Selective Reprogramming for On-device Training of Vision Transformers [12.118303034660531]
We present block selective reprogramming (BSR) in which we fine-tune only a fraction of total blocks of a pre-trained model.
Compared to the existing alternatives, our approach simultaneously reduces training memory by up to 1.4x and compute cost by up to 2x.
arXiv Detail & Related papers (2024-03-25T08:41:01Z)
- Low-rank Attention Side-Tuning for Parameter-Efficient Fine-Tuning [19.17362588650503]
Low-rank Attention Side-Tuning (LAST) trains a side-network composed of only low-rank self-attention modules.
We show LAST can be highly parallel across multiple optimization objectives, making it very efficient in downstream task adaptation.
arXiv Detail & Related papers (2024-02-06T14:03:15Z)
- Time-, Memory- and Parameter-Efficient Visual Adaptation [75.28557015773217]
We propose an adaptation method which does not backpropagate gradients through the backbone.
We achieve this by designing a lightweight network in parallel that operates on features from the frozen, pretrained backbone (a minimal sketch of this general pattern follows the related-papers list below).
arXiv Detail & Related papers (2024-02-05T10:55:47Z)
- READ: Recurrent Adaptation of Large Transformers [7.982905666062059]
Fine-tuning large-scale Transformers becomes impractical as the model size and number of tasks increase.
We introduce REcurrent ADaption (READ), a lightweight and memory-efficient fine-tuning method.
arXiv Detail & Related papers (2023-05-24T16:59:41Z)
- POET: Training Neural Networks on Tiny Devices with Integrated Rematerialization and Paging [35.397804171588476]
Fine-tuning models on edge devices would enable privacy-preserving personalization over sensitive data.
We present POET, an algorithm to enable training large neural networks on memory-scarce battery-operated edge devices.
arXiv Detail & Related papers (2022-07-15T18:36:29Z)
- On-Device Training Under 256KB Memory [62.95579393237751]
We propose an algorithm-system co-design framework to make on-device training possible with only 256KB of memory.
Our framework is the first solution to enable tiny on-device training of convolutional neural networks under 256KB SRAM and 1MB Flash.
arXiv Detail & Related papers (2022-06-30T17:59:08Z)
- Efficient Fine-Tuning of BERT Models on the Edge [12.768368718187428]
We propose Freeze And Reconfigure (FAR), a memory-efficient training regime for BERT-like models.
FAR reduces fine-tuning time on the DistilBERT model and CoLA dataset by 30%, and time spent on memory operations by 47%.
More broadly, reductions in metric performance on the GLUE and SQuAD datasets are around 1% on average.
arXiv Detail & Related papers (2022-05-03T14:51:53Z)
- LCS: Learning Compressible Subspaces for Adaptive Network Compression at Inference Time [57.52251547365967]
We propose a method for training a "compressible subspace" of neural networks that contains a fine-grained spectrum of models.
We present results for achieving arbitrarily fine-grained accuracy-efficiency trade-offs at inference time for structured and unstructured sparsity.
Our algorithm extends to quantization at variable bit widths, achieving accuracy on par with individually trained networks.
arXiv Detail & Related papers (2021-10-08T17:03:34Z)
- M6-10T: A Sharing-Delinking Paradigm for Efficient Multi-Trillion Parameter Pretraining [55.16088793437898]
Training extreme-scale models requires enormous amounts of compute and memory.
We propose a simple training strategy called "Pseudo-to-Real" for large models with high memory-footprint requirements.
arXiv Detail & Related papers (2021-10-08T04:24:51Z)
- Training Recommender Systems at Scale: Communication-Efficient Model and Data Parallelism [56.78673028601739]
We propose a compression framework called Dynamic Communication Thresholding (DCT) for communication-efficient hybrid training.
DCT reduces communication by at least 100x and 20x during DP and MP, respectively.
It improves end-to-end training time for a state-of-the-art industrial recommender model by 37%, without any loss in performance.
arXiv Detail & Related papers (2020-10-18T01:44:42Z)
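Several entries above share a common pattern: freeze the large pretrained model and train only a small module on its features, so no gradients or optimiser state are kept for the backbone. As referenced next to the visual-adaptation entry, here is a minimal, generic PyTorch sketch of that pattern; the module sizes and structure are illustrative assumptions, not any specific paper's architecture.

```python
# Generic "frozen backbone + small trainable module" sketch; shapes and
# modules are illustrative assumptions, not any listed paper's design.
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 128))
for p in backbone.parameters():
    p.requires_grad_(False)  # frozen: no gradients or optimiser state stored

# Only this lightweight module is trained, on features from the frozen backbone.
adapter = nn.Sequential(nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 10))
opt = torch.optim.Adam(adapter.parameters(), lr=1e-3)

x = torch.randn(8, 64)
y = torch.randint(0, 10, (8,))
with torch.no_grad():            # no backpropagation through the backbone
    feats = backbone(x)
loss = F.cross_entropy(adapter(feats), y)
opt.zero_grad()
loss.backward()
opt.step()
```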
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.