It's always personal: Using Early Exits for Efficient On-Device CNN
Personalisation
- URL: http://arxiv.org/abs/2102.01393v1
- Date: Tue, 2 Feb 2021 09:10:17 GMT
- Title: It's always personal: Using Early Exits for Efficient On-Device CNN
Personalisation
- Authors: Ilias Leontiadis, Stefanos Laskaridis, Stylianos I. Venieris, Nicholas
D. Lane
- Abstract summary: On-device machine learning is becoming a reality thanks to the availability of powerful hardware and model compression techniques.
In this work, we observe that a much smaller, personalised model can be employed to fit a specific scenario.
We introduce PersEPhonEE, a framework that attaches early exits to the model and personalises them on-device.
- Score: 19.046126301352274
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: On-device machine learning is becoming a reality thanks to the availability
of powerful hardware and model compression techniques. Typically, these models
are pretrained on large GPU clusters and have enough parameters to generalise
across a wide variety of inputs. In this work, we observe that a much smaller,
personalised model can be employed to fit a specific scenario, resulting in
both higher accuracy and faster execution. Nevertheless, on-device training is
extremely challenging, imposing excessive computational and memory requirements
even for flagship smartphones. At the same time, on-device data availability
might be limited and samples are most frequently unlabelled. To this end, we
introduce PersEPhonEE, a framework that attaches early exits on the model and
personalises them on-device. These allow the model to progressively bypass a
larger part of the computation as more personalised data become available.
Moreover, we introduce an efficient on-device algorithm that trains the early
exits in a semi-supervised manner at a fraction of the whole network's
personalisation time. Results show that PersEPhonEE boosts accuracy by up to
15.9% while dropping the training cost by up to 2.2x and inference latency by
2.2-3.2x on average for the same accuracy, depending on the availability of
labels on-device.
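To make the early-exit idea concrete, below is a minimal PyTorch sketch of a backbone CNN with two lightweight exit heads that let a confident prediction bypass the remaining computation. The backbone layout, exit placement, head design and confidence threshold are illustrative assumptions, not PersEPhonEE's actual architecture or training algorithm.

```python
import torch
import torch.nn as nn

class EarlyExitCNN(nn.Module):
    """Toy backbone CNN with two lightweight early-exit heads (illustrative only)."""

    def __init__(self, num_classes: int = 10, exit_threshold: float = 0.9):
        super().__init__()
        # Hypothetical backbone split into stages; in practice the exits would be
        # attached to an existing pretrained network that stays frozen on-device.
        self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.stage3 = nn.Sequential(nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.final_head = nn.Linear(128, num_classes)
        # Early-exit heads: the only parts that would be personalised on-device.
        self.exit1 = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes))
        self.exit2 = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes))
        self.exit_threshold = exit_threshold

    def forward(self, x):
        # Early-exit inference path; the confidence gating assumes batch size 1.
        x = self.stage1(x)
        logits = self.exit1(x)
        if logits.softmax(dim=-1).max() >= self.exit_threshold:
            return logits, "exit1"                 # skip the rest of the network
        x = self.stage2(x)
        logits = self.exit2(x)
        if logits.softmax(dim=-1).max() >= self.exit_threshold:
            return logits, "exit2"
        return self.final_head(self.stage3(x)), "final"
```

On-device personalisation in the spirit of the abstract would freeze the backbone and train only the exit heads, for example using the final exit's predictions as pseudo-labels when ground-truth labels are unavailable.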
Related papers
- Cross-Architecture Auxiliary Feature Space Translation for Efficient Few-Shot Personalized Object Detection [28.06000586370357]
We propose an instance-level personalized object detection strategy called AuXFT.
We show that AuXFT reaches 80% of its upper-bound performance at just 32% of the inference time.
We validate AuXFT on three publicly available datasets and one in-house benchmark designed for the IPOD task.
arXiv Detail & Related papers (2024-07-01T11:33:53Z) - DAISY: Data Adaptive Self-Supervised Early Exit for Speech Representation Models [55.608981341747246]
We introduce Data Adaptive Self-Supervised Early Exit (DAISY), an approach that decides when to exit based on the self-supervised loss.
Our analysis of DAISY's adaptivity shows that the model exits early (using fewer layers) on clean data and late (using more layers) on noisy data.
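As a rough illustration of loss-driven exiting, the sketch below picks the earliest candidate layer whose self-supervised loss falls below a fixed threshold; the threshold, the per-layer losses and the function name are illustrative assumptions, not DAISY's actual calibration procedure.

```python
def choose_exit_layer(layer_losses, threshold=0.1):
    """Return the earliest candidate exit whose self-supervised loss is low enough.

    `layer_losses` holds one self-supervised loss per candidate exit layer,
    ordered from shallow to deep; `threshold` is an illustrative constant.
    """
    for depth, loss in enumerate(layer_losses, start=1):
        if loss < threshold:
            return depth            # clean input: loss drops early, so exit early
    return len(layer_losses)        # noisy input: fall through to the deepest layer
```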
arXiv Detail & Related papers (2024-06-08T12:58:13Z) - Efficient Asynchronous Federated Learning with Sparsification and
Quantization [55.6801207905772]
Federated Learning (FL) is attracting growing attention as a way to collaboratively train a machine learning model without transferring raw data.
FL typically relies on a parameter server and a large number of edge devices throughout model training.
We propose TEASQ-Fed, which lets edge devices participate in training asynchronously by actively applying for tasks; a generic sketch of the compression side follows below.
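The snippet below applies top-k sparsification followed by uniform quantisation to a client update before upload, as a generic sketch of the two techniques named in the title; the keep ratio, bit width and function names are illustrative and do not reproduce TEASQ-Fed's actual compression scheme or its asynchronous task protocol.

```python
import numpy as np

def compress_update(update: np.ndarray, keep_ratio: float = 0.1, num_bits: int = 8):
    """Top-k sparsification followed by uniform quantisation of a client update."""
    flat = update.astype(np.float32).ravel()
    k = max(1, int(keep_ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]       # indices of the k largest-magnitude entries
    values = flat[idx]
    lo, hi = float(values.min()), float(values.max())
    scale = (hi - lo) / (2 ** num_bits - 1) or 1.0     # avoid division by zero when all values match
    codes = np.round((values - lo) / scale).astype(np.uint8 if num_bits <= 8 else np.uint16)
    return idx, codes, lo, scale                       # the compact payload a device would upload

def decompress_update(shape, idx, codes, lo, scale):
    """Server-side reconstruction of the sparse, dequantised update."""
    flat = np.zeros(int(np.prod(shape)), dtype=np.float32)
    flat[idx] = codes.astype(np.float32) * scale + lo  # dequantise and scatter back
    return flat.reshape(shape)
```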
arXiv Detail & Related papers (2023-12-23T07:47:07Z) - On-Device Training Under 256KB Memory [62.95579393237751]
We propose an algorithm-system co-design framework to make on-device training possible with only 256KB of memory.
Our framework is the first solution to enable tiny on-device training of convolutional neural networks under 256KB and 1MB Flash.
arXiv Detail & Related papers (2022-06-30T17:59:08Z) - Building a Performance Model for Deep Learning Recommendation Model
Training on GPUs [6.05245376098191]
We devise a performance model for GPU training of Deep Learning Recommendation Models (DLRM).
We show that both the device active time (the sum of kernel runtimes) and the device idle time are important components of the overall device time.
We propose a critical-path-based algorithm to predict the per-batch training time of DLRM by traversing its execution graph.
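The snippet below shows the general critical-path idea on a toy execution graph: each op's finish time is its predicted kernel runtime plus the latest finish among its dependencies. The op names, runtimes and the assumption of no overlap, idle gaps or communication are illustrative simplifications, not the paper's model.

```python
from graphlib import TopologicalSorter  # Python 3.9+

def predict_batch_time(runtimes, deps):
    """Critical-path estimate of per-batch time over an execution graph.

    `runtimes` maps each op to its predicted kernel runtime (ms) and `deps`
    maps each op to the set of ops it depends on.
    """
    finish = {}
    for op in TopologicalSorter(deps).static_order():
        start = max((finish[d] for d in deps.get(op, ())), default=0.0)
        finish[op] = start + runtimes[op]
    return max(finish.values())

# Toy graph: embedding lookup and bottom MLP run in parallel, then interaction + top MLP.
runtimes = {"emb": 1.2, "bot_mlp": 0.8, "interact": 0.3, "top_mlp": 0.9}
deps = {"interact": {"emb", "bot_mlp"}, "top_mlp": {"interact"}}
print(predict_batch_time(runtimes, deps))   # 1.2 + 0.3 + 0.9 = 2.4 ms along the critical path
```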
arXiv Detail & Related papers (2022-01-19T19:05:42Z) - SSSE: Efficiently Erasing Samples from Trained Machine Learning Models [103.43466657962242]
We propose an efficient and effective algorithm, SSSE, for samples erasure.
In certain cases SSSE can erase samples almost as well as the optimal, yet impractical, gold standard of training a new model from scratch with only the permitted data.
arXiv Detail & Related papers (2021-07-08T14:17:24Z) - Real-Time Execution of Large-scale Language Models on Mobile [49.32610509282623]
We find the best BERT model structure for a given computation budget to match specific devices.
Our framework can guarantee the identified model to meet both resource and real-time specifications of mobile devices.
Specifically, our model is 5.2x faster on CPU and 4.1x faster on GPU with 0.5-2% accuracy loss compared with BERT-base.
arXiv Detail & Related papers (2020-09-15T01:59:17Z) - Multi-node Bert-pretraining: Cost-efficient Approach [6.5998084177955425]
Large scale Transformer-based language models have brought about exciting leaps in state-of-the-art results for many Natural Language Processing (NLP) tasks.
With the advent of large-scale unsupervised datasets, training time is further extended by the increased number of data samples within a single training epoch.
We show that we are able to perform pre-training on BERT within a reasonable time budget (12 days) in an academic setting.
arXiv Detail & Related papers (2020-08-01T05:49:20Z) - Improving Semantic Segmentation via Self-Training [75.07114899941095]
We show that we can obtain state-of-the-art results using a semi-supervised approach, specifically a self-training paradigm.
We first train a teacher model on labeled data, and then generate pseudo labels on a large set of unlabeled data.
Our robust training framework can digest human-annotated and pseudo labels jointly and achieve top performances on Cityscapes, CamVid and KITTI datasets.
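A minimal sketch of the pseudo-labelling step described above, assuming a trained teacher network and an unlabeled data loader; the confidence threshold and the use of an ignore index are common conventions rather than the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def generate_pseudo_labels(teacher, unlabeled_loader, confidence=0.9, ignore_index=255, device="cpu"):
    """Produce hard pseudo labels with a trained teacher, keeping only confident predictions."""
    teacher.eval()
    pseudo = []
    with torch.no_grad():
        for images in unlabeled_loader:
            probs = F.softmax(teacher(images.to(device)), dim=1)   # (N, C, H, W) for segmentation
            conf, labels = probs.max(dim=1)                        # per-pixel confidence and class
            labels[conf < confidence] = ignore_index               # mask out low-confidence pixels
            pseudo.append((images.cpu(), labels.cpu()))
    return pseudo   # later mixed with human-annotated data to train the student model
```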
arXiv Detail & Related papers (2020-04-30T17:09:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.