Enabling Deep Learning on Edge Devices through Filter Pruning and
Knowledge Transfer
- URL: http://arxiv.org/abs/2201.10947v1
- Date: Sat, 22 Jan 2022 00:27:21 GMT
- Title: Enabling Deep Learning on Edge Devices through Filter Pruning and
Knowledge Transfer
- Authors: Kaiqi Zhao, Yitao Chen, Ming Zhao
- Abstract summary: The paper proposes a novel filter-pruning-based model compression method to create lightweight trainable models from large models trained in the cloud.
Second, it proposes a novel knowledge transfer method to enable the on-device model to update incrementally in real time or near real time.
- The results show that 1) our model compression method can remove up to 99.36% of the parameters of WRN-28-10 while preserving a Top-1 accuracy of over 90% on CIFAR-10.
- Score: 5.239675888749389
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning models have introduced various intelligent applications to edge
devices, such as image classification, speech recognition, and augmented
reality. There is an increasing need to train such models on the devices in
order to deliver personalized, responsive, and private learning. To address
this need, this paper presents a new solution for deploying and training
state-of-the-art models on resource-constrained devices. First, the paper
proposes a novel filter-pruning-based model compression method to create
lightweight trainable models from large models trained in the cloud, without
much loss of accuracy. Second, it proposes a novel knowledge transfer method
that enables the on-device model to update incrementally in real time or near
real time through incremental learning on new data, and to learn unseen
categories with the help of the in-cloud model in an unsupervised fashion. The
results show that 1) our model compression method can remove up to 99.36% of
the parameters of WRN-28-10 while preserving a Top-1 accuracy of over 90% on
CIFAR-10; 2) our knowledge transfer method enables the compressed models to
achieve more than 90% accuracy on CIFAR-10 and retain good accuracy on old
categories; 3) it allows the compressed models to converge within real time
(three to six minutes) on the edge for incremental learning tasks; and 4) it
enables the model to classify unseen categories of data (78.92% Top-1 accuracy)
that it was never trained on.
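As a rough illustration of the filter-pruning idea behind the first contribution, the sketch below applies generic L1-norm structured pruning to the convolutional layers of a small network in PyTorch. This is not the paper's actual pruning criterion or schedule; the model (ResNet-18 standing in for WRN-28-10) and the 50% pruning ratio are illustrative assumptions.

```python
# Hedged sketch: generic L1-norm structured filter pruning in PyTorch.
# NOT the paper's exact criterion or schedule; the model and ratio are assumptions.
import torch.nn as nn
import torch.nn.utils.prune as prune
import torchvision.models as models

def prune_filters(model: nn.Module, amount: float = 0.5) -> nn.Module:
    """Zero out the `amount` fraction of conv filters with the smallest L1 norm."""
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            # n=1 -> L1 norm, dim=0 -> rank whole output filters
            prune.ln_structured(module, name="weight", amount=amount, n=1, dim=0)
            prune.remove(module, "weight")  # bake the pruning mask into the weights
    return model

if __name__ == "__main__":
    model = prune_filters(models.resnet18(num_classes=10), amount=0.5)
    zeros = sum((p == 0).sum().item() for p in model.parameters())
    total = sum(p.numel() for p in model.parameters())
    print(f"zeroed parameters: {zeros}/{total} ({100 * zeros / total:.1f}%)")
```

Masking only zeroes weights in place; an actual compression pipeline would additionally rebuild each layer with the surviving filters and fine-tune the smaller model.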
Related papers
- Approximating Language Model Training Data from Weights [70.08614275061689]
We formalize the problem of data approximation from model weights and propose several baselines and metrics.
We develop a gradient-based approach that selects the highest-matching data from a large public text corpus.
Even when none of the true training data is known, our method is able to locate a small subset of public Web documents.
arXiv Detail & Related papers (2025-06-18T15:26:43Z) - Building Efficient Lightweight CNN Models [0.0]
Convolutional Neural Networks (CNNs) are pivotal in image classification tasks due to their robust feature extraction capabilities.
This paper introduces a methodology to construct lightweight CNNs while maintaining competitive accuracy.
The proposed model achieved a state-of-the-art accuracy of 99% on handwritten-digit MNIST and 89% on Fashion-MNIST, with only 14,862 parameters and a model size of 0.17 MB.
arXiv Detail & Related papers (2025-01-26T14:39:01Z) - CaBaGe: Data-Free Model Extraction using ClAss BAlanced Generator Ensemble [4.029642441688877]
We propose a data-free model extraction approach, CaBaGe, to achieve higher model extraction accuracy with a small number of queries.
Our evaluation shows that CaBaGe outperforms existing techniques on seven datasets.
arXiv Detail & Related papers (2024-09-16T18:19:19Z) - Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning
Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient-based learning method, named Projected-Gradient Unlearning (PGU).
We provide empirical evidence that our unlearning method can produce models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
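To make the projection idea concrete, here is a minimal, hedged sketch (not the authors' implementation): per-sample gradients on the retained data define a subspace, and the unlearning gradient is projected onto its orthogonal complement so the update interferes as little as possible with remaining knowledge. The energy threshold and the use of raw flattened gradients are assumptions.

```python
# Hedged sketch of gradient-projection unlearning in the spirit of PGU
# (not the authors' implementation).
import torch

def retained_subspace(grads: torch.Tensor, energy: float = 0.99) -> torch.Tensor:
    """grads: (n_samples, n_params) per-sample gradients on the retained data.
    Returns an orthonormal basis U of shape (n_params, k) capturing `energy`
    of the spectral energy."""
    U, S, _ = torch.linalg.svd(grads.T, full_matrices=False)
    cum = torch.cumsum(S**2, dim=0) / torch.sum(S**2)
    k = int((cum < energy).sum().item()) + 1
    return U[:, :k]

def project_out(grad: torch.Tensor, U: torch.Tensor) -> torch.Tensor:
    """Remove the component of `grad` that lies inside span(U)."""
    return grad - U @ (U.T @ grad)

# Usage with illustrative shapes: 128 retained-sample gradients over 10k parameters.
retained_grads = torch.randn(128, 10_000)
U = retained_subspace(retained_grads)
forget_grad = torch.randn(10_000)        # gradient of the "forget" objective
safe_step = project_out(forget_grad, U)  # update direction that spares retained knowledge
```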
arXiv Detail & Related papers (2023-12-07T07:17:24Z) - Developing a Resource-Constraint EdgeAI model for Surface Defect
Detection [1.338174941551702]
We propose a lightweight EdgeAI architecture modified from Xception for on-device training in a resource-constrained edge environment.
We evaluate our model on a PCB defect detection task and compare its performance against existing lightweight models.
Our method can be applied to other resource-constrained applications while maintaining strong performance.
arXiv Detail & Related papers (2023-12-04T15:28:31Z) - Zero-shot Retrieval: Augmenting Pre-trained Models with Search Engines [83.65380507372483]
Large pre-trained models can dramatically reduce the amount of task-specific data required to solve a problem, but they often fail to capture domain-specific nuances out of the box.
This paper shows how to leverage recent advances in NLP and multi-modal learning to augment a pre-trained model with search engine retrieval.
arXiv Detail & Related papers (2023-11-29T05:33:28Z) - Deep learning model compression using network sensitivity and gradients [3.52359746858894]
We present model compression algorithms for both non-retraining and retraining conditions.
In the first case, we propose the Bin & Quant algorithm for compression of the deep learning models using the sensitivity of the network parameters.
In the second case, we propose our novel gradient-weighted k-means clustering algorithm (GWK).
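A minimal sketch of the gradient-weighted clustering idea (not the authors' code): a layer's weights are clustered into a small codebook with k-means, each weight's pull on the centroids scaled by its gradient magnitude as a sensitivity proxy, and each weight is then replaced by its centroid for weight sharing. The codebook size and 1-D clustering are illustrative assumptions.

```python
# Hedged sketch of gradient-weighted k-means weight sharing (GWK-style, not the authors' code).
import numpy as np

def gradient_weighted_kmeans(w, g, k=16, iters=50, seed=0):
    """w: flat weight vector, g: matching gradients, k: codebook size."""
    rng = np.random.default_rng(seed)
    centroids = rng.choice(w, size=k, replace=False)
    imp = np.abs(g) + 1e-12                       # importance = |gradient|
    for _ in range(iters):
        assign = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            m = assign == j
            if m.any():                           # importance-weighted cluster mean
                centroids[j] = np.average(w[m], weights=imp[m])
    return centroids[assign], assign, centroids

# Usage with synthetic data standing in for one layer's weights and gradients.
w = np.random.randn(4096).astype(np.float32)
g = np.random.randn(4096).astype(np.float32)
compressed, assign, codebook = gradient_weighted_kmeans(w, g, k=16)
print("unique values after sharing:", np.unique(compressed).size)  # <= 16
```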
arXiv Detail & Related papers (2022-10-11T03:02:40Z) - Incremental Online Learning Algorithms Comparison for Gesture and Visual
Smart Sensors [68.8204255655161]
This paper compares four state-of-the-art algorithms in two real applications: gesture recognition based on accelerometer data and image classification.
Our results confirm these systems' reliability and the feasibility of deploying them in tiny-memory MCUs.
arXiv Detail & Related papers (2022-09-01T17:05:20Z) - Revisiting Classifier: Transferring Vision-Language Models for Video
Recognition [102.93524173258487]
Transferring knowledge from task-agnostic pre-trained deep models for downstream tasks is an important topic in computer vision research.
In this study, we focus on transferring knowledge for video classification tasks.
We utilize a well-pretrained language model to generate good semantic targets for efficient transfer learning.
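The "semantic target" idea can be sketched as follows: class names are embedded by a pre-trained text encoder, and those embeddings serve as the classifier weights, so the classifier starts from language knowledge rather than random initialization. The placeholder text_encoder, embedding size, and class names below are assumptions; in practice a CLIP-style text tower would supply the embeddings.

```python
# Hedged sketch: a classifier initialised from text embeddings of class names.
# `text_encoder` is a placeholder assumption standing in for a real text tower.
import torch
import torch.nn.functional as F

EMBED_DIM = 512
CLASS_NAMES = ["archery", "bowling", "juggling", "surfing"]  # illustrative labels

def text_encoder(names):
    # Placeholder: deterministic random embeddings stand in for real text features.
    gen = torch.Generator().manual_seed(0)
    return torch.randn(len(names), EMBED_DIM, generator=gen)

class SemanticClassifier(torch.nn.Module):
    def __init__(self, class_names, tune_text_weights=False):
        super().__init__()
        weights = F.normalize(text_encoder(class_names), dim=-1)
        self.class_embed = torch.nn.Parameter(weights, requires_grad=tune_text_weights)
        self.logit_scale = torch.nn.Parameter(torch.tensor(100.0).log())

    def forward(self, video_features):               # (batch, EMBED_DIM)
        v = F.normalize(video_features, dim=-1)
        return self.logit_scale.exp() * v @ self.class_embed.T  # (batch, n_classes)

logits = SemanticClassifier(CLASS_NAMES)(torch.randn(8, EMBED_DIM))
print(logits.shape)  # torch.Size([8, 4])
```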
arXiv Detail & Related papers (2022-07-04T10:00:47Z) - LCS: Learning Compressible Subspaces for Adaptive Network Compression at
Inference Time [57.52251547365967]
We propose a method for training a "compressible subspace" of neural networks that contains a fine-grained spectrum of models.
We present results for achieving arbitrarily fine-grained accuracy-efficiency trade-offs at inference time for structured and unstructured sparsity.
Our algorithm extends to quantization at variable bit widths, achieving accuracy on par with individually trained networks.
arXiv Detail & Related papers (2021-10-08T17:03:34Z) - Machine Unlearning of Features and Labels [72.81914952849334]
We propose the first scenarios for unlearning features and labels in machine learning models.
Our approach builds on the concept of influence functions and realizes unlearning through closed-form updates of model parameters.
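A hedged sketch of the closed-form-update idea, shown for ridge regression where the Hessian is exact. The paper targets general models via influence functions; this toy case only illustrates the Newton-style step of adding back the removed point's inverse-Hessian-scaled gradient.

```python
# Hedged sketch: influence-style closed-form unlearning on a convex toy problem.
import numpy as np

def fit_ridge(X, y, lam=1e-2):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def unlearn_point(theta, X, y, idx, lam=1e-2):
    """Approximately remove training point `idx` with one closed-form update."""
    H = X.T @ X + lam * np.eye(X.shape[1])        # Hessian of the full objective
    grad_i = X[idx] * (X[idx] @ theta - y[idx])   # gradient of the removed point's loss
    # Influence-style step: add back the inverse-Hessian-scaled gradient.
    return theta + np.linalg.solve(H, grad_i)

rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 5)), rng.normal(size=200)
theta = fit_ridge(X, y)
theta_unlearned = unlearn_point(theta, X, y, idx=17)
theta_retrained = fit_ridge(np.delete(X, 17, axis=0), np.delete(y, 17))
print(np.linalg.norm(theta_unlearned - theta_retrained))  # small gap vs. exact retraining
```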
arXiv Detail & Related papers (2021-08-26T04:42:24Z) - DeepObliviate: A Powerful Charm for Erasing Data Residual Memory in Deep
Neural Networks [7.687838702806964]
We propose an approach, dubbed as DeepObliviate, to implement machine unlearning efficiently.
Our approach improves the original training process by storing intermediate models on the hard disk.
Compared to the method of retraining from scratch, our approach can achieve 99.0%, 95.0%, 91.9%, 96.7%, 74.1% accuracy rates and 66.7×, 75.0×, 33.3×, 29.4×, 13.7× speedups.
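A hedged sketch of the stored-intermediate-models idea (not the authors' code): checkpoints are kept per block of training data, and erasing a block means rolling back to the checkpoint taken just before it and replaying only the later blocks.

```python
# Hedged sketch of checkpoint-based erasure in the spirit of DeepObliviate.
import copy
import torch.nn as nn

def train_with_checkpoints(model, blocks, train_fn):
    """blocks: list of data blocks; train_fn(model, block) trains in place.
    Returns checkpoints[i] = model state captured *before* block i was used."""
    checkpoints = []
    for block in blocks:
        checkpoints.append(copy.deepcopy(model.state_dict()))  # written to disk in practice
        train_fn(model, block)
    return checkpoints

def unlearn_block(model, blocks, checkpoints, forget_idx, train_fn):
    """Erase the influence of blocks[forget_idx] by rolling back and replaying."""
    model.load_state_dict(checkpoints[forget_idx])   # state from before the forgotten block
    for block in blocks[forget_idx + 1:]:            # replay only the blocks that came after it
        train_fn(model, block)
    return model

# Toy usage: a linear layer, three data blocks, and a stand-in training step.
net = nn.Linear(4, 2)
blocks = [list(range(10)), list(range(10, 20)), list(range(20, 30))]
ckpts = train_with_checkpoints(net, blocks, train_fn=lambda m, b: None)
net = unlearn_block(net, blocks, ckpts, forget_idx=1, train_fn=lambda m, b: None)
```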
arXiv Detail & Related papers (2021-05-13T12:02:04Z) - An Efficient Method of Training Small Models for Regression Problems
with Knowledge Distillation [1.433758865948252]
We propose a new formalism of knowledge distillation for regression problems.
First, we propose a new loss function, the teacher outlier rejection loss, which rejects outliers in the training samples using teacher model predictions.
By considering a multi-task network, training of the student model's feature extraction becomes more effective.
arXiv Detail & Related papers (2020-02-28T08:46:12Z)
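A hedged sketch of an outlier-rejecting distillation loss for regression, loosely following the description above rather than the paper's exact formulation: samples on which the teacher's own error is unusually large are dropped from the ground-truth term, while the distillation term is kept for all samples. The quantile cutoff and loss weighting are assumptions.

```python
# Hedged sketch: regression knowledge distillation with teacher-based outlier rejection.
import torch
import torch.nn.functional as F

def distill_regression_loss(student_pred, teacher_pred, target,
                            reject_quantile=0.9, alpha=0.5):
    teacher_err = (teacher_pred - target).abs()
    threshold = torch.quantile(teacher_err, reject_quantile)  # data-driven cutoff (assumed)
    keep = (teacher_err <= threshold).float()
    # Ground-truth loss only on kept samples; distillation loss on all samples.
    gt_loss = (keep * (student_pred - target) ** 2).sum() / keep.sum().clamp(min=1.0)
    kd_loss = F.mse_loss(student_pred, teacher_pred)
    return alpha * gt_loss + (1.0 - alpha) * kd_loss

# Usage with synthetic predictions.
s, t, y = torch.randn(64), torch.randn(64), torch.randn(64)
print(distill_regression_loss(s, t, y).item())
```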