Variant Parallelism: Lightweight Deep Convolutional Models for
Distributed Inference on IoT Devices
- URL: http://arxiv.org/abs/2210.08376v2
- Date: Sun, 11 Jun 2023 21:10:25 GMT
- Title: Variant Parallelism: Lightweight Deep Convolutional Models for
Distributed Inference on IoT Devices
- Authors: Navidreza Asadi, Maziar Goudarzi
- Abstract summary: Two major techniques are commonly used to meet real-time inference limitations when distributing models across resource-constrained IoT devices.
We propose variant parallelism (VP), an ensemble-based deep learning distribution method where different variants of a main model are generated and can be deployed on separate machines.
Our results demonstrate that our models can have 5.8-7.1x fewer parameters, 4.3-31x fewer multiply-accumulations (MACs), and 2.5-13.2x less response time on atomic inputs compared to MobileNetV2.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Two major techniques are commonly used to meet real-time inference
limitations when distributing models across resource-constrained IoT devices:
(1) model parallelism (MP) and (2) class parallelism (CP). In MP, transmitting
bulky intermediate data (orders of magnitude larger than input) between devices
imposes huge communication overhead. Although CP solves this problem, it has
limitations on the number of sub-models. In addition, both solutions are fault
intolerant, an issue when deployed on edge devices. We propose variant
parallelism (VP), an ensemble-based deep learning distribution method where
different variants of a main model are generated and can be deployed on
separate machines. We design a family of lighter models around the original
model, and train them simultaneously to improve accuracy over single models.
Our experimental results on six common mid-sized object recognition datasets
demonstrate that our models can have 5.8-7.1x fewer parameters, 4.3-31x fewer
multiply-accumulations (MACs), and 2.5-13.2x less response time on atomic
inputs compared to MobileNetV2 while achieving comparable or higher accuracy.
Our technique easily generates several variants of the base architecture. Each
variant returns only 2k outputs, where 1 <= k <= (#classes/2), representing the
Top-k classes, instead of the bulky floating-point tensors required in MP. Since
each variant provides a full-class prediction, our approach maintains higher
availability than MP and CP in the presence of failures.
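The aggregation step described above can be sketched as follows. This is a hypothetical illustration, not the authors' code: it assumes each variant replies with its Top-k (class, score) pairs, and a coordinator merges whatever responses arrive, so a failed variant simply contributes nothing.

```python
# Sketch of variant-parallel (VP) result aggregation: each surviving variant
# sends its Top-k classes with scores; the coordinator sums scores per class.
from collections import defaultdict

def merge_topk(responses):
    """responses: one [(class_id, score), ...] list per surviving variant;
    a failed variant contributes no list, so availability degrades gracefully."""
    totals = defaultdict(float)
    for topk in responses:
        for class_id, score in topk:
            totals[class_id] += score
    # Final prediction is the class with the highest accumulated score.
    return max(totals, key=totals.get) if totals else None

# Three variants were deployed; variant 2 has failed and sent nothing.
responses = [
    [(3, 0.7), (1, 0.2)],   # variant 0: its Top-2 classes
    [(3, 0.6), (5, 0.3)],   # variant 1
]
print(merge_topk(responses))  # -> 3
```

Because every variant covers all classes, the merge still yields a full prediction from any non-empty subset of responses, which is the availability advantage over MP and CP.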
Related papers
- Priority-Aware Model-Distributed Inference at Edge Networks [6.97067164616875]
Distributed inference techniques can be broadly classified into data-distributed and model-distributed schemes.
In data-distributed inference (DDI), each worker carries the entire Machine Learning (ML) model but processes only a subset of the data.
An emerging paradigm is model-distributed inference (MDI), where each worker carries only a subset of ML layers.
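The DDI/MDI distinction can be made concrete with a toy sketch (my own illustration, not the paper's code), where the "model" is just a list of layer functions:

```python
# Toy contrast between data-distributed inference (DDI) and
# model-distributed inference (MDI).
layers = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]

def full_model(x):
    for layer in layers:
        x = layer(x)
    return x

# DDI: every worker holds all layers but processes a slice of the batch.
def ddi(batch, n_workers=2):
    out = []
    for w in range(n_workers):
        shard = batch[w::n_workers]           # this worker's data subset
        out.extend(full_model(x) for x in shard)
    return out

# MDI: each worker holds only a subset of layers; activations flow between
# workers instead of raw data being sharded.
def mdi(x, assignment=((0, 1), (2,))):
    for worker_layers in assignment:          # each tuple = one worker
        for i in worker_layers:
            x = layers[i](x)                  # activation handed onward
    return x

assert mdi(5) == full_model(5)                # same result, split by layers
```

In DDI the communication cost is the input data; in MDI it is the intermediate activations passed between layer subsets.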
arXiv Detail & Related papers (2024-12-16T22:01:55Z)
- Promises and Pitfalls of Generative Masked Language Modeling: Theoretical Framework and Practical Guidelines [74.42485647685272]
We focus on Generative Masked Language Models (GMLMs)
We train a model to fit conditional probabilities of the data distribution via masking, which are subsequently used as inputs to a Markov Chain to draw samples from the model.
We adapt the T5 model for iteratively-refined parallel decoding, achieving 2-3x speedup in machine translation with minimal sacrifice in quality.
arXiv Detail & Related papers (2024-07-22T18:00:00Z)
- Harmony in Diversity: Merging Neural Networks with Canonical Correlation Analysis [17.989809995141044]
We propose CCA Merge, which is based on Canonical Correlation Analysis.
We show that CCA Merge performs significantly better than past methods when more than two models are merged.
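The core CCA computation can be sketched with numpy (my simplification of the general idea; CCA Merge itself builds a neuron alignment from this, which is omitted here):

```python
# Canonical correlations between two networks' activation matrices:
# whiten each view via SVD, then take singular values of the cross-product.
import numpy as np

def cca_correlations(X, Y):
    """X, Y: (samples, neurons) activation matrices from two models."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    Ux, _, _ = np.linalg.svd(X, full_matrices=False)   # whitened basis of X
    Uy, _, _ = np.linalg.svd(Y, full_matrices=False)   # whitened basis of Y
    # Singular values of Ux^T Uy are the canonical correlations in [0, 1].
    return np.linalg.svd(Ux.T @ Uy, compute_uv=False)

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))
Y = X @ rng.normal(size=(5, 5))      # Y is a linear transform of X
corr = cca_correlations(X, Y)        # all ~1: the subspaces coincide
```

High canonical correlations indicate that two models' layers represent the same subspace in different neuron bases, which is what makes alignment-then-averaging feasible.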
arXiv Detail & Related papers (2024-07-07T14:21:04Z)
- Unified Anomaly Detection methods on Edge Device using Knowledge Distillation and Quantization [4.6651371876849]
Most anomaly detection approaches for defect detection employ one-class models, requiring a separate model to be fitted for each class.
In this work, we experiment with considering a unified multi-class setup.
Our experimental study shows that multi-class models perform at par with one-class models for the standard MVTec AD dataset.
arXiv Detail & Related papers (2024-07-03T10:04:48Z)
- MatFormer: Nested Transformer for Elastic Inference [94.1789252941718]
MatFormer is a nested Transformer architecture designed to offer elasticity in a variety of deployment constraints.
We show that a 2.6B decoder-only MatFormer language model (MatLM) allows us to extract smaller models spanning from 1.5B to 2.6B.
We also observe that smaller encoders extracted from a universal MatFormer-based ViT (MatViT) encoder preserve the metric-space structure for adaptive large-scale retrieval.
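The nested ("matryoshka") structure behind this extraction can be sketched with a single FFN (hypothetical numpy illustration of the idea, not MatFormer's implementation): one full weight matrix is trained, and smaller sub-models reuse a prefix of its hidden units.

```python
# Nested FFN: sub-models of different sizes share the same weights and
# interface, differing only in how many hidden units they slice.
import numpy as np

rng = np.random.default_rng(0)
d_model, d_hidden = 8, 32
W1 = rng.normal(size=(d_model, d_hidden))
W2 = rng.normal(size=(d_hidden, d_model))

def ffn(x, hidden_frac=1.0):
    """Run the FFN using only the first `hidden_frac` of hidden units."""
    h = int(d_hidden * hidden_frac)
    a = np.maximum(x @ W1[:, :h], 0.0)   # ReLU on the sliced projection
    return a @ W2[:h, :]

x = rng.normal(size=(d_model,))
full = ffn(x, 1.0)       # largest model
small = ffn(x, 0.25)     # extracted sub-model: 1/4 of the hidden units,
                         # same input/output shape, less compute
```

Because every sub-model is a prefix slice of the same weights, extraction requires no retraining and the outputs live in the same metric space.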
arXiv Detail & Related papers (2023-10-11T17:57:14Z)
- SqueezeLLM: Dense-and-Sparse Quantization [80.32162537942138]
The main bottleneck for generative inference with LLMs is memory bandwidth, rather than compute, for single-batch inference.
We introduce SqueezeLLM, a post-training quantization framework that enables lossless compression to ultra-low precisions of up to 3-bit.
Our framework incorporates two novel ideas: (i) sensitivity-based non-uniform quantization, which searches for the optimal bit precision assignment based on second-order information; and (ii) the Dense-and-Sparse decomposition that stores outliers and sensitive weight values in an efficient sparse format.
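The Dense-and-Sparse decomposition can be sketched in a few lines of numpy (an assumption-laden illustration of the spirit of the idea, not SqueezeLLM's implementation): outlier weights go into a sparse matrix kept at full precision, and the remaining dense part, now with a much narrower value range, is what gets quantized.

```python
# Split a weight matrix into a dense part (narrow range, easy to quantize)
# plus a sparse part holding the few full-precision outliers.
import numpy as np

def dense_and_sparse(W, outlier_frac=0.005):
    thresh = np.quantile(np.abs(W), 1.0 - outlier_frac)
    sparse = np.where(np.abs(W) >= thresh, W, 0.0)  # ~0.5% outlier weights
    dense = W - sparse                              # remainder to quantize
    return dense, sparse

rng = np.random.default_rng(1)
W = rng.normal(size=(64, 64))
W[0, 0] = 50.0                            # one extreme outlier weight
dense, sparse = dense_and_sparse(W)
assert np.allclose(dense + sparse, W)     # decomposition is lossless
assert np.abs(dense).max() < np.abs(W).max()  # dense range is much narrower
```

Removing the outliers is what lets the dense part survive ultra-low-bit quantization: the quantization grid no longer has to stretch to cover a few extreme values.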
arXiv Detail & Related papers (2023-06-13T08:57:54Z)
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving [53.01646445659089]
We show that model parallelism can be used for the statistical multiplexing of multiple devices when serving multiple models.
We present a novel serving system, AlpaServe, that determines an efficient strategy for placing and parallelizing collections of large deep learning models.
arXiv Detail & Related papers (2023-02-22T21:41:34Z)
- SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient [69.61083127540776]
Deep learning applications benefit from using large models with billions of parameters.
Training these models is notoriously expensive due to the need for specialized HPC clusters.
We consider alternative setups for training large models: using cheap "preemptible" instances or pooling existing resources from multiple regions.
arXiv Detail & Related papers (2023-01-27T18:55:19Z)
- Scaling Distributed Deep Learning Workloads beyond the Memory Capacity with KARMA [58.040931661693925]
We propose a strategy that combines redundant recomputing and out-of-core methods.
We achieve an average of 1.52x speedup in six different models over the state-of-the-art out-of-core methods.
Our data-parallel out-of-core solution can outperform complex hybrid model parallelism in training large models, e.g. Megatron-LM and Turing-NLG.
arXiv Detail & Related papers (2020-08-26T07:24:34Z)
- The Right Tool for the Job: Matching Model and Instance Complexities [62.95183777679024]
As NLP models become larger, executing a trained model requires significant computational resources incurring monetary and environmental costs.
We propose a modification to contextual representation fine-tuning which, during inference, allows for an early (and fast) "exit"
We test our proposed modification on five different datasets in two tasks: three text classification datasets and two natural language inference benchmarks.
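The early-exit mechanism can be sketched generically (my illustration of the idea; the paper attaches classifiers to intermediate transformer layers during fine-tuning, and the classifiers below are hypothetical stand-ins): each layer gets its own classifier, and inference stops as soon as one is confident enough.

```python
# Confidence-based early exit: stop at the first layer whose attached
# classifier is confident enough, skipping all deeper layers.
def early_exit(x, layer_classifiers, threshold=0.9):
    for depth, classify in enumerate(layer_classifiers):
        label, confidence = classify(x)
        if confidence >= threshold:
            return label, depth          # exit early: deeper layers skipped
    return label, depth                  # fell through to the final layer

# Hypothetical per-layer classifiers growing more confident with depth.
classifiers = [
    lambda x: ("pos", 0.6),
    lambda x: ("pos", 0.95),   # confident enough -> exit at depth 1
    lambda x: ("pos", 0.99),
]
label, depth = early_exit("some text", classifiers)
print(label, depth)  # -> pos 1
```

Easy inputs exit at shallow layers and pay a fraction of the full model's cost, while hard inputs still traverse every layer.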
arXiv Detail & Related papers (2020-04-16T04:28:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information above and is not responsible for any consequences of its use.