Dependable Distributed Training of Compressed Machine Learning Models
- URL: http://arxiv.org/abs/2402.14346v1
- Date: Thu, 22 Feb 2024 07:24:26 GMT
- Title: Dependable Distributed Training of Compressed Machine Learning Models
- Authors: Francesco Malandrino and Giuseppe Di Giacomo and Marco Levorato and
Carla Fabiana Chiasserini
- Abstract summary: We propose DepL, a framework for dependable learning orchestration.
It makes high-quality, efficient decisions on (i) the data to leverage for learning, (ii) the models to use and when to switch among them, and (iii) the clusters of nodes, and the resources thereof, to exploit.
We prove that DepL has a constant competitive ratio and polynomial complexity, and show that it outperforms the state-of-the-art by over 27%.
- Score: 16.403297089086042
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The existing work on the distributed training of machine learning (ML) models
has consistently overlooked the distribution of the achieved learning quality,
focusing instead on its average value. This leads to a poor dependability of
the resulting ML models, whose performance may be much worse than expected. We
fill this gap by proposing DepL, a framework for dependable learning
orchestration, able to make high-quality, efficient decisions on (i) the data
to leverage for learning, (ii) the models to use and when to switch among them,
and (iii) the clusters of nodes, and the resources thereof, to exploit. For
concreteness, we consider as possible available models a full DNN and its
compressed versions. Unlike previous studies, DepL guarantees that a target
learning quality is reached with a target probability, while keeping the
training cost at a minimum. We prove that DepL has constant competitive ratio
and polynomial complexity, and show that it outperforms the state-of-the-art by
over 27% and closely matches the optimum.
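For readers skimming the abstract, the guarantee it describes can be summarized as a chance-constrained cost minimization. The notation below is ours and purely illustrative, not taken from the paper: d ranges over data selections, m over the full DNN and its compressed versions, c over node clusters and their resources, q* is the target learning quality, and p* the target probability.
```latex
% Illustrative formulation (our notation, not the paper's): minimize the
% training cost while reaching quality q* with probability at least p*.
\begin{equation*}
  \min_{d,\,m,\,c} \; \mathrm{cost}(d, m, c)
  \quad \text{s.t.} \quad
  \Pr\!\big[\, Q(d, m, c) \ge q^{*} \,\big] \;\ge\; p^{*}
\end{equation*}
```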
Related papers
- Enhancing Knowledge Distillation of Large Language Models through Efficient Multi-Modal Distribution Alignment [10.104085497265004]
We propose Ranking Loss based Knowledge Distillation (RLKD), which encourages consistency of peak predictions between the teacher and student models.
Our method enables the student model to better learn the multi-modal distributions of the teacher model, leading to a significant performance improvement in various downstream tasks.
arXiv Detail & Related papers (2024-09-19T08:06:42Z)
- Uncertainty Aware Learning for Language Model Alignment [97.36361196793929]
We propose uncertainty-aware learning (UAL) to improve the model alignment of different task scenarios.
We implement UAL in a simple fashion -- adaptively setting the label smoothing value of training according to the uncertainty of individual samples.
Experiments on widely used benchmarks demonstrate that our UAL significantly and consistently outperforms standard supervised fine-tuning.
arXiv Detail & Related papers (2024-06-07T11:37:45Z)
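As a reading aid for the UAL entry above, here is a minimal sketch of uncertainty-adaptive label smoothing: each sample's smoothing value grows with an externally supplied uncertainty score, so more uncertain samples receive softer targets. The function name, the linear uncertainty-to-smoothing mapping, and the max_smoothing parameter are our own assumptions, not the paper's implementation.
```python
# Minimal sketch (not the paper's code) of uncertainty-adaptive label smoothing.
import torch
import torch.nn.functional as F

def adaptive_label_smoothing_loss(logits, targets, uncertainty, max_smoothing=0.2):
    """logits: (B, C); targets: (B,) class ids; uncertainty: (B,) scores in [0, 1]."""
    num_classes = logits.size(-1)
    # Map per-sample uncertainty to a per-sample smoothing value (assumed linear mapping).
    eps = max_smoothing * uncertainty                       # (B,)
    log_probs = F.log_softmax(logits, dim=-1)               # (B, C)
    one_hot = F.one_hot(targets, num_classes).float()       # (B, C)
    # Smoothed targets: (1 - eps) on the true label, eps spread uniformly over classes.
    soft_targets = (1.0 - eps).unsqueeze(1) * one_hot + eps.unsqueeze(1) / num_classes
    return -(soft_targets * log_probs).sum(dim=-1).mean()

# Example usage with dummy tensors:
logits = torch.randn(4, 10)
targets = torch.randint(0, 10, (4,))
uncertainty = torch.rand(4)   # e.g., derived from model or ensemble disagreement
loss = adaptive_label_smoothing_loss(logits, targets, uncertainty)
```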
- Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters [65.15700861265432]
We present a parameter-efficient continual learning framework to alleviate long-term forgetting in incremental learning with vision-language models.
Our approach involves the dynamic expansion of a pre-trained CLIP model, through the integration of Mixture-of-Experts (MoE) adapters.
To preserve the zero-shot recognition capability of vision-language models, we introduce a Distribution Discriminative Auto-Selector.
arXiv Detail & Related papers (2024-03-18T08:00:23Z)
- On Task Performance and Model Calibration with Supervised and Self-Ensembled In-Context Learning [71.44986275228747]
In-context learning (ICL) has become an efficient approach propelled by the recent advancements in large language models (LLMs).
However, both paradigms are prone to the critical problem of overconfidence (i.e., miscalibration).
arXiv Detail & Related papers (2023-12-21T11:55:10Z)
- From Quantity to Quality: Boosting LLM Performance with Self-Guided Data Selection for Instruction Tuning [52.257422715393574]
We introduce a self-guided methodology for Large Language Models (LLMs) to autonomously discern and select cherry samples from open-source datasets.
Our key innovation, the Instruction-Following Difficulty (IFD) metric, identifies discrepancies between a model's expected responses and its intrinsic generation capability.
arXiv Detail & Related papers (2023-08-23T09:45:29Z)
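To make the preceding entry's idea concrete, the sketch below scores a sample by comparing the model's loss on the response with and without its instruction; a ratio near or above 1 suggests the instruction provides little guidance for generating the response. This is a hedged approximation of such a difficulty score, using placeholder Hugging Face models; the exact IFD definition should be checked against the paper.
```python
# Hedged sketch of an instruction-following-difficulty style score (not the paper's code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def answer_loss(model, tokenizer, prompt, answer):
    """Average cross-entropy over the answer tokens, optionally conditioned on `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids if prompt else None
    answer_ids = tokenizer(answer, return_tensors="pt").input_ids
    input_ids = answer_ids if prompt_ids is None else torch.cat([prompt_ids, answer_ids], dim=1)
    labels = input_ids.clone()
    if prompt_ids is not None:
        labels[:, : prompt_ids.size(1)] = -100   # prompt tokens do not contribute to the loss
    with torch.no_grad():
        return model(input_ids, labels=labels).loss.item()

model = AutoModelForCausalLM.from_pretrained("gpt2")     # placeholder model
tokenizer = AutoTokenizer.from_pretrained("gpt2")
instruction = "Summarize: the cat sat on the mat."
response = " A cat sat on a mat."
# Ratio of conditioned to unconditioned answer loss (an IFD-style score).
ifd_score = answer_loss(model, tokenizer, instruction, response) / \
            answer_loss(model, tokenizer, "", response)
```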
- CLIPood: Generalizing CLIP to Out-of-Distributions [73.86353105017076]
Contrastive language-image pre-training (CLIP) models have shown impressive zero-shot ability, but the further adaptation of CLIP on downstream tasks undesirably degrades OOD performance.
We propose CLIPood, a fine-tuning method that can adapt CLIP models to OOD situations where both domain shifts and open classes may occur on unseen test data.
Experiments on diverse datasets with different OOD scenarios show that CLIPood consistently outperforms existing generalization techniques.
arXiv Detail & Related papers (2023-02-02T04:27:54Z)
- Matching DNN Compression and Cooperative Training with Resources and Data Availability [20.329698347331075]
How much and when an ML model should be compressed, and where its training should be executed, are hard decisions to make.
We model the network system focusing on the training of DNNs, formalize the multi-dimensional problem, and formulate an approximate dynamic programming problem.
We prove that PACT's solutions can get as close to the optimum as desired, at the cost of an increased time complexity.
arXiv Detail & Related papers (2022-12-02T09:52:18Z)
- FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning [87.08902493524556]
Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape from original data.
arXiv Detail & Related papers (2022-07-20T04:55:18Z)
- Energy-efficient Training of Distributed DNNs in the Mobile-edge-cloud Continuum [18.247181241860538]
We address distributed machine learning in multi-tier networks where a heterogeneous set of nodes cooperate to perform a learning task.
We propose a solution concept, called RightTrain, that achieves energy-efficient ML model training, while fulfilling learning time and quality requirements.
Our performance evaluation shows that RightTrain closely matches the optimum and outperforms the state of the art by over 50%.
arXiv Detail & Related papers (2022-02-23T08:35:41Z)
- Training Speech Recognition Models with Federated Learning: A Quality/Cost Framework [4.125187280299247]
We propose using federated learning, a decentralized on-device learning paradigm, to train speech recognition models.
By performing epochs of training on a per-user basis, federated learning must incur the cost of dealing with non-IID data distributions.
arXiv Detail & Related papers (2020-10-29T22:01:37Z)