Magpie: Automatically Tuning Static Parameters for Distributed File
Systems using Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2207.09298v1
- Date: Tue, 19 Jul 2022 14:32:07 GMT
- Title: Magpie: Automatically Tuning Static Parameters for Distributed File
Systems using Deep Reinforcement Learning
- Authors: Houkun Zhu, Dominik Scheinert, Lauritz Thamsen, Kordian Gontarska, and
Odej Kao
- Abstract summary: Magpie is a novel approach to tune static parameters in distributed file systems.
We show that Magpie can noticeably improve the performance of the distributed file system Lustre.
- Score: 0.06524460254566904
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Distributed file systems are widely used nowadays, yet using their default
configurations is often not optimal. At the same time, tuning configuration
parameters is typically challenging and time-consuming. It demands expertise,
and tuning operations can also be expensive. This is especially the case for
static parameters, where changes take effect only after a restart of the system
or workloads. We propose a novel approach, Magpie, which utilizes deep
reinforcement learning to tune static parameters by strategically exploring and
exploiting configuration parameter spaces. To boost the tuning of the static
parameters, our method employs both server and client metrics of distributed
file systems to understand the relationship between static parameters and
performance. Our empirical evaluation shows that Magpie can noticeably
improve the performance of the distributed file system Lustre: when tuning
towards a single performance indicator, our approach achieves throughput
gains of 91.8% on average over the default configuration, and 39.7% more
throughput than the baseline.
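The abstract only sketches the tuning loop at a high level: an agent proposes values for static parameters, the file system is restarted so the changes take effect, a workload is run, and server- and client-side metrics are turned into a reward. Below is a minimal, hypothetical sketch of such a loop. The parameter names, the helper functions (restart_lustre_with, run_benchmark), and the random stand-in for the learned policy are illustrative assumptions, not Magpie's actual implementation or API.

```python
import random

# Hypothetical static Lustre parameters and candidate values to explore;
# the names are illustrative, not the parameter set used in the paper.
PARAM_SPACE = {
    "stripe_count": [1, 2, 4, 8],
    "stripe_size_mb": [1, 4, 16, 64],
    "max_rpcs_in_flight": [8, 16, 32],
}

def restart_lustre_with(config):
    """Apply the static parameters and restart the file system (placeholder)."""

def run_benchmark():
    """Run a workload and return combined server/client metrics (placeholder)."""
    return {"throughput_mb_s": random.uniform(100.0, 1000.0),
            "server_cpu_util": random.uniform(0.0, 1.0)}

def reward(metrics, baseline_throughput):
    """Single-indicator reward: relative throughput improvement over baseline."""
    return metrics["throughput_mb_s"] / baseline_throughput - 1.0

def propose(param_space):
    """Stand-in for the learned policy: random exploration of the space."""
    return {name: random.choice(values) for name, values in param_space.items()}

baseline = run_benchmark()["throughput_mb_s"]   # measure the default configuration
best_cfg, best_reward = None, float("-inf")
for episode in range(20):                       # each episode needs a restart
    cfg = propose(PARAM_SPACE)
    restart_lustre_with(cfg)
    r = reward(run_benchmark(), baseline)
    if r > best_reward:
        best_cfg, best_reward = cfg, r
print("best configuration found:", best_cfg, "reward:", round(best_reward, 3))
```

In the approach described by the paper, the propose step would be a deep reinforcement learning policy trained on the collected metrics, and the reward can combine several server- and client-side indicators; the sketch only shows where restarts, measurements, and rewards enter the loop.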
Related papers
- Dynamic Adapter Meets Prompt Tuning: Parameter-Efficient Transfer Learning for Point Cloud Analysis [51.14136878142034]
Point cloud analysis has achieved outstanding performance by transferring point cloud pre-trained models.
Existing methods for model adaptation usually update all model parameters, which is inefficient as it incurs high computational costs.
In this paper, we aim to study parameter-efficient transfer learning for point cloud analysis with an ideal trade-off between task performance and parameter efficiency.
arXiv Detail & Related papers (2024-03-03T08:25:04Z)
- E^2VPT: An Effective and Efficient Approach for Visual Prompt Tuning [55.50908600818483]
Fine-tuning large-scale pretrained vision models for new tasks has become increasingly parameter-intensive.
We propose an Effective and Efficient Visual Prompt Tuning (E2VPT) approach for large-scale transformer-based model adaptation.
Our approach outperforms several state-of-the-art baselines on two benchmarks.
arXiv Detail & Related papers (2023-07-25T19:03:21Z)
- Parameter-Efficient Fine-Tuning without Introducing New Latency [7.631596468553607]
We introduce a novel adapter technique that directly applies the adapter to pre-trained parameters instead of the hidden representation.
Our proposed method attains a new state-of-the-art outcome in terms of both performance and storage efficiency, storing only 0.03% of the parameters of full fine-tuning.
arXiv Detail & Related papers (2023-05-26T08:44:42Z)
- Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning [91.5113227694443]
We propose a novel Sensitivity-aware visual Parameter-efficient fine-Tuning (SPT) scheme.
SPT allocates trainable parameters to task-specific important positions.
Experiments on a wide range of downstream recognition tasks show that our SPT is complementary to the existing PEFT methods.
arXiv Detail & Related papers (2023-03-15T12:34:24Z)
- MixPHM: Redundancy-Aware Parameter-Efficient Tuning for Low-Resource Visual Question Answering [66.05768870785548]
Finetuning pretrained Vision-Language Models (VLMs) has been a prevailing paradigm for achieving state-of-the-art performance in Visual Question Answering (VQA).
Current parameter-efficient tuning methods dramatically reduce the number of tunable parameters, but there still exists a significant performance gap with full finetuning.
We propose MixPHM, a redundancy-aware parameter-efficient tuning method that outperforms full finetuning in low-resource VQA.
arXiv Detail & Related papers (2023-03-02T13:28:50Z)
- Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning [126.84770886628833]
Existing finetuning methods either tune all parameters of the pretrained model (full finetuning) or only tune the last linear layer (linear probing).
We propose a new parameter-efficient finetuning method termed SSF, representing that researchers only need to Scale and Shift the deep Features extracted by a pre-trained model to catch up with the performance of full finetuning.
arXiv Detail & Related papers (2022-10-17T08:14:49Z)
- Tuning Word2vec for Large Scale Recommendation Systems [14.074296985040704]
Word2vec is a powerful machine learning tool that emerged from Natural Language Processing (NLP).
We show that unconstrained optimization yields an average 221% improvement in hit rate over the default parameters.
We demonstrate a 138% average improvement in hit rate with a runtime-budget-constrained hyperparameter optimization.
arXiv Detail & Related papers (2020-09-24T10:50:19Z)
- Sapphire: Automatic Configuration Recommendation for Distributed Storage Systems [11.713288567936875]
Tuning parameters can provide significant performance gains, but it is a difficult task requiring profound experience and expertise.
We propose an automatic simulation-based approach, Sapphire, to recommend optimal configurations.
Results show that Sapphire significantly boosts Ceph performance to 2.2x compared to the default configuration.
arXiv Detail & Related papers (2020-07-07T06:17:07Z)
- Dynamic Parameter Allocation in Parameter Servers [74.250687861348]
We propose to integrate dynamic parameter allocation into parameter servers and describe an efficient implementation of such a parameter server called Lapse.
We found that Lapse provides near-linear scaling and can be orders of magnitude faster than existing parameter servers.
arXiv Detail & Related papers (2020-02-03T11:37:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.