Sapphire: Automatic Configuration Recommendation for Distributed Storage
Systems
- URL: http://arxiv.org/abs/2007.03220v1
- Date: Tue, 7 Jul 2020 06:17:07 GMT
- Title: Sapphire: Automatic Configuration Recommendation for Distributed Storage
Systems
- Authors: Wenhao Lyu, Youyou Lu, Jiwu Shu, Wei Zhao
- Abstract summary: Tuning parameters can provide significant performance gains but is a difficult task requiring deep experience and expertise.
We propose an automatic simulation-based approach, Sapphire, to recommend optimal configurations.
Results show that Sapphire significantly boosts Ceph performance to 2.2x that of the default configuration.
- Score: 11.713288567936875
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern distributed storage systems come with a plethora of configurable
parameters that control module behavior and affect system performance. Default
settings provided by developers are often suboptimal for specific use cases.
Tuning parameters can provide significant performance gains but is a difficult
task requiring deep experience and expertise, due to the immense number of
configurable parameters, complex inner dependencies, and non-linear system
behaviors. To overcome these difficulties, we propose an automatic
simulation-based approach, Sapphire, to recommend optimal configurations by
leveraging machine learning and black-box optimization techniques. We evaluate
Sapphire on Ceph. Results show that Sapphire significantly boosts Ceph
performance to 2.2x that of the default configuration.
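The abstract gives no implementation detail; as a minimal sketch of the black-box tuning loop that Sapphire automates, the Python below runs random search over a toy configuration space. The parameter names, ranges, and benchmark function are hypothetical stand-ins, not Sapphire's actual search space or optimizer.

```python
import random

# Hypothetical tuning knobs loosely modeled on Ceph-style options;
# names and ranges are illustrative, not Sapphire's actual search space.
SEARCH_SPACE = {
    "osd_op_num_threads": [1, 2, 4, 8, 16],
    "bluestore_cache_size_mb": [512, 1024, 2048, 4096],
    "journal_max_write_entries": [100, 500, 1000, 5000],
}

def run_benchmark(config):
    """Stand-in for an expensive benchmark or simulator run that
    returns a throughput score for one configuration."""
    # A made-up response surface with an interior optimum.
    score = 0.0
    score -= abs(config["osd_op_num_threads"] - 8) * 3.0
    score -= abs(config["bluestore_cache_size_mb"] - 2048) / 256.0
    score -= abs(config["journal_max_write_entries"] - 1000) / 200.0
    return 100.0 + score + random.gauss(0, 1)  # noisy measurement

def random_search(n_trials=50, seed=0):
    """Minimal black-box loop: sample a config, measure it, keep the best."""
    random.seed(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        s = run_benchmark(cfg)
        if s > best_score:
            best_cfg, best_score = cfg, s
    return best_cfg, best_score

if __name__ == "__main__":
    cfg, score = random_search()
    print(f"best config: {cfg}  (score {score:.1f})")
```

Sapphire replaces the random sampler with an ML-guided black-box optimizer and replaces live benchmarking with simulation, but the observe-and-update structure of the loop is the same.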
Related papers
- SigOpt Mulch: An Intelligent System for AutoML of Gradient Boosted Trees [3.6449336503217786]
Gradient boosted trees (GBTs) are ubiquitous models used by researchers, machine learning (ML) practitioners, and data scientists.
We present SigOpt Mulch, a model-aware hyperparameter tuning system specifically designed for automated tuning of GBTs.
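SigOpt Mulch itself is a proprietary, model-aware system; as a generic, hedged sketch of the underlying task (tuning GBT hyperparameters), the Python below runs a plain randomized search with scikit-learn. The dataset and parameter grid are illustrative only.

```python
# A generic illustration of the task SigOpt Mulch automates: tuning
# gradient boosted trees. This is plain randomized search with
# scikit-learn, not Mulch's model-aware strategy.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import RandomizedSearchCV

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)

param_distributions = {
    "n_estimators": [50, 100, 200, 400],
    "learning_rate": [0.01, 0.05, 0.1, 0.2],
    "max_depth": [2, 3, 4, 6],
    "subsample": [0.6, 0.8, 1.0],
}

search = RandomizedSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_distributions,
    n_iter=20,
    cv=3,
    scoring="neg_mean_squared_error",
    random_state=0,
)
search.fit(X, y)
print("best hyperparameters:", search.best_params_)
```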
arXiv Detail & Related papers (2023-07-10T18:40:25Z)
- CAMEO: A Causal Transfer Learning Approach for Performance Optimization
of Configurable Computer Systems [16.75106122540052]
We propose CAMEO, a method that identifies invariant causal predictors under environmental changes.
We demonstrate significant performance improvements over state-of-the-art optimization methods in MLPerf deep learning systems, a video analytics pipeline, and a database system.
arXiv Detail & Related papers (2023-06-13T16:28:37Z)
- Online Continuous Hyperparameter Optimization for Generalized Linear Contextual Bandits [55.03293214439741]
In contextual bandits, an agent sequentially selects actions from a time-dependent action set based on past experience.
We propose the first online continuous hyperparameter tuning framework for contextual bandits.
We show that it achieves sublinear regret in theory and consistently outperforms all existing methods on both synthetic and real datasets.
arXiv Detail & Related papers (2023-02-18T23:31:20Z)
- AutoPEFT: Automatic Configuration Search for Parameter-Efficient
Fine-Tuning [77.61565726647784]
Motivated by advances in neural architecture search, we propose AutoPEFT for automatic PEFT configuration selection.
We show that AutoPEFT-discovered configurations significantly outperform existing PEFT methods and are on par or better than FFT without incurring substantial training efficiency costs.
arXiv Detail & Related papers (2023-01-28T08:51:23Z)
- On Controller Tuning with Time-Varying Bayesian Optimization [74.57758188038375]
We use time-varying Bayesian optimization (TVBO) to tune controllers online in changing environments, using appropriate prior knowledge about the control objective and its changes.
We propose a novel TVBO strategy using Uncertainty-Injection (UI), which incorporates the assumption of incremental and lasting changes.
Our model outperforms the state-of-the-art method in TVBO, exhibiting reduced regret and fewer unstable parameter configurations.
arXiv Detail & Related papers (2022-07-22T14:54:13Z)
- Magpie: Automatically Tuning Static Parameters for Distributed File
Systems using Deep Reinforcement Learning [0.06524460254566904]
Magpie is a novel approach to tune static parameters in distributed file systems.
We show that Magpie can noticeably improve the performance of the distributed file system Lustre.
arXiv Detail & Related papers (2022-07-19T14:32:07Z)
- Tree ensemble kernels for Bayesian optimization with known constraints
over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
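As a hedged sketch of challenge (i), the Python below uses the disagreement among a random forest's per-tree predictions as a crude uncertainty estimate and ranks a discrete candidate pool with an upper confidence bound; enumerating candidates sidesteps challenge (ii). The paper's kernel formulation and constrained global optimizer are not reproduced here.

```python
# Per-tree disagreement in a random forest as a crude uncertainty
# estimate, with discrete candidates ranked by an upper confidence
# bound (UCB). Objective, data, and beta are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def objective(x):
    """Hypothetical expensive black-box function."""
    return -np.sum((x - 0.3) ** 2, axis=-1)

# Observations collected so far.
X_obs = rng.uniform(0, 1, size=(20, 3))
y_obs = objective(X_obs)

forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_obs, y_obs)

# Candidate pool; enumeration avoids optimizing the piece-wise constant
# acquisition function directly.
X_cand = rng.uniform(0, 1, size=(1000, 3))
per_tree = np.stack([t.predict(X_cand) for t in forest.estimators_])
mean, std = per_tree.mean(axis=0), per_tree.std(axis=0)

beta = 2.0  # exploration weight
ucb = mean + beta * std
x_next = X_cand[np.argmax(ucb)]
print("next point to evaluate:", x_next)
```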
arXiv Detail & Related papers (2022-07-02T16:59:37Z)
- RecPipe: Co-designing Models and Hardware to Jointly Optimize
Recommendation Quality and Performance [6.489720534548981]
RecPipe is a system to jointly optimize recommendation quality and inference performance.
RPAccel is a custom accelerator that jointly optimizes quality, tail latency, and system throughput.
arXiv Detail & Related papers (2021-05-18T20:44:04Z)
- Amazon SageMaker Automatic Model Tuning: Scalable Black-box Optimization [23.52446054521187]
Amazon SageMaker Automatic Model Tuning (AMT) is a fully managed system for black-box optimization at scale.
AMT finds the best version of a machine learning model by repeatedly training it with different hyperparameter configurations.
It can be used with built-in algorithms, custom algorithms, and Amazon SageMaker pre-built containers for machine learning frameworks.
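As an illustrative sketch of driving AMT from the public SageMaker Python SDK (the image URI, IAM role, S3 paths, and metric name below are placeholders; consult the SDK documentation for the authoritative interface):

```python
# Hedged sketch of launching an AMT tuning job via the SageMaker Python
# SDK. Image URI, IAM role, S3 paths, and the metric name are
# placeholders, not verified values.
from sagemaker.estimator import Estimator
from sagemaker.tuner import ContinuousParameter, HyperparameterTuner, IntegerParameter

estimator = Estimator(
    image_uri="<built-in-or-custom-training-image>",
    role="<execution-role-arn>",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://<bucket>/output",
)

tuner = HyperparameterTuner(
    estimator,
    objective_metric_name="validation:rmse",  # assumed built-in metric
    objective_type="Minimize",
    hyperparameter_ranges={
        "eta": ContinuousParameter(0.01, 0.3),
        "max_depth": IntegerParameter(2, 10),
    },
    max_jobs=20,
    max_parallel_jobs=2,
)

tuner.fit({"train": "s3://<bucket>/train", "validation": "s3://<bucket>/val"})
print(tuner.best_training_job())
```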
arXiv Detail & Related papers (2020-12-15T18:34:34Z)
- Self-Tuning Stochastic Optimization with Curvature-Aware Gradient
Filtering [53.523517926927894]
We explore the use of exact per-sample Hessian-vector products and gradients to construct self-tuning quadratics.
We prove that our model-based procedure converges in the noisy gradient setting.
This is an interesting step toward constructing self-tuning quadratics.
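The key ingredient, Hessian-vector products, can be illustrated on a quadratic, where the product is available in closed form and can be checked against a finite difference of gradients; this is a generic numerical identity, not the paper's per-sample estimator.

```python
# For a quadratic f(x) = 0.5 x^T A x - b^T x, the gradient is A x - b
# and the Hessian-vector product is simply A v. The finite-difference
# identity H v ~ (grad(x + eps*v) - grad(x)) / eps checks it.
import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)   # symmetric positive definite
b = rng.normal(size=n)

def grad(x):
    return A @ x - b

x = rng.normal(size=n)
v = rng.normal(size=n)

hvp_exact = A @ v
eps = 1e-6
hvp_fd = (grad(x + eps * v) - grad(x)) / eps

print("max abs error:", np.max(np.abs(hvp_exact - hvp_fd)))
```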
arXiv Detail & Related papers (2020-11-09T22:07:30Z)
- Optimal non-classical correlations of light with a levitated nano-sphere [48.7576911714538]
Nonclassical correlations provide a resource for many applications in quantum technology.
Optomechanical systems can be arranged to generate quantum entanglement between the mechanics and a mode of travelling light.
We propose automated optimisation of the production of quantum correlations in such a system.
arXiv Detail & Related papers (2020-06-26T15:27:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.