Do Performance Aspirations Matter for Guiding Software Configuration
Tuning?
- URL: http://arxiv.org/abs/2301.03290v1
- Date: Mon, 9 Jan 2023 12:11:05 GMT
- Title: Do Performance Aspirations Matter for Guiding Software Configuration
Tuning?
- Authors: Tao Chen and Miqing Li
- Abstract summary: We show that the realism of aspirations is the key factor that determines whether they should be used to guide the tuning.
The available tuning budget can also influence the choice under unrealistic aspirations, but its effect is insignificant under realistic ones.
- Score: 6.492599077364121
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Configurable software systems can be tuned for better performance. Leveraging
some Pareto optimizers, recent work has shifted from tuning for a single,
time-related performance objective to two intrinsically different objectives
that assess distinct performance aspects of the system, each with varying
aspirations. Before we design better optimizers, a crucial engineering decision
to make therein is how to handle the performance requirements with clear
aspirations in the tuning process. For this, the community takes two
alternative optimization models: either quantifying and incorporating the
aspirations into the search objectives that guide the tuning, or not
considering the aspirations during the search but using them only in the
later decision-making process. However, despite being a crucial decision
that determines how an optimizer can be designed and tailored, there is a
rather limited understanding of which optimization model should be chosen under
what particular circumstance, and why.
In this paper, we seek to close this gap. Firstly, we do that through a
review of over 426 papers in the literature and 14 real-world requirements
datasets. Drawing on these, we then conduct a comprehensive empirical study
that covers 15 combinations of the state-of-the-art performance requirement
patterns, four types of aspiration space, three Pareto optimizers, and eight
real-world systems/environments, leading to 1,296 cases of investigation. We
found that (1) the realism of aspirations is the key factor that determines
whether they should be used to guide the tuning; (2) the given patterns and the
position of the realistic aspirations in the objective landscape are less
important for the choice, but they do matter to the extents of improvement; (3)
the available tuning budget can also influence the choice under unrealistic
aspirations, but its effect is insignificant under realistic ones.
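The two optimization models contrasted in the abstract can be sketched in a few lines: in one, the aspirations are quantified and used to guide the search directly; in the other, the search builds a Pareto front without them, and the aspirations are applied only in the later decision-making step. The sketch below is a minimal illustration under assumed toy objectives and aspiration levels; the system, objective functions, and numbers are hypothetical and are not taken from the paper's benchmarks, and plain random sampling stands in for a real Pareto optimizer.

```python
import random

# Toy configurable "system" with two conflicting performance objectives,
# both to be minimized (e.g., latency and energy). Purely illustrative.
def measure(config):
    x = config / 100.0
    latency = (x - 0.2) ** 2 + 0.05
    energy = (x - 0.8) ** 2 + 0.05
    return (latency, energy)

# Hypothetical aspiration levels, one per objective.
ASPIRATION = (0.10, 0.15)

def aspiration_distance(perf):
    # Total amount by which performance misses the aspirations (0 if met).
    return sum(max(0.0, p - a) for p, a in zip(perf, ASPIRATION))

def pareto_front(points):
    # Keep configurations not dominated by any other sampled configuration
    # (another point that is no worse on both objectives and is distinct).
    front = []
    for c, p in points:
        dominated = any(
            q != p and all(q[i] <= p[i] for i in range(2))
            for _, q in points
        )
        if not dominated:
            front.append((c, p))
    return front

random.seed(0)
samples = [(c, measure(c)) for c in (random.randint(0, 100) for _ in range(200))]

# Model A: aspirations guide the tuning -- candidates are ranked by how far
# they fall short of the aspirations during the search itself.
guided_best = min(samples, key=lambda cp: aspiration_distance(cp[1]))

# Model B: aspirations are ignored during the search -- the Pareto front is
# built first, and the aspirations are applied only when deciding.
front = pareto_front(samples)
decided_best = min(front, key=lambda cp: aspiration_distance(cp[1]))
```

Under this toy setup the two models coincide in the end, because the sampled set is fixed; the paper's question is precisely when the guidance in Model A changes what the search explores, which depends on whether the aspirations are realistic.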
Related papers
- Offline Model-Based Optimization: Comprehensive Review [61.91350077539443]
Offline optimization is a fundamental challenge in science and engineering, where the goal is to optimize black-box functions using only offline datasets.
Recent advances in model-based optimization have harnessed the generalization capabilities of deep neural networks to develop offline-specific surrogate and generative models.
Despite its growing impact in accelerating scientific discovery, the field lacks a comprehensive review.
arXiv Detail & Related papers (2025-03-21T16:35:02Z) - Towards Robust Interpretable Surrogates for Optimization [0.0]
An important factor in the practical implementation of optimization models is the acceptance by the intended users.
We present suitable models based on different ways of modeling uncertainty, together with solution methods.
arXiv Detail & Related papers (2024-12-02T08:31:48Z) - An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to capture potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z) - Deep Pareto Reinforcement Learning for Multi-Objective Recommender Systems [60.91599969408029]
Optimizing multiple objectives simultaneously is an important task for recommendation platforms.
Existing multi-objective recommender systems do not systematically consider such dynamic relationships.
arXiv Detail & Related papers (2024-07-04T02:19:49Z) - Controllable Preference Optimization: Toward Controllable Multi-Objective Alignment [103.12563033438715]
Alignment in artificial intelligence pursues consistency between model responses and human preferences as well as values.
Existing alignment techniques are mostly unidirectional, leading to suboptimal trade-offs and poor flexibility over various objectives.
We introduce controllable preference optimization (CPO), which explicitly specifies preference scores for different objectives.
arXiv Detail & Related papers (2024-02-29T12:12:30Z) - Towards General and Efficient Online Tuning for Spark [55.30868031221838]
We present a general and efficient Spark tuning framework that can deal with the three issues simultaneously.
We have implemented this framework as an independent cloud service, and applied it to the data platform in Tencent.
arXiv Detail & Related papers (2023-09-05T02:16:45Z) - Evolutionary Solution Adaption for Multi-Objective Metal Cutting Process
Optimization [59.45414406974091]
We introduce a framework for system flexibility that allows us to study the ability of an algorithm to transfer solutions from previous optimization tasks.
We study the flexibility of NSGA-II, which we extend by two variants: 1) varying goals, that optimize solutions for two tasks simultaneously to obtain in-between source solutions expected to be more adaptable, and 2) active-inactive genotype, that accommodates different possibilities that can be activated or deactivated.
Results show that adaption with standard NSGA-II greatly reduces the number of evaluations required for optimization to a target goal, while the proposed variants further improve the adaption costs.
arXiv Detail & Related papers (2023-05-31T12:07:50Z) - The Perils of Learning Before Optimizing [16.97597806975415]
We show how prediction models can be learned end-to-end by differentiating through the optimization task.
We show that the performance gap between a two-stage and an end-to-end approach is closely related to the "price of correlation" concept in optimization.
arXiv Detail & Related papers (2021-06-18T20:43:47Z) - Multi-Objectivizing Software Configuration Tuning (for a single
performance concern) [7.285442358509729]
We propose a meta-objectivization model (MMO) that considers an auxiliary performance objective.
Our model is statistically more effective than state-of-the-art single-objective counterparts in overcoming local optima.
arXiv Detail & Related papers (2021-05-31T03:03:53Z) - Goal Seeking Quadratic Unconstrained Binary Optimization [0.5439020425819]
We present two variants of goal-seeking QUBO that minimize the deviation from the goal through a tabu-search based greedy one-flip.
arXiv Detail & Related papers (2021-03-24T03:03:13Z) - Bayesian Optimization for Selecting Efficient Machine Learning Models [53.202224677485525]
We present a unified Bayesian Optimization framework for jointly optimizing models for both prediction effectiveness and training efficiency.
Experiments on model selection for recommendation tasks indicate that models selected this way significantly improve model training efficiency.
arXiv Detail & Related papers (2020-08-02T02:56:30Z) - Descending through a Crowded Valley - Benchmarking Deep Learning
Optimizers [29.624308090226375]
In this work, we aim to replace these anecdotes, if not with a conclusive ranking, then at least with evidence-backed anecdotes.
To do so, we perform an extensive, standardized benchmark of fifteen particularly popular deep learning optimizers.
Our open-sourced results are available as challenging and well-tuned baselines for more meaningful evaluations of novel optimization methods.
arXiv Detail & Related papers (2020-07-03T08:19:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.