Multi-Objectivizing Software Configuration Tuning (for a single
performance concern)
- URL: http://arxiv.org/abs/2106.01331v1
- Date: Mon, 31 May 2021 03:03:53 GMT
- Title: Multi-Objectivizing Software Configuration Tuning (for a single
performance concern)
- Authors: Tao Chen and Miqing Li
- Abstract summary: We propose a meta multi-objectivization model (MMO) that considers an auxiliary performance objective.
Our model is statistically more effective than state-of-the-art single-objective counterparts in overcoming local optima.
- Score: 7.285442358509729
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatically tuning software configuration for optimizing a single
performance attribute (e.g., minimizing latency) is not trivial, due to the
nature of the configuration systems (e.g., complex landscape and expensive
measurement). To deal with the problem, existing work has focused on developing various effective optimizers. However, a prominent issue that all these optimizers must address is how to prevent the search from being trapped in local optima -- a hard nut to crack for software configuration tuning, given its rugged and sparse landscape in which neighboring configurations tend to behave very differently. Overcoming this in an expensive measurement setting is even more challenging. In this paper, we take a different perspective to tackle
this issue. Instead of focusing on improving the optimizer, we work at the level of the optimization model: we propose a meta multi-objectivization model (MMO) that considers an auxiliary performance
objective (e.g., throughput in addition to latency). What makes this model
unique is that we do not optimize the auxiliary performance objective; rather, we use it to make configurations that perform similarly on the target objective, yet differ from one another, less comparable (i.e., Pareto-nondominated to each other), thus preventing the search from being trapped in local optima.
Experiments on eight real-world software systems/environments with diverse
performance attributes reveal that our MMO model is statistically more
effective than state-of-the-art single-objective counterparts in overcoming
local optima (up to 42% gain), while using as little as 24% of their measurements
to achieve the same (or better) performance result.
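To make the idea concrete, below is a minimal Python sketch of how an auxiliary objective can render similarly-performing configurations Pareto-nondominated. The meta-objective form (f_target ± w·f_aux), the weight w, and all function names here are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal sketch of multi-objectivization, assuming meta-objectives of the
# form (f_target + w*f_aux, f_target - w*f_aux); this form and the names
# below are illustrative assumptions, not the paper's exact model.

def meta_objectives(f_target, f_aux, w=1.0):
    """Map one measured configuration into two meta-objectives (both minimized).

    The target objective (e.g., latency) appears in both meta-objectives, so
    the search still converges on it; the auxiliary objective (e.g., a
    normalized throughput) only spreads otherwise-tied configurations apart.
    """
    return (f_target + w * f_aux, f_target - w * f_aux)

def dominates(a, b):
    """Standard Pareto dominance for minimization: a dominates b."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Two configurations with identical latency but different (normalized)
# throughput: a single-objective search sees them as tied and may discard
# one, risking a local optimum.
cfg_a = meta_objectives(f_target=10.0, f_aux=0.2)
cfg_b = meta_objectives(f_target=10.0, f_aux=0.8)

# Under the bi-objective view neither dominates the other, so both survive
# and keep pulling the search through different regions of the landscape.
assert not dominates(cfg_a, cfg_b)
assert not dominates(cfg_b, cfg_a)

# A configuration that is strictly better on the target still dominates.
cfg_c = meta_objectives(f_target=8.0, f_aux=0.2)
assert dominates(cfg_c, cfg_a)
```

Because the target objective enters both meta-objectives with the same sign, reducing it improves both; the auxiliary term only decides how incomparable equally-good configurations are, which is what keeps the search from collapsing onto a single local optimum.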
Related papers
- Unlearning as multi-task optimization: A normalized gradient difference approach with an adaptive learning rate [105.86576388991713]
We introduce a normalized gradient difference (NGDiff) algorithm that enables better control over the trade-off between the objectives.
We provide a theoretical analysis and empirically demonstrate the superior performance of NGDiff among state-of-the-art unlearning methods on the TOFU and MUSE datasets.
arXiv Detail & Related papers (2024-10-29T14:41:44Z)
- Optima: Optimizing Effectiveness and Efficiency for LLM-Based Multi-Agent System [75.25394449773052]
Large Language Model (LLM) based multi-agent systems (MAS) show remarkable potential in collaborative problem-solving.
Yet they still face critical challenges: low communication efficiency, poor scalability, and a lack of effective parameter-updating optimization methods.
We present Optima, a novel framework that addresses these issues by significantly enhancing both communication efficiency and task effectiveness.
arXiv Detail & Related papers (2024-10-10T17:00:06Z)
- Iterative or Innovative? A Problem-Oriented Perspective for Code Optimization [81.88668100203913]
Large language models (LLMs) have demonstrated strong capabilities in solving a wide range of programming tasks.
In this paper, we explore code optimization with a focus on performance enhancement, specifically aiming to optimize code for minimal execution time.
arXiv Detail & Related papers (2024-06-17T16:10:10Z)
- Adapting Multi-objectivized Software Configuration Tuning [6.42475226408675]
We propose a weight adaptation method, dubbed AdMMO, for tuning software configurations toward better performance.
Our key idea is to adaptively adjust the weight at the right time during tuning, such that a good proportion of the nondominated configurations can be maintained.
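As a purely hypothetical illustration of that idea (the target proportion and the multiplicative update below are assumptions for illustration, not AdMMO's actual scheme), weight adaptation could be keyed to the proportion of nondominated configurations in the current population:

```python
# Hypothetical sketch of adaptive weight adjustment in the spirit of AdMMO;
# the target proportion and the update rule are assumptions made for
# illustration only.

def nondominated_ratio(population, dominates):
    """Fraction of configurations not dominated by any other in `population`."""
    count = 0
    for i, cfg in enumerate(population):
        if not any(dominates(other, cfg)
                   for j, other in enumerate(population) if i != j):
            count += 1
    return count / len(population)

def adapt_weight(w, ratio, target=0.5, step=0.1):
    """Nudge the weight so the nondominated proportion stays near `target`."""
    if ratio < target:
        return w * (1 + step)  # too many comparable configs: strengthen the auxiliary term
    if ratio > target:
        return w * (1 - step)  # too many incomparable configs: weaken it
    return w
```

Here `dominates` is any Pareto-dominance predicate over the two meta-objectives, such as the one sketched earlier.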
arXiv Detail & Related papers (2024-04-06T22:08:09Z)
- Controllable Prompt Tuning For Balancing Group Distributional Robustness [53.336515056479705]
We introduce an optimization scheme that achieves good performance across all groups without severely sacrificing performance on any of them.
We propose Controllable Prompt Tuning (CPT), which couples our approach with prompt-tuning techniques.
On spurious correlation benchmarks, our procedures achieve state-of-the-art results across both transformer and non-transformer architectures, as well as unimodal and multimodal data.
arXiv Detail & Related papers (2024-03-05T06:23:55Z)
- Judging Adam: Studying the Performance of Optimization Methods on ML4SE Tasks [2.8961929092154697]
We test the performance of various optimizers on deep learning models for source code.
We find that the choice of optimizer can have a significant impact on the model quality.
We suggest that the ML4SE community should consider using RAdam instead of Adam as the default optimizer for code-related deep learning tasks.
arXiv Detail & Related papers (2023-03-06T22:49:20Z)
- VeLO: Training Versatile Learned Optimizers by Scaling Up [67.90237498659397]
We leverage the same scaling approach behind the success of deep learning to learn versatile optimizers.
We train an optimizer for deep learning which is itself a small neural network that ingests gradients and outputs parameter updates.
We open source our learned optimizer, the meta-training code, the associated train and test data, and an extensive benchmark suite with baselines at velo-code.io.
arXiv Detail & Related papers (2022-11-17T18:39:07Z)
- MMO: Meta Multi-Objectivization for Software Configuration Tuning [5.716481441755875]
We propose a meta multi-objectivization (MMO) model that considers an auxiliary performance objective.
We show how to effectively use the MMO model without worrying about its weight.
arXiv Detail & Related papers (2021-12-14T11:21:24Z)
- Bayesian Optimization for Selecting Efficient Machine Learning Models [53.202224677485525]
We present a unified Bayesian Optimization framework for jointly optimizing models for both prediction effectiveness and training efficiency.
Experiments on model selection for recommendation tasks indicate that models selected this way significantly improve training efficiency.
arXiv Detail & Related papers (2020-08-02T02:56:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.