Using Affine Combinations of BBOB Problems for Performance Assessment
- URL: http://arxiv.org/abs/2303.04573v1
- Date: Wed, 8 Mar 2023 13:37:55 GMT
- Title: Using Affine Combinations of BBOB Problems for Performance Assessment
- Authors: Diederick Vermetten, Furong Ye, Carola Doerr
- Abstract summary: We show how affine function combinations can be used to analyze the behavior of optimization algorithms.
In particular, we highlight that by varying the weighting between the combined problems, we can gain insights into the effects of added global structure.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Benchmarking plays a major role in the development and analysis of
optimization algorithms. As such, the way in which the used benchmark problems
are defined significantly affects the insights that can be gained from any
given benchmark study. One way to easily extend the range of available
benchmark functions is through affine combinations between pairs of functions.
From the perspective of landscape analysis, these function combinations
smoothly transition between the two base functions.
In this work, we show how these affine function combinations can be used to
analyze the behavior of optimization algorithms. In particular, we highlight
that by varying the weighting between the combined problems, we can gain
insights into the effects of added global structure on the performance of
optimization algorithms. By analyzing performance trajectories on more function
combinations, we also show that aspects such as the scaling of objective
functions and placement of the optimum can greatly impact how these results are
interpreted.
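The affine-combination construction described in the abstract can be sketched as follows. This is a simplified illustration using two standard BBOB-style functions and a plain convex combination with weight alpha; the paper's actual construction additionally handles log-scaling of objective values and placement of the optimum, which this sketch omits. The function names and signatures here are illustrative, not taken from the paper's code.

```python
import numpy as np

def sphere(x):
    # BBOB f1-style sphere function: unimodal, no global structure
    return float(np.sum(x ** 2))

def rastrigin(x):
    # BBOB f3-style Rastrigin function: highly multimodal global structure
    return float(10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def affine_combination(f1, f2, alpha):
    """Return the weighted combination (1 - alpha) * f1 + alpha * f2.

    alpha = 0 recovers f1, alpha = 1 recovers f2; intermediate values
    transition smoothly between the two landscapes, which is what allows
    studying the effect of gradually added global structure.
    """
    def combined(x):
        return (1.0 - alpha) * f1(x) + alpha * f2(x)
    return combined

# Sweep the weighting to move from a structureless to a structured landscape
x = np.array([0.5, -0.3])
for alpha in (0.0, 0.5, 1.0):
    f = affine_combination(sphere, rastrigin, alpha)
    print(alpha, f(x))
```

Evaluating an optimizer on such a sweep of alpha values yields the "performance trajectories" the abstract refers to.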
Related papers
- FunBO: Discovering Acquisition Functions for Bayesian Optimization with FunSearch [21.41322548859776]
We show how FunBO can be used to learn new acquisition functions written in computer code.
We show how FunBO identifies AFs that generalize well in and out of the training distribution of functions.
arXiv Detail & Related papers (2024-06-07T10:49:59Z)
- Functional Bilevel Optimization for Machine Learning [36.081761044296904]
We introduce a new functional point of view on bilevel optimization problems for machine learning, where the inner objective is minimized over a function space.
We propose scalable and efficient algorithms for the functional bilevel optimization problem.
arXiv Detail & Related papers (2024-03-29T15:22:03Z)
- Explainable Benchmarking for Iterative Optimization Heuristics [0.8192907805418583]
We introduce the IOH-Xplainer software framework, for analyzing and understanding the performance of various optimization algorithms.
We examine the impact of different algorithmic components and configurations, offering insights into their performance across diverse scenarios.
arXiv Detail & Related papers (2024-01-31T14:02:26Z)
- Performance Embeddings: A Similarity-based Approach to Automatic Performance Optimization [71.69092462147292]
Performance embeddings enable knowledge transfer of performance tuning between applications.
We demonstrate this transfer tuning approach on case studies in deep neural networks, dense and sparse linear algebra compositions, and numerical weather prediction stencils.
arXiv Detail & Related papers (2023-03-14T15:51:35Z)
- Offline Reinforcement Learning with Differentiable Function Approximation is Provably Efficient [65.08966446962845]
Offline reinforcement learning, which aims at optimizing decision-making strategies with historical data, has been extensively applied in real-life applications.
We take a step toward this by considering offline reinforcement learning with differentiable function class approximation (DFA).
Most importantly, we show offline differentiable function approximation is provably efficient by analyzing the pessimistic fitted Q-learning algorithm.
arXiv Detail & Related papers (2022-10-03T07:59:42Z)
- Runtime Analysis of the (1+1) EA on Weighted Sums of Transformed Linear Functions [13.264683014487376]
We study the class of objective functions that are weighted sums of two transformed linear functions.
Our results show that the (1+1) EA, with a mutation rate depending on the number of overlapping bits of the functions, obtains an optimal solution for these functions in expected time O(n log n).
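For readers unfamiliar with the algorithm analyzed in this entry, a minimal (1+1) EA on bit strings looks like the sketch below. This uses the standard mutation rate 1/n for illustration; the paper's analysis concerns a rate depending on the overlap of the two combined functions, which is not reproduced here.

```python
import random

def one_plus_one_ea(fitness, n, mutation_rate=None, max_evals=100_000, seed=0):
    """Minimal (1+1) EA on bit strings: flip each bit independently with
    probability p, and accept the offspring if it is at least as good."""
    rng = random.Random(seed)
    p = mutation_rate if mutation_rate is not None else 1.0 / n
    parent = [rng.randint(0, 1) for _ in range(n)]
    best = fitness(parent)
    for _ in range(max_evals):
        child = [1 - b if rng.random() < p else b for b in parent]
        f = fitness(child)
        if f >= best:
            parent, best = child, f
        if best == n:  # stopping criterion assumes the optimum value is n
            break
    return parent, best

# Example: OneMax (count of one-bits), the classic linear test function
bits, value = one_plus_one_ea(sum, n=20)
```

On OneMax the expected runtime of this scheme is the well-known Theta(n log n), which is the same order the entry above reports for the weighted sums of transformed linear functions.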
arXiv Detail & Related papers (2022-08-11T07:05:15Z)
- On the development of a Bayesian optimisation framework for complex unknown systems [11.066706766632578]
This paper studies and compares common Bayesian optimisation algorithms empirically on a range of synthetic test functions.
It investigates the choice of acquisition function and number of training samples, exact calculation of acquisition functions and Monte Carlo based approaches.
arXiv Detail & Related papers (2022-07-19T09:50:34Z)
- Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
arXiv Detail & Related papers (2022-07-02T16:59:37Z)
- Amortized Implicit Differentiation for Stochastic Bilevel Optimization [53.12363770169761]
We study a class of algorithms for solving bilevel optimization problems in both deterministic and stochastic settings.
We exploit a warm-start strategy to amortize the estimation of the exact gradient.
By using this framework, our analysis shows these algorithms to match the computational complexity of methods that have access to an unbiased estimate of the gradient.
arXiv Detail & Related papers (2021-11-29T15:10:09Z)
- Are we Forgetting about Compositional Optimisers in Bayesian Optimisation? [66.39551991177542]
This paper presents a methodology for global optimisation with Bayesian optimisation. Within this, a crucial performance-determining subroutine is maximising the acquisition function. We highlight the empirical advantages of the compositional approach to acquisition function maximisation across 3958 individual experiments.
arXiv Detail & Related papers (2020-12-15T12:18:38Z)
- Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is a tool for many machine learning problems.
We propose a novel sample-efficient gradient estimator named stoc-BiO.
arXiv Detail & Related papers (2020-10-15T18:09:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.