Bayesian Optimisation for Mixed-Variable Inputs using Value Proposals
- URL: http://arxiv.org/abs/2202.04832v1
- Date: Thu, 10 Feb 2022 04:42:48 GMT
- Title: Bayesian Optimisation for Mixed-Variable Inputs using Value Proposals
- Authors: Yan Zuo, Amir Dezfouli, Iadine Chades, David Alexander, Benjamin Ward Muir
- Abstract summary: Many real-world optimisation problems are defined over both categorical and continuous variables.
We adopt a holistic view and aim to consolidate optimisation of the categorical and continuous sub-spaces.
We show that this unified approach significantly outperforms existing mixed-variable optimisation approaches.
- Score: 10.40799693791025
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many real-world optimisation problems are defined over both categorical and
continuous variables, yet efficient optimisation methods such as Bayesian
Optimisation (BO) are not designed to handle such mixed-variable search spaces.
Recent approaches to this problem cast the selection of the categorical
variables as a bandit problem, operating independently alongside a BO component
which optimises the continuous variables. In this paper, we adopt a holistic
view and aim to consolidate optimisation of the categorical and continuous
sub-spaces under a single acquisition metric. We derive candidates from the
Expected Improvement criterion, which we call value proposals, and use these
proposals to make selections on both the categorical and continuous components
of the input. We show that this unified approach significantly outperforms
existing mixed-variable optimisation approaches across several mixed-variable
black-box optimisation tasks.
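The unified selection described in the abstract can be illustrated with a short sketch: one GP surrogate over a one-hot categorical encoding plus the continuous dimensions, with Expected Improvement scored jointly so that the categorical and continuous parts of the next query are chosen under a single acquisition metric. This is a minimal sketch only, not the authors' value-proposal algorithm; the helper names (expected_improvement, propose_next) and the random-candidate search over the continuous sub-space are assumptions.

```python
# Minimal sketch of a unified EI acquisition over a mixed search space.
# NOT the paper's implementation; names and the candidate search are illustrative.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def expected_improvement(mu, sigma, best_y):
    """Standard EI for minimisation, guarded against zero predictive variance."""
    sigma = np.maximum(sigma, 1e-9)
    z = (best_y - mu) / sigma
    return (best_y - mu) * norm.cdf(z) + sigma * norm.pdf(z)


def propose_next(X, y, n_categories, n_cont_dims, n_candidates=256, seed=None):
    """Pick a (category, continuous point) pair that maximises one shared EI score.

    X encodes each evaluated input as [one-hot category | continuous dims],
    y holds the observed objective values (lower is better).
    """
    rng = np.random.default_rng(seed)
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    best_y = y.min()

    best = (None, None, -np.inf)
    for c in range(n_categories):
        # Random continuous candidates paired with this categorical choice.
        cont = rng.uniform(0.0, 1.0, size=(n_candidates, n_cont_dims))
        onehot = np.zeros((n_candidates, n_categories))
        onehot[:, c] = 1.0
        cand = np.hstack([onehot, cont])
        mu, sigma = gp.predict(cand, return_std=True)
        ei = expected_improvement(mu, sigma, best_y)
        i = int(np.argmax(ei))
        if ei[i] > best[2]:
            best = (c, cont[i], float(ei[i]))
    return best  # (category index, continuous point, EI value)
```

Because both sub-spaces are ranked by the same EI value, no separate bandit heuristic is needed to pick the categorical component, which is the contrast the abstract draws with the decoupled approaches.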
Related papers
- An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to model potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z) - CatCMA : Stochastic Optimization for Mixed-Category Problems [9.13074910982872]
Black-box optimization problems often require simultaneously optimizing different types of variables, such as continuous, integer, and categorical variables.
Several Bayesian optimization methods can deal with mixed-category black-box optimization (MC-BBO), but they suffer from a lack of scalability to high-dimensional problems and internal computational cost.
This paper proposes CatCMA, a new optimization method for MC-BBO problems.
arXiv Detail & Related papers (2024-05-16T10:11:18Z) - Analyzing and Enhancing the Backward-Pass Convergence of Unrolled
Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
arXiv Detail & Related papers (2023-12-28T23:15:18Z) - SOBER: Highly Parallel Bayesian Optimization and Bayesian Quadrature
over Discrete and Mixed Spaces [6.573393706476156]
We present a novel method for diversified global optimisation and quadrature with arbitrary kernels over discrete and mixed spaces.
Batch quadrature can efficiently solve both tasks by balancing the merits of exploitative Bayesian optimisation and explorative Bayesian quadrature.
We show that SOBER outperforms competitive baselines on efficient batch and scalable real-world tasks.
arXiv Detail & Related papers (2023-01-27T16:36:33Z) - Tree ensemble kernels for Bayesian optimization with known constraints
over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
arXiv Detail & Related papers (2022-07-02T16:59:37Z) - Optimizer Amalgamation [124.33523126363728]
We are motivated to study a new problem named Optimizer Amalgamation: how can we best combine a pool of "teacher" optimizers into a single "student" optimizer that can have stronger problem-specific performance?
First, we define three differentiable mechanisms to amalgamate a pool of analytical optimizers by gradient descent.
In order to reduce variance of the process, we also explore methods to stabilize the process by perturbing the target.
arXiv Detail & Related papers (2022-03-12T16:07:57Z) - Think Global and Act Local: Bayesian Optimisation over High-Dimensional
Categorical and Mixed Search Spaces [26.08218231365666]
High-dimensional black-box optimisation remains an important yet notoriously challenging problem.
We propose a novel solution -- we combine local optimisation with a tailored kernel design, effectively handling high-dimensional categorical and mixed search spaces.
arXiv Detail & Related papers (2021-02-14T16:18:36Z) - Adaptive Local Bayesian Optimization Over Multiple Discrete Variables [9.860437640748113]
This paper describes the approach of team KAIST OSI in a step-wise manner, which outperforms the baseline algorithms by up to +20.39%.
In a similar vein, we combine Bayesian optimization with a multi-armed bandit (MAB) approach to select values while taking the variable types into account (a minimal sketch of this bandit-plus-BO pattern appears after the related papers list).
Empirical evaluations demonstrate that our method outperforms the existing methods across different tasks.
arXiv Detail & Related papers (2020-12-07T07:51:23Z) - Robust, Accurate Stochastic Optimization for Variational Inference [68.83746081733464]
We show that common optimization methods lead to poor variational approximations if the problem is moderately large.
Motivated by these findings, we develop a more robust and accurate optimization framework by viewing the underlying algorithm as producing a Markov chain.
arXiv Detail & Related papers (2020-09-01T19:12:11Z) - Stochastic Optimization Forests [60.523606291705214]
We show how to train forest decision policies by growing trees that choose splits to directly optimize the downstream decision quality, rather than splitting to improve prediction accuracy as in the standard random forest algorithm.
We show that our approximate splitting criteria can reduce running time hundredfold, while achieving performance close to forest algorithms that exactly re-optimize for every candidate split.
arXiv Detail & Related papers (2020-08-17T16:56:06Z) - Efficient Nonmyopic Bayesian Optimization via One-Shot Multi-Step Trees [28.46586066038317]
We provide the first efficient implementation of general multi-step lookahead Bayesian optimization.
Instead of solving these problems in a nested way, we equivalently optimize all decision variables in the full tree jointly.
We demonstrate that multistep expected improvement is tractable and exhibits performance superior to existing methods on a wide range of benchmarks.
arXiv Detail & Related papers (2020-06-29T02:17:18Z)
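The decoupled baseline contrasted in the main abstract, and combined with BO in the Adaptive Local Bayesian Optimization entry above, treats each categorical variable as a bandit whose arms are its possible values, while a separate BO component handles the continuous part. The sketch below uses UCB1 as one common bandit choice; it is illustrative only and not taken from either paper, and the class name CategoricalUCB is hypothetical.

```python
# Hedged sketch of the decoupled pattern: a UCB1 bandit picks the categorical
# value, and a separate GP/EI step (as in the earlier sketch) picks the
# continuous part. Illustrative only; not any specific paper's code.
import numpy as np


class CategoricalUCB:
    """UCB1 bandit over the possible values of one categorical variable."""

    def __init__(self, n_values):
        self.counts = np.zeros(n_values)
        self.means = np.zeros(n_values)

    def select(self):
        untried = np.flatnonzero(self.counts == 0)
        if untried.size:                      # try every value once first
            return int(untried[0])
        total = self.counts.sum()
        ucb = self.means + np.sqrt(2.0 * np.log(total) / self.counts)
        return int(np.argmax(ucb))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.means[arm] += (reward - self.means[arm]) / self.counts[arm]
```

One call to select() fixes the categorical value for the next evaluation; the continuous part is then chosen by a separate BO step, and the observed objective (negated if minimising) is fed back through update(). Because the bandit never sees the BO acquisition values, the two sub-spaces are optimised under different criteria, which is exactly what the value-proposal approach above avoids.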