Hybrid Parameter Search and Dynamic Model Selection for Mixed-Variable
Bayesian Optimization
- URL: http://arxiv.org/abs/2206.01409v4
- Date: Fri, 19 Jan 2024 00:23:28 GMT
- Title: Hybrid Parameter Search and Dynamic Model Selection for Mixed-Variable
Bayesian Optimization
- Authors: Hengrui Luo, Younghyun Cho, James W. Demmel, Xiaoye S. Li, Yang Liu
- Abstract summary: We propose a new type of hybrid model for Bayesian optimization (BO) adept at managing mixed variables.
Our proposed new hybrid models (named hybridM) merge the Monte Carlo Tree Search (MCTS) structure for categorical variables with Gaussian Processes (GP) for continuous ones.
Our innovations, including dynamic online kernel selection in the surrogate modeling phase, position our hybrid models as an advancement in mixed-variable surrogate models.
- Score: 6.204805504959941
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a new type of hybrid model for Bayesian optimization (BO)
adept at managing mixed variables, encompassing both quantitative (continuous
and integer) and qualitative (categorical) types. Our proposed new hybrid
models (named hybridM) merge the Monte Carlo Tree Search (MCTS) structure for
categorical variables with Gaussian Processes (GP) for continuous ones. hybridM
leverages upper confidence bound tree search (UCTS) as its MCTS strategy,
showcasing the integration of the tree architecture into Bayesian optimization. Our
innovations, including dynamic online kernel selection in the surrogate
modeling phase and a unique UCTS search strategy, position our hybrid models as
an advancement in mixed-variable surrogate models. Numerical experiments
underscore the superiority of hybrid models, highlighting their potential in
Bayesian optimization.
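The hybrid scheme the abstract describes (tree search over categorical choices, a continuous surrogate underneath) can be sketched as follows. This is a minimal illustration, not the authors' implementation: a UCB1-style score stands in for the full UCTS machinery, random search stands in for the GP-driven continuous step, and the objective and category names are invented for the example.

```python
import math
import random

# Toy mixed-variable objective (hypothetical): the categorical choice shifts
# the achievable optimum; the continuous variable x is best at 0.5.
BASE = {"relu": 0.3, "tanh": 0.9, "gelu": 0.6}

def objective(cat, x):
    return BASE[cat] - (x - 0.5) ** 2

def ucb_score(n, mean, total, c=1.0):
    # Upper-confidence-bound score, as used by UCTS-style tree policies.
    if n == 0:
        return float("inf")  # always expand an unvisited branch first
    return mean + c * math.sqrt(math.log(total) / n)

def hybrid_bo(n_iters=300, seed=0):
    rng = random.Random(seed)
    stats = {cat: [0, 0.0] for cat in BASE}  # cat -> [visits, running mean]
    best = (-math.inf, None, None)
    for _ in range(n_iters):
        total = max(1, sum(n for n, _ in stats.values()))
        # Tree step: pick the categorical branch with the highest UCB score.
        cat = max(stats, key=lambda k: ucb_score(*stats[k], total))
        # Continuous step: random search stands in for the GP surrogate here.
        x = rng.random()
        y = objective(cat, x)
        n, mean = stats[cat]
        stats[cat] = [n + 1, mean + (y - mean) / (n + 1)]  # incremental mean
        if y > best[0]:
            best = (y, cat, x)
    return best

best_y, best_cat, best_x = hybrid_bo()
```

Because every unvisited branch scores infinity, each category is tried at least once before the UCB trade-off concentrates evaluations on the most promising branch, which is the core of the UCTS idea.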
Related papers
- Automatically Learning Hybrid Digital Twins of Dynamical Systems [56.69628749813084]
Digital Twins (DTs) simulate the states and temporal dynamics of real-world systems.
DTs often struggle to generalize to unseen conditions in data-scarce settings.
In this paper, we propose an evolutionary algorithm (HDTwinGen) that autonomously proposes, evaluates, and optimizes HDTwins.
arXiv Detail & Related papers (2024-10-31T07:28:22Z)
- Automated Model Selection for Generalized Linear Models [0.0]
We show how mixed-integer conic optimization can be used to combine feature subset selection with holistic generalized linear models.
We propose a novel pairwise correlation constraint that combines the sign coherence constraint with ideas from classical statistical models.
arXiv Detail & Related papers (2024-04-25T12:16:58Z)
- Hybrid State Space-based Learning for Sequential Data Prediction with Joint Optimization [0.0]
We introduce a hybrid model that mitigates, via a joint mechanism, the domain-specific feature engineering issues of conventional nonlinear prediction models.
We achieve this by introducing novel state space representations for the base models, which are then combined to provide a full state space representation of the hybrid or ensemble model.
Due to this novel combination and joint optimization, we demonstrate significant improvements on widely publicized real-life competition datasets.
arXiv Detail & Related papers (2023-09-19T12:00:28Z)
- AI-Empowered Hybrid MIMO Beamforming [85.48860461696417]
Hybrid multiple-input multiple-output (MIMO) systems implement part of their beamforming in analog and part in digital.
Recent years have witnessed a growing interest in using data-aided artificial intelligence (AI) tools for hybrid beamforming design.
This article reviews candidate strategies to leverage data to improve real-time hybrid beamforming design.
arXiv Detail & Related papers (2023-03-03T06:04:20Z)
- Applying Autonomous Hybrid Agent-based Computing to Difficult Optimization Problems [56.821213236215634]
This paper focuses on a proposed hybrid version of the Evolutionary Multi-Agent System (EMAS).
It covers the selection and introduction of a number of hybrid operators and the definition of rules for starting the hybrid steps of the main algorithm.
Those hybrid steps leverage existing, well-known metaheuristics proven to be efficient, and integrate their results into the main algorithm.
arXiv Detail & Related papers (2022-10-24T13:28:35Z)
- A Pareto-optimal compositional energy-based model for sampling and optimization of protein sequences [55.25331349436895]
Deep generative models have emerged as a popular machine learning-based approach for inverse problems in the life sciences.
These problems often require sampling new designs that satisfy multiple properties of interest in addition to learning the data distribution.
arXiv Detail & Related papers (2022-10-19T19:04:45Z)
- Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
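Challenge (i) above, quantifying a tree ensemble's uncertainty, is commonly handled by treating the disagreement among individual trees' predictions as an uncertainty estimate that feeds a UCB-style acquisition. The sketch below uses that generic device (not necessarily this paper's exact construction); the stub "trees" are hand-made piecewise-constant functions, not a fitted ensemble.

```python
import statistics

# Hand-made piecewise-constant "trees" standing in for a fitted ensemble.
trees = [
    lambda x: 1.0 if x < 0.5 else 2.0,
    lambda x: 1.2 if x < 0.4 else 2.1,
    lambda x: 0.9 if x < 0.6 else 1.8,
]

def ensemble_ucb(x, kappa=1.0):
    """UCB-style acquisition: ensemble mean plus the spread across trees,
    used as a cheap stand-in for model uncertainty."""
    preds = [t(x) for t in trees]
    mu = statistics.mean(preds)
    sigma = statistics.stdev(preds)  # disagreement between trees
    return mu + kappa * sigma
```

Near the trees' split points the predictions disagree, so sigma and hence the acquisition value rise, giving the exploration signal challenge (i) asks for. Note that the resulting acquisition surface is itself piecewise constant, which is exactly challenge (ii).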
arXiv Detail & Related papers (2022-07-02T16:59:37Z)
- A hybrid ensemble method with negative correlation learning for regression [2.8484009470171943]
This study automatically selects and weights sub-models from a heterogeneous model pool.
It solves an optimization problem using an interior-point filter line-search algorithm.
The value of this study lies in its ease of use and effectiveness, allowing the hybrid ensemble to embrace diversity and accuracy.
arXiv Detail & Related papers (2021-04-06T06:45:14Z)
- Identification of Probability weighted ARX models with arbitrary domains [75.91002178647165]
PieceWise Affine models guarantee universal approximation, local linearity, and equivalence to other classes of hybrid systems.
In this work, we focus on the identification of PieceWise Auto Regressive with eXogenous input models with arbitrary regions (NPWARX).
The architecture is conceived following the Mixture of Experts concept developed within the machine learning field.
arXiv Detail & Related papers (2020-09-29T12:50:33Z)
- Model Agnostic Combination for Ensemble Learning [0.0]
We present a novel ensembling technique coined MAC that is designed to find the optimal function for combining models.
Being agnostic to the number of sub-models enables the addition and replacement of sub-models in the combination even after deployment.
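One way to obtain the count-agnostic behavior described above is to have the combiner operate on order- and count-invariant statistics of the sub-model predictions rather than on a fixed-size vector. The weights below are illustrative constants, not the learned combination function from the paper.

```python
import statistics

def combine(preds, weights=(0.7, 0.2, 0.1)):
    # Count-agnostic combiner: works on summary statistics of however many
    # sub-model predictions arrive, so sub-models can be added or replaced
    # after deployment without changing the combiner's interface.
    w_mean, w_median, w_max = weights
    return (w_mean * statistics.mean(preds)
            + w_median * statistics.median(preds)
            + w_max * max(preds))
```

For example, `combine([1.0, 2.0, 3.0])` and `combine([1.0, 2.0, 3.0, 4.0])` both evaluate without any change to the combiner, which is what allows sub-models to be swapped in and out post-deployment.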
arXiv Detail & Related papers (2020-06-16T09:44:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.