Dynamic Multi-objective Ensemble of Acquisition Functions in Batch
Bayesian Optimization
- URL: http://arxiv.org/abs/2206.11118v1
- Date: Wed, 22 Jun 2022 14:09:18 GMT
- Title: Dynamic Multi-objective Ensemble of Acquisition Functions in Batch
Bayesian Optimization
- Authors: Jixiang Chen and Fu Luo and Zhenkun Wang
- Abstract summary: The acquisition function plays a crucial role in the optimization process.
Three acquisition functions are dynamically selected from a set based on their current and historical performance.
Optimizing this MOP with an evolutionary multi-objective algorithm yields a set of non-dominated solutions.
- Score: 1.1602089225841632
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Bayesian optimization (BO) is a typical approach to solving expensive
optimization problems. In each iteration of BO, a Gaussian process (GP) model is
trained on the previously evaluated solutions; the next candidate solutions
for expensive evaluation are then recommended by maximizing a cheap-to-evaluate
acquisition function on the trained surrogate model. The acquisition function
plays a crucial role in the optimization process. However, each acquisition
function has its own strengths and weaknesses, and no single acquisition
function can consistently outperform the others on all kinds of problems. To
better leverage the advantages of different acquisition functions, we propose a
new method for batch BO. In each iteration, three acquisition functions are
dynamically selected from a set based on their current and historical
performance to form a multi-objective optimization problem (MOP). Optimizing
this MOP with an evolutionary multi-objective algorithm yields a set of
non-dominated solutions. To select batch candidate solutions,
we rank these non-dominated solutions into several layers according to their
relative performance on the three acquisition functions. The empirical results
show that the proposed method is competitive with the state-of-the-art methods
on different problems.
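The following is a minimal, self-contained sketch of one iteration of the approach described above, under several simplifying assumptions: the dynamically selected acquisition set is fixed to EI, PI and a UCB-style bound, the evolutionary multi-objective algorithm is replaced by random candidate sampling followed by non-dominated sorting, and the layered batch-selection rule is reduced to taking the earliest Pareto layers. Function names such as `propose_batch` are illustrative and not from the paper.

```python
# Illustrative sketch only; the paper uses an evolutionary multi-objective algorithm
# and a performance-based dynamic selection of three acquisition functions, which are
# simplified here to a fixed EI/PI/UCB triple and random candidate sampling.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def acquisitions(gp, X, y_best, beta=2.0, xi=0.01):
    """Return EI, PI and a negative-LCB value per candidate (all maximized; minimization problem)."""
    mu, sigma = gp.predict(X, return_std=True)
    sigma = np.maximum(sigma, 1e-12)
    imp = y_best - mu - xi
    z = imp / sigma
    ei = imp * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
    pi = norm.cdf(z)                               # probability of improvement
    ucb = -(mu - beta * sigma)                     # negative lower confidence bound
    return np.column_stack([ei, pi, ucb])

def non_dominated_sort(F):
    """Rank rows of F (all objectives maximized) into Pareto layers; returns a layer index per row."""
    n = len(F)
    rank = np.full(n, -1)
    remaining = np.arange(n)
    layer = 0
    while remaining.size:
        dominated = np.zeros(remaining.size, dtype=bool)
        for i, a in enumerate(remaining):
            for b in remaining:
                if b != a and np.all(F[b] >= F[a]) and np.any(F[b] > F[a]):
                    dominated[i] = True
                    break
        rank[remaining[~dominated]] = layer
        remaining = remaining[dominated]
        layer += 1
    return rank

def propose_batch(X_obs, y_obs, bounds, batch_size=4, n_candidates=2000, rng=None):
    rng = np.random.default_rng(rng)
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X_obs, y_obs)
    # Random candidates stand in for the population evolved by the multi-objective EA.
    X_cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_candidates, bounds.shape[0]))
    F = acquisitions(gp, X_cand, y_best=y_obs.min())
    rank = non_dominated_sort(F)
    # Take candidates layer by layer, loosely mirroring the layered ranking of non-dominated solutions.
    order = np.argsort(rank, kind="stable")
    return X_cand[order[:batch_size]]
```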
Related papers
- Learning Multiple Initial Solutions to Optimization Problems [52.9380464408756]
Sequentially solving similar optimization problems under strict runtime constraints is essential for many applications.
We propose learning to predict multiple diverse initial solutions given parameters that define the problem instance.
We find significant and consistent improvement with our method across all evaluation settings and demonstrate that it efficiently scales with the number of initial solutions required.
arXiv Detail & Related papers (2024-11-04T15:17:19Z)
- Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving it.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
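As a toy illustration of the contrast drawn above, the sketch below compares a predict-then-optimize pipeline (regress the unknown costs, then take the induced decision) with a model that predicts the optimal decision directly from features; the synthetic data, the trivial argmin "solver", and all variable names are hypothetical stand-ins rather than the paper's setup.

```python
# Toy contrast between predict-then-optimize and learning solutions directly
# (illustrative only; a real setup would embed an actual solver, e.g. an LP,
# rather than the trivial argmin "optimization" used here).
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n, d, k = 500, 5, 3                          # samples, feature dim, number of decisions
X = rng.normal(size=(n, d))
W = rng.normal(size=(d, k))
C = X @ W + 0.1 * rng.normal(size=(n, k))    # unknown "costs" of the k decisions
y_opt = C.argmin(axis=1)                     # the optimal decision per instance

# (a) Predict-then-optimize: regress the costs, then optimize (here: take the argmin).
cost_model = LinearRegression().fit(X, C)
decisions_a = cost_model.predict(X).argmin(axis=1)

# (b) Joint/direct model: learn the mapping from features to the optimal decision itself.
solution_model = LogisticRegression(max_iter=1000).fit(X, y_opt)
decisions_b = solution_model.predict(X)

print("agreement with true optimum:",
      (decisions_a == y_opt).mean(), (decisions_b == y_opt).mean())
```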
arXiv Detail & Related papers (2024-09-07T19:52:14Z)
- Training Greedy Policy for Proposal Batch Selection in Expensive Multi-Objective Combinatorial Optimization [52.80408805368928]
We introduce a novel greedy-style subset selection algorithm for batch acquisition.
Our experiments on red fluorescent proteins show that the proposed method reaches the baseline performance with 1.69x fewer queries.
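A generic greedy batch-selection sketch in the spirit of the summary above (not the paper's specific policy): candidates are added one at a time by the largest marginal gain of a batch score that rewards high acquisition values and penalizes redundancy. `greedy_batch`, the RBF redundancy term and the stand-in scores are all illustrative assumptions.

```python
import numpy as np

def greedy_batch(acq, X_cand, batch_size, redundancy_weight=1.0, length_scale=1.0):
    """acq: per-candidate acquisition values; X_cand: candidate locations."""
    selected = []
    sq_dists = ((X_cand[:, None, :] - X_cand[None, :, :]) ** 2).sum(-1)
    sim = np.exp(-sq_dists / (2.0 * length_scale ** 2))   # RBF similarity between candidates
    for _ in range(batch_size):
        best_i, best_gain = None, -np.inf
        for i in range(len(X_cand)):
            if i in selected:
                continue
            # Marginal gain: own acquisition value minus similarity to already chosen points.
            penalty = max((sim[i, j] for j in selected), default=0.0)
            gain = acq[i] - redundancy_weight * penalty
            if gain > best_gain:
                best_i, best_gain = i, gain
        selected.append(best_i)
    return np.array(selected)

# Example usage with random stand-in acquisition values.
rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 2))
scores = rng.uniform(size=200)
print(greedy_batch(scores, X, batch_size=4))
```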
arXiv Detail & Related papers (2024-06-21T05:57:08Z)
- UCB-driven Utility Function Search for Multi-objective Reinforcement Learning [75.11267478778295]
In Multi-objective Reinforcement Learning (MORL), agents are tasked with optimising decision-making behaviours with respect to multiple objectives.
We focus on the case of linear utility functions parameterised by weight vectors w.
We introduce a method based on Upper Confidence Bound to efficiently search for the most promising weight vectors during different stages of the learning process.
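A minimal sketch of the general idea, assuming a UCB1-style bandit over a finite grid of weight vectors for a linear utility w·r; the paper's exact bonus term, weight-vector parameterisation and evaluation protocol may differ, and `ucb_weight_search` with its toy two-objective example is purely illustrative.

```python
import numpy as np

def ucb_weight_search(candidate_ws, evaluate, n_rounds, c=2.0):
    """candidate_ws: (k, m) weight vectors; evaluate(w) returns a noisy scalarized return."""
    k = len(candidate_ws)
    counts = np.zeros(k)
    means = np.zeros(k)
    for t in range(1, n_rounds + 1):
        if t <= k:                              # pull every arm once first
            i = t - 1
        else:                                   # then pick the arm with the largest UCB
            bonus = np.sqrt(c * np.log(t) / counts)
            i = int(np.argmax(means + bonus))
        reward = evaluate(candidate_ws[i])
        counts[i] += 1
        means[i] += (reward - means[i]) / counts[i]  # running mean update
    return candidate_ws[int(np.argmax(means))]

# Toy usage: two objectives, unknown best trade-off, noisy evaluations.
rng = np.random.default_rng(0)
ws = np.array([[w, 1 - w] for w in np.linspace(0, 1, 11)])
true_returns = np.array([1.0, 2.0])
best_w = ucb_weight_search(ws, lambda w: w @ true_returns + 0.1 * rng.normal(), 500)
print(best_w)
```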
arXiv Detail & Related papers (2024-05-01T09:34:42Z)
- Analyzing and Enhancing the Backward-Pass Convergence of Unrolled Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
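For context, the standard implicit-differentiation view that underlies this kind of equivalence can be written as follows (a sketch assuming a fixed-point solver x_{t+1} = g(x_t, theta); the paper's precise derivation and iterative method may differ):

```latex
% Implicit differentiation of a converged fixed point x^{*}(\theta) = g(x^{*}(\theta), \theta):
\[
  \Bigl(I - \tfrac{\partial g}{\partial x}\Bigr)\,\frac{\mathrm{d}x^{*}}{\mathrm{d}\theta}
  = \frac{\partial g}{\partial \theta},
  \qquad
  \frac{\mathrm{d}x^{*}}{\mathrm{d}\theta}
  = \Bigl(I - \tfrac{\partial g}{\partial x}\Bigr)^{-1}\frac{\partial g}{\partial \theta}
  \;\approx\; \sum_{j=0}^{T-1}\Bigl(\tfrac{\partial g}{\partial x}\Bigr)^{j}\frac{\partial g}{\partial \theta},
\]
% with all Jacobians evaluated at the fixed point.
```

Backpropagating through T unrolled iterations near the fixed point evaluates (approximately) such a truncated Neumann series, i.e. it solves the linear system on the left by a simple fixed-point iteration.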
arXiv Detail & Related papers (2023-12-28T23:15:18Z)
- Multi-Objective Bayesian Optimization with Active Preference Learning [18.066263838953223]
We propose a Bayesian optimization (BO) approach to identifying the most preferred solution in a multi-objective optimization (MOO) problem.
To minimize the interaction cost with the decision maker (DM), we also propose an active learning strategy for the preference estimation.
arXiv Detail & Related papers (2023-11-22T15:24:36Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
- A Bayesian Optimization Framework for Finding Local Optima in Expensive Multi-Modal Functions [18.570591025615453]
This paper develops a multimodal BO framework to find a set of local/global solutions for expensive-to-evaluate multimodal objective functions.
We analytically derive the joint distribution of the objective function and its first-order derivatives.
We introduce variants of the well-known BO acquisition functions to the multimodal setting and demonstrate the performance of the proposed framework.
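For reference, the standard Gaussian-process identities behind such a joint distribution are shown below, assuming a zero-mean GP prior f ~ GP(0, k) with a twice-differentiable kernel k; the paper's exact formulation may differ.

```latex
% For f ~ GP(0, k), function values and first-order derivatives are jointly Gaussian, with
\[
  \operatorname{cov}\bigl(f(x),\, f(x')\bigr) = k(x, x'), \qquad
  \operatorname{cov}\Bigl(f(x),\, \tfrac{\partial f(x')}{\partial x'_j}\Bigr)
    = \frac{\partial k(x, x')}{\partial x'_j},
\]
\[
  \operatorname{cov}\Bigl(\tfrac{\partial f(x)}{\partial x_i},\,
                          \tfrac{\partial f(x')}{\partial x'_j}\Bigr)
    = \frac{\partial^{2} k(x, x')}{\partial x_i\, \partial x'_j},
\]
% so the posterior over (f, \nabla f) at any point follows from the usual Gaussian
% conditioning formulas on the evaluated data.
```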
arXiv Detail & Related papers (2022-10-13T00:10:13Z)
- Joint Entropy Search for Multi-objective Bayesian Optimization [0.0]
We propose a novel information-theoretic acquisition function for BO called Joint Entropy Search.
We showcase the effectiveness of this new approach on a range of synthetic and real-world problems in terms of the hypervolume and its weighted variants.
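Since the hypervolume indicator is used here as the evaluation metric, the following is a small exact hypervolume computation for the two-objective minimization case; the function names and reference-point convention are illustrative assumptions, and the acquisition function itself is not reproduced.

```python
import numpy as np

def pareto_front_2d(points):
    """Keep the non-dominated points of a (n, 2) array under minimization."""
    pts = points[np.argsort(points[:, 0], kind="stable")]
    front, best_f2 = [], np.inf
    for p in pts:
        if p[1] < best_f2:
            front.append(p)
            best_f2 = p[1]
    return np.array(front)

def hypervolume_2d(points, ref):
    """Area dominated by the Pareto front and bounded by the reference point ref."""
    front = pareto_front_2d(np.asarray(points, dtype=float))
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in front:                 # front is sorted by f1 ascending, f2 descending
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

print(hypervolume_2d([[1.0, 3.0], [2.0, 1.0], [3.0, 2.5]], ref=[4.0, 4.0]))  # 7.0
```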
arXiv Detail & Related papers (2022-10-06T13:19:08Z)
- Enhanced Opposition Differential Evolution Algorithm for Multimodal Optimization [0.2538209532048866]
Most real-world problems are multimodal in nature, consisting of multiple optimal values.
Classical gradient-based methods fail for optimization problems in which the objective functions are either discontinuous or non-differentiable.
We propose the Enhanced Opposition Differential Evolution (EODE) algorithm to solve multimodal optimization problems (MMOPs).
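A generic sketch of the two ingredients named above, differential evolution plus opposition-based learning (applied here only at initialization); the specific EODE enhancements for multimodal problems are not reproduced, and all names and parameter values are illustrative.

```python
import numpy as np

def de_with_opposition(f, bounds, pop_size=40, n_gen=200, F=0.5, CR=0.9, rng=None):
    rng = np.random.default_rng(rng)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    # Opposition-based learning: also evaluate the "opposite" points lo + hi - x
    # and keep the fittest half of the combined set as the initial population.
    combined = np.vstack([pop, lo + hi - pop])
    fitness = np.array([f(x) for x in combined])
    keep = np.argsort(fitness)[:pop_size]
    pop, fitness = combined[keep].copy(), fitness[keep].copy()

    for _ in range(n_gen):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)      # DE/rand/1 mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True                # ensure at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fitness[i]:                      # greedy one-to-one selection
                pop[i], fitness[i] = trial, f_trial
    best = np.argmin(fitness)
    return pop[best], fitness[best]

# Toy usage on a multimodal test function (2-D Rastrigin, minimization).
def rastrigin(x):
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

best_x, best_f = de_with_opposition(rastrigin, np.array([[-5.12, 5.12]] * 2), rng=0)
print(best_x, best_f)
```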
arXiv Detail & Related papers (2022-08-23T16:18:27Z)
- Uncertainty-Aware Search Framework for Multi-Objective Bayesian Optimization [40.40632890861706]
We consider the problem of multi-objective (MO) blackbox optimization using expensive function evaluations.
We propose a novel uncertainty-aware search framework referred to as USeMO to efficiently select the sequence of inputs for evaluation.
arXiv Detail & Related papers (2022-04-12T16:50:48Z)