Co-Learning Bayesian Optimization
- URL: http://arxiv.org/abs/2501.13332v1
- Date: Thu, 23 Jan 2025 02:25:10 GMT
- Title: Co-Learning Bayesian Optimization
- Authors: Zhendong Guo, Yew-Soon Ong, Tiantian He, Haitao Liu
- Abstract summary: We propose a novel BO algorithm labeled as co-learning BO (CLBO), which exploits both model diversity and agreement on unlabeled information to improve the overall surrogate accuracy with limited samples.
Tests on five numerical toy problems and three engineering benchmarks demonstrate the effectiveness of the proposed CLBO.
- Score: 28.394424693363103
- Abstract: Bayesian optimization (BO) is well known to be sample-efficient for solving black-box problems. However, BO algorithms can sometimes get stuck in suboptimal solutions even with plenty of samples. Intrinsically, this suboptimality can be attributed to the poor surrogate accuracy of the trained Gaussian process (GP), particularly in the regions where the optimal solutions are located. Hence, we propose to build multiple GP models, instead of a single GP surrogate, that complement each other and thus resolve the suboptimality of BO. Nevertheless, according to the bias-variance tradeoff, the individual prediction errors can increase as the diversity of models increases, which may lead to even worse overall surrogate accuracy. On the other hand, based on the theory of Rademacher complexity, it has been proved that exploiting the agreement of models on unlabeled information can help reduce the complexity of the hypothesis space, and therefore achieve the required surrogate accuracy with fewer samples. The value of such model agreement has been extensively demonstrated in co-training style algorithms, which boost model accuracy with a small portion of labeled samples. Inspired by the above, we propose a novel BO algorithm, labeled co-learning BO (CLBO), which exploits both model diversity and agreement on unlabeled information to improve the overall surrogate accuracy with limited samples, and therefore achieve more efficient global optimization. Tests on five numerical toy problems and three engineering benchmarks demonstrate the effectiveness of the proposed CLBO.
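The following is a minimal, illustrative sketch of the co-learning idea described in the abstract, not the authors' implementation: several GPs with structurally different kernels serve as the diverse surrogates, their spread on an unlabeled candidate pool stands in for model (dis)agreement, and a committee lower confidence bound picks the next sample. The kernels, the acquisition rule, and all constants are assumptions made for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, RationalQuadratic

def objective(x):
    # toy 1-D black-box function to minimize
    return np.sin(3 * x) + 0.5 * x ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(5, 1))                  # small initial design
y = objective(X).ravel()

# Diversity: same data, structurally different kernels.
models = [GaussianProcessRegressor(kernel=k, alpha=1e-6, normalize_y=True)
          for k in (RBF(), Matern(nu=1.5), RationalQuadratic())]

candidates = np.linspace(-2, 2, 401).reshape(-1, 1)  # unlabeled pool
for _ in range(20):
    for m in models:
        m.fit(X, y)
    preds = np.stack([m.predict(candidates) for m in models])  # (3, 401)
    mean = preds.mean(axis=0)
    spread = preds.std(axis=0)       # committee disagreement on unlabeled points
    acq = mean - 2.0 * spread        # committee lower confidence bound (minimization)
    x_next = candidates[np.argmin(acq)].reshape(1, -1)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

best = np.argmin(y)
print("best observed:", X[best].ravel(), y[best])
```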
Related papers
- Faster WIND: Accelerating Iterative Best-of-$N$ Distillation for LLM Alignment [81.84950252537618]
This paper reveals a unified game-theoretic connection between iterative BOND and self-play alignment.
We establish a novel framework, WIN rate Dominance (WIND), with a series of efficient algorithms for regularized win rate dominance optimization.
arXiv Detail & Related papers (2024-10-28T04:47:39Z)
- A Performance Analysis of Basin Hopping Compared to Established Metaheuristics for Global Optimization [2.209921757303168]
We perform numerical experiments using the IOH profiler environment with the BBOB test function set and two difficult real-world problems.
Results show that Basin Hopping can be considered a good candidate for global numerical optimization problems along with the more established metaheuristics.
arXiv Detail & Related papers (2024-03-09T11:02:47Z)
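SciPy ships a reference implementation of Basin Hopping; a minimal usage sketch on the Rastrigin function (a standard multimodal test function, not necessarily one from the paper's BBOB experiments) might look like this:

```python
import numpy as np
from scipy.optimize import basinhopping

def rastrigin(x):
    # classic multimodal test function with many local minima
    x = np.asarray(x)
    return 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

x0 = np.array([3.0, -2.5])
result = basinhopping(
    rastrigin,
    x0,
    niter=200,                              # number of hopping iterations
    stepsize=0.8,                           # size of the random displacement
    minimizer_kwargs={"method": "L-BFGS-B"},
    seed=42,
)
print(result.x, result.fun)                 # global minimum is at the origin
```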
- Poisson Process for Bayesian Optimization [126.51200593377739]
We propose a ranking-based surrogate model based on the Poisson process and introduce an efficient BO framework, namely Poisson Process Bayesian Optimization (PoPBO).
Compared to the classic GP-BO method, PoPBO has lower costs and better robustness to noise, as verified by extensive experiments.
arXiv Detail & Related papers (2024-02-05T02:54:50Z)
- Best Arm Identification with Fixed Budget: A Large Deviation Perspective [54.305323903582845]
We present sred, a truly adaptive algorithm that can reject arms at any round based on the observed empirical gaps between the rewards of the various arms.
arXiv Detail & Related papers (2023-12-19T13:17:43Z)
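For context, a simplified fixed-budget elimination loop that rejects arms by empirical gap; this is an illustration of the setting only, not the paper's sred algorithm, and the phase schedule is an assumption:

```python
import numpy as np

def reject_by_empirical_gap(true_means, budget, seed=0):
    """Fixed-budget best-arm identification sketch: K-1 elimination phases;
    each phase pulls every surviving arm equally, then rejects the arm with
    the largest empirical gap to the current leader."""
    rng = np.random.default_rng(seed)
    K = len(true_means)
    active = list(range(K))
    counts = np.zeros(K)
    sums = np.zeros(K)
    per_phase = budget // (K - 1)                    # crude equal split (assumption)
    for _ in range(K - 1):
        pulls = max(1, per_phase // len(active))
        for arm in active:
            rewards = rng.normal(true_means[arm], 1.0, size=pulls)
            counts[arm] += pulls
            sums[arm] += rewards.sum()
        emp = {a: sums[a] / counts[a] for a in active}
        leader = max(emp, key=emp.get)
        gaps = {a: emp[leader] - emp[a] for a in active}
        active.remove(max(gaps, key=gaps.get))       # reject the largest-gap arm
    return active[0]                                 # survivor = guess for best arm

print(reject_by_empirical_gap([0.1, 0.3, 0.5, 0.7], budget=2000))
```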
- Parallel Bayesian Optimization Using Satisficing Thompson Sampling for Time-Sensitive Black-Box Optimization [0.0]
We propose satisficing Thompson sampling-based parallel BO approaches, including synchronous and asynchronous versions.
We shift the target from an optimal solution to a satisficing solution that is easier to learn.
The effectiveness of the proposed methods is demonstrated on a fast-charging design problem of Lithium-ion batteries.
arXiv Detail & Related papers (2023-10-19T07:03:51Z)
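A rough sketch of the satisficing Thompson sampling idea for a synchronous batch, assuming a scikit-learn GP surrogate; the threshold, batch construction, and fallback rule are illustrative assumptions, not the paper's method:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def satisficing_thompson_batch(gp, candidates, threshold, batch_size, seed=0):
    """Propose a synchronous batch via Thompson sampling with a satisficing
    twist: any posterior sample reaching the threshold counts as good enough,
    so a random such point is taken instead of the argmax."""
    rng = np.random.default_rng(seed)
    batch = []
    for _ in range(batch_size):
        draw = gp.sample_y(candidates, random_state=int(rng.integers(1 << 31))).ravel()
        ok = np.flatnonzero(draw >= threshold)       # "good enough" candidates
        idx = rng.choice(ok) if ok.size else np.argmax(draw)
        batch.append(candidates[idx])
    return np.array(batch)

# toy usage (maximization): fit a GP on a few points, then propose a batch of 3
X = np.array([[0.1], [0.4], [0.9]])
y = np.sin(5 * X).ravel()
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
cands = np.linspace(0, 1, 200).reshape(-1, 1)
print(satisficing_thompson_batch(gp, cands, threshold=0.8, batch_size=3))
```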
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
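The fixed-point computation at the heart of a Deep Equilibrium layer can be sketched generically as follows; the contraction used here is a stand-in Richardson iteration, not the paper's optimization-derived solver with a learned regularizer:

```python
import numpy as np

def deq_fixed_point(f, x, z0, tol=1e-6, max_iter=500):
    """Solve z* = f(z*, x) by plain fixed-point iteration, the core
    computation in a Deep Equilibrium layer."""
    z = z0
    for i in range(max_iter):
        z_next = f(z, x)
        if np.linalg.norm(z_next - z) < tol * (1.0 + np.linalg.norm(z)):
            return z_next, i + 1
        z = z_next
    return z, max_iter

# stand-in contraction: Richardson iteration for A z = x (spectral radius < 1)
A = np.array([[0.6, 0.2], [0.2, 0.6]])
x = np.array([1.0, -1.0])
step = lambda z, x: z - 0.5 * (A @ z - x)
z_star, iters = deq_fixed_point(step, x, z0=np.zeros(2))
print(z_star, iters)   # converges to A^{-1} x
```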
- No-Regret Constrained Bayesian Optimization of Noisy and Expensive Hybrid Models using Differentiable Quantile Function Approximations [0.0]
Constrained Upper Quantile Bound (CUQB) is a conceptually simple, deterministic approach that avoids constraint approximations.
We show that CUQB significantly outperforms traditional Bayesian optimization in both constrained and unconstrained cases.
arXiv Detail & Related papers (2023-05-05T19:57:36Z)
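A Monte Carlo sketch of the quantile-bound idea, with hypothetical toy objective and constraint; note the paper uses differentiable quantile function approximations rather than raw Monte Carlo quantiles:

```python
import numpy as np

def quantile_bound(sampler, x, alpha, n_samples=256, seed=None):
    """Monte Carlo estimate of the alpha-quantile of a stochastic model
    output at input x."""
    rng = np.random.default_rng(seed)
    samples = np.array([sampler(x, rng) for _ in range(n_samples)])
    return np.quantile(samples, alpha)

# toy noisy hybrid model (maximize) and a noisy constraint g(x) <= 0.8
noisy_model = lambda x, rng: -(x - 0.3) ** 2 + 0.1 * rng.normal()
noisy_constr = lambda x, rng: x + 0.05 * rng.normal()

cands = np.linspace(0, 1, 101)
uqb_f = np.array([quantile_bound(noisy_model, x, alpha=0.9) for x in cands])   # optimistic objective
lqb_g = np.array([quantile_bound(noisy_constr, x, alpha=0.1) for x in cands])  # optimistic feasibility
feasible = lqb_g <= 0.8
best = cands[feasible][np.argmax(uqb_f[feasible])]
print("next query:", best)
```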
- On the role of Model Uncertainties in Bayesian Optimization [8.659630453400593]
We study the relationship between the BO performance (regret) and uncertainty calibration for popular surrogate models.
Our results show a positive association between calibration error and regret, but interestingly, this association disappears when we control for the type of model in the analysis.
arXiv Detail & Related papers (2023-01-14T21:45:17Z)
- A model aggregation approach for high-dimensional large-scale optimization [2.1104930506758275]
We propose a model aggregation method in the Bayesian optimization (MamBO) algorithm for efficiently solving high-dimensional large-scale optimization problems.
MamBO uses a combination of subsampling and subspace embeddings to collectively address high dimensionality and large-scale issues.
Our proposed model aggregation method reduces the risks of these lower-dimensional surrogate models and improves the robustness of the BO algorithm.
arXiv Detail & Related papers (2022-05-16T08:58:42Z)
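For intuition, a REMBO-style random subspace embedding, one ingredient of the subsampling-plus-embedding recipe the summary describes; the projection scale and clipping are illustrative assumptions, and this sketch is not MamBO itself:

```python
import numpy as np

def embed_objective(objective, D, d, seed=0):
    """Wrap a D-dimensional objective so it is evaluated through a random
    d-dimensional subspace embedding x = A @ y (MamBO combines such
    embeddings with subsampling and aggregates several surrogate models,
    which this sketch omits)."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(D, d)) / np.sqrt(d)   # random projection (scale is an assumption)
    def low_dim_objective(y):
        x = np.clip(A @ y, -1.0, 1.0)          # map back into the box [-1, 1]^D
        return objective(x)
    return low_dim_objective

# toy use: a 100-D objective that secretly depends on only two coordinates
hidden = lambda x: (x[3] - 0.2) ** 2 + (x[57] + 0.4) ** 2
f_low = embed_objective(hidden, D=100, d=4)
print(f_low(np.zeros(4)))   # any low-dimensional optimizer (e.g. BO) can now drive f_low
```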
- Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAEs) are a powerful and widely used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs.
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
arXiv Detail & Related papers (2021-01-06T17:36:26Z)
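The analytic tractability mentioned above rests on the Gaussian product integral. A sketch of the closed-form Cauchy-Schwarz divergence between two GMMs, illustrating the property the paper exploits (not its training code):

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_cross_term(w1, mu1, cov1, w2, mu2, cov2):
    """Analytic integral of the product of two GMMs, using
    int N(x; m1, S1) N(x; m2, S2) dx = N(m1; m2, S1 + S2)."""
    total = 0.0
    for wi, mi, Si in zip(w1, mu1, cov1):
        for wj, mj, Sj in zip(w2, mu2, cov2):
            total += wi * wj * multivariate_normal.pdf(mi, mean=mj, cov=Si + Sj)
    return total

def cs_divergence(gmm_p, gmm_q):
    """Cauchy-Schwarz divergence D_CS = -log( int pq / sqrt(int p^2 int q^2) ),
    closed form for Gaussian mixtures."""
    pq = gmm_cross_term(*gmm_p, *gmm_q)
    pp = gmm_cross_term(*gmm_p, *gmm_p)
    qq = gmm_cross_term(*gmm_q, *gmm_q)
    return -np.log(pq / np.sqrt(pp * qq))

# two 2-D mixtures with two components each (weights, means, covariances)
p = ([0.5, 0.5], [np.zeros(2), np.ones(2)], [np.eye(2), np.eye(2)])
q = ([0.7, 0.3], [np.array([0.5, 0.0]), np.array([2.0, 2.0])], [np.eye(2), 0.5 * np.eye(2)])
print(cs_divergence(p, p))   # ~0 for identical mixtures
print(cs_divergence(p, q))   # > 0 for different mixtures
```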
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)