Rectified Max-Value Entropy Search for Bayesian Optimization
- URL: http://arxiv.org/abs/2202.13597v1
- Date: Mon, 28 Feb 2022 08:11:02 GMT
- Title: Rectified Max-Value Entropy Search for Bayesian Optimization
- Authors: Quoc Phong Nguyen, Bryan Kian Hsiang Low, Patrick Jaillet
- Abstract summary: We develop a rectified max-value entropy search (RMES) acquisition function based on an accurate measure of mutual information.
As a result, RMES shows a consistent improvement over MES on several synthetic function benchmarks and real-world optimization problems.
- Score: 54.26984662139516
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although the existing max-value entropy search (MES) is based on the widely
celebrated notion of mutual information, its empirical performance can suffer
due to two misconceptions whose implications for the exploration-exploitation
trade-off are investigated in this paper. These issues are essential to the
development of future acquisition functions and the improvement of existing
ones, as they motivate an accurate measure of the mutual information, such as
the rectified MES (RMES) acquisition function we develop in this work. Unlike
in the evaluation of MES, we derive a closed-form probability density for the
observation conditioned on the max-value and employ stochastic gradient ascent
with reparameterization to optimize RMES efficiently. As a more principled
acquisition function, RMES shows a consistent improvement over MES on several
synthetic function benchmarks and real-world optimization problems.
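For intuition, the sketch below (assuming PyTorch, a single sampled max-value, and placeholder posterior moments standing in for a fitted GP's predictive mean and standard deviation) shows the standard noiseless MES term and gradient ascent over the query point; the paper's rectified, noise-aware RMES expression and its reparameterized stochastic gradients are derived in the paper itself.

```python
import torch
from torch.distributions import Normal

std_normal = Normal(0.0, 1.0)

def mes_term(mu, sigma, y_star):
    # Standard noiseless MES term for one sampled max-value y* (Wang & Jegelka, 2017):
    #   gamma * pdf(gamma) / (2 * cdf(gamma)) - log cdf(gamma),  gamma = (y* - mu) / sigma,
    # i.e., the entropy reduction from truncating N(mu, sigma^2) at y*.
    gamma = (y_star - mu) / sigma
    cdf = std_normal.cdf(gamma).clamp_min(1e-12)
    pdf = torch.exp(std_normal.log_prob(gamma))
    return gamma * pdf / (2.0 * cdf) - torch.log(cdf)

def posterior(x):
    # Placeholder GP posterior moments; a real BO loop would use a fitted GP here.
    return torch.sin(3.0 * x).sum(), 0.2 + 0.8 * torch.sigmoid(x.sum())

y_star = torch.tensor(1.2)               # one sampled max-value
x = torch.zeros(2, requires_grad=True)   # query point being optimized
opt = torch.optim.Adam([x], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    mu, sigma = posterior(x)
    (-mes_term(mu, sigma, y_star)).backward()  # ascend the acquisition
    opt.step()
```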
Related papers
- Variational Inference of Parameters in Opinion Dynamics Models [9.51311391391997]
This work uses variational inference to estimate the parameters of an opinion dynamics ABM.
We transform the inference process into an optimization problem suitable for automatic differentiation.
Our approach estimates both macroscopic parameters (bounded confidence intervals and backfire thresholds) and microscopic parameters ($200$ categorical, agent-level roles) more accurately than simulation-based and MCMC methods.
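As a generic illustration of this recipe (reparameterized variational inference against a differentiable stand-in likelihood; the paper's ABM-specific likelihood and parameters are not reproduced here):

```python
import torch
from torch.distributions import Normal

# Toy data from a differentiable stand-in model y = theta * x + noise.
torch.manual_seed(0)
x_obs = torch.linspace(-2, 2, 50)
y_obs = 1.5 * x_obs + 0.3 * torch.randn(50)

m = torch.zeros((), requires_grad=True)    # variational mean
rho = torch.zeros((), requires_grad=True)  # unconstrained scale
prior = Normal(0.0, 2.0)
opt = torch.optim.Adam([m, rho], lr=0.05)
for _ in range(500):
    opt.zero_grad()
    q = Normal(m, torch.nn.functional.softplus(rho))
    theta = q.rsample()                    # reparameterized sample of the parameter
    log_lik = Normal(theta * x_obs, 0.3).log_prob(y_obs).sum()
    elbo = log_lik + prior.log_prob(theta) - q.log_prob(theta)  # single-sample ELBO
    (-elbo).backward()                     # maximize the ELBO by gradient ascent
    opt.step()
```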
arXiv Detail & Related papers (2024-03-08T14:45:18Z)
- Variational Entropy Search for Adjusting Expected Improvement [3.04585143845864]
Expected Improvement (EI) is the most commonly used acquisition function in black-box optimization.
We have developed the Variational Entropy Search (VES) methodology and the VES-Gamma algorithm, which adapt EI by incorporating information-theoretic principles.
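For reference, the closed-form EI that VES-Gamma adjusts, under a Gaussian posterior at the query point (a standard formula, sketched with SciPy):

```python
from scipy.stats import norm

def expected_improvement(mu, sigma, best):
    # Closed-form EI for maximization with posterior mean mu, std sigma > 0:
    #   EI = (mu - best) * Phi(z) + sigma * phi(z),  z = (mu - best) / sigma.
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

print(expected_improvement(0.8, 0.3, 0.5))  # ~0.325
```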
arXiv Detail & Related papers (2024-02-17T17:37:53Z)
- Unexpected Improvements to Expected Improvement for Bayesian Optimization [23.207497480389208]
We propose LogEI, a new family of acquisition functions whose members have optima identical or approximately equal to those of their canonical counterparts, but are substantially easier to optimize numerically.
Our empirical results show that members of the LogEI family substantially improve on the optimization performance of their canonical counterparts and, surprisingly, are on par with or exceed the performance of recent state-of-the-art acquisition functions.
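The numerical issue LogEI addresses can be seen by taking a naive log of the same closed form; a sketch (this does not reproduce the paper's tail-stable evaluation, which is its actual contribution):

```python
import numpy as np
from scipy.stats import norm

def naive_log_ei(mu, sigma, best):
    # log EI via the identity EI = sigma * (phi(z) + z * Phi(z)). For z << 0
    # the inner term underflows to 0 and the log returns -inf, flattening the
    # acquisition landscape; LogEI evaluates this quantity stably instead.
    z = (mu - best) / sigma
    return np.log(sigma * (norm.pdf(z) + z * norm.cdf(z)))

print(naive_log_ei(0.0, 0.1, 5.0))  # -inf: gradient information is lost
```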
arXiv Detail & Related papers (2023-10-31T17:59:56Z)
- Let's reward step by step: Step-Level reward model as the Navigators for Reasoning [64.27898739929734]
Process-Supervised Reward Model (PRM) furnishes LLMs with step-by-step feedback during the training phase.
We propose a greedy search algorithm that employs the step-level feedback from PRM to optimize the reasoning pathways explored by LLMs.
To explore the versatility of our approach, we develop a novel method to automatically generate a step-level reward dataset for coding tasks and observe similarly improved performance on code generation tasks.
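A minimal sketch of such a greedy step-level search, with hypothetical callables standing in for the LLM proposer and the PRM scorer (names assumed, not from the paper):

```python
from typing import Callable, List

def greedy_step_search(
    propose_steps: Callable[[List[str]], List[str]],  # hypothetical LLM step proposer
    step_reward: Callable[[List[str], str], float],   # hypothetical PRM step scorer
    max_steps: int = 8,
) -> List[str]:
    # At each step, score every candidate continuation with the
    # process-supervised reward model and greedily keep the best one.
    path: List[str] = []
    for _ in range(max_steps):
        candidates = propose_steps(path)
        if not candidates:
            break
        path.append(max(candidates, key=lambda step: step_reward(path, step)))
    return path
```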
arXiv Detail & Related papers (2023-10-16T05:21:50Z)
- Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
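The entropy in question can be sketched as a Bayes optimal risk (notation assumed from the summary: a loss $\ell$ and an action set $\mathcal{A}$):

```latex
% Decision-theoretic entropy as the Bayes optimal risk; choosing \mathcal{A}
% to be densities and \ell(f, a) = -\log a(f) recovers Shannon entropy.
H_{\ell,\mathcal{A}}\big[p(f)\big] = \inf_{a \in \mathcal{A}} \mathbb{E}_{p(f)}\big[\ell(f, a)\big]
```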
arXiv Detail & Related papers (2022-10-04T04:43:58Z)
- Trustworthy Multimodal Regression with Mixture of Normal-inverse Gamma Distributions [91.63716984911278]
We introduce a novel Mixture of Normal-Inverse Gamma distributions (MoNIG) algorithm, which efficiently estimates uncertainty in a principled way for the adaptive integration of different modalities and produces trustworthy regression results.
Experimental results on both synthetic and different real-world data demonstrate the effectiveness and trustworthiness of our method on various multimodal regression tasks.
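As background, a single Normal-Inverse-Gamma posterior $NIG(\gamma, \nu, \alpha, \beta)$ over $(\mu, \sigma^2)$ yields the standard evidential-regression decomposition sketched below; the paper's MoNIG additionally fuses one such NIG per modality, with the exact fusion rule given there:

```python
def nig_prediction(gamma, nu, alpha, beta):
    # For NIG(gamma, nu, alpha, beta) over (mu, sigma^2) with alpha > 1:
    # prediction E[mu], aleatoric uncertainty E[sigma^2], epistemic Var[mu].
    return gamma, beta / (alpha - 1.0), beta / (nu * (alpha - 1.0))
```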
arXiv Detail & Related papers (2021-11-11T14:28:12Z)
- Counterfactual Explanations for Arbitrary Regression Models [8.633492031855655]
We present a new method for counterfactual explanations (CFEs) based on Bayesian optimisation.
Our method is a globally convergent search algorithm with support for arbitrary regression models and constraints like feature sparsity and actionable recourse.
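A toy sketch of the search problem (random search stands in for the paper's Bayesian-optimisation loop, and the $L_0$ penalty is one way to encode the feature-sparsity constraint; `predict` is any black-box regression model):

```python
import numpy as np

def counterfactual_search(predict, x0, target, n_iter=2000, lam=0.1, seed=0):
    # Find x near x0 with predict(x) close to target; the L0 term keeps
    # the number of changed features small (sparse, actionable changes).
    rng = np.random.default_rng(seed)
    best_x, best_loss = x0.copy(), float("inf")
    for _ in range(n_iter):
        mask = rng.random(x0.shape) < 0.3          # perturb only a few features
        x = x0 + rng.normal(scale=0.5, size=x0.shape) * mask
        loss = abs(predict(x) - target) + lam * np.count_nonzero(x != x0)
        if loss < best_loss:
            best_x, best_loss = x, loss
    return best_x
```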
arXiv Detail & Related papers (2021-06-29T09:53:53Z)
- A maximum-entropy approach to off-policy evaluation in average-reward MDPs [54.967872716145656]
This work focuses on off-policy evaluation (OPE) with function approximation in infinite-horizon undiscounted Markov decision processes (MDPs).
We provide the first finite-sample OPE error bound, extending existing results beyond the episodic and discounted cases.
We show that this results in an exponential-family distribution whose sufficient statistics are the features, paralleling maximum-entropy approaches in supervised learning.
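The exponential-family claim can be made concrete as follows (notation assumed, not taken from the paper): maximizing entropy over state-action distributions subject to matching feature expectations $\mathbb{E}_p[\phi(s,a)] = c$ yields

```latex
% Max-entropy solution with feature-matching constraints: an exponential
% family whose sufficient statistics are the features \phi(s, a).
p_{\theta}(s, a) \propto \exp\big(\theta^{\top} \phi(s, a)\big)
```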
arXiv Detail & Related papers (2020-06-17T18:13:37Z)
- Fast Objective & Duality Gap Convergence for Non-Convex Strongly-Concave Min-Max Problems with PL Condition [52.08417569774822]
This paper focuses on methods for solving smooth non-convex strongly-concave min-max problems, which have received increasing attention due to their role in deep learning (e.g., deep AUC maximization).
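For reference, the Polyak-Lojasiewicz (PL) condition for a function $g$ with minimum value $g^*$ is the gradient-dominance inequality, for some $\mu > 0$:

```latex
% PL condition: every point with a small gradient is nearly optimal,
% without requiring convexity of g.
\tfrac{1}{2}\,\lVert \nabla g(x) \rVert^{2} \ge \mu \big(g(x) - g^{*}\big)
```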
arXiv Detail & Related papers (2020-06-12T00:32:21Z)
- Finding Optimal Points for Expensive Functions Using Adaptive RBF-Based Surrogate Model Via Uncertainty Quantification [11.486221800371919]
We propose a novel global optimization framework using an adaptive Radial Basis Function (RBF)-based surrogate model via uncertainty quantification.
It first employs an RBF-based Bayesian surrogate model to approximate the true function, where the parameters of the RBFs can be adaptively estimated and updated each time a new point is explored.
It then utilizes a model-guided selection criterion to identify a new point from a candidate set for function evaluation.
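A compact sketch of one fit-then-select step, using SciPy's RBFInterpolator and a generic score that trades off predicted value against distance from evaluated points (the candidate score is a common heuristic, not necessarily the paper's exact criterion; a unit-box domain and minimization are assumed):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def propose_next(X, y, n_cand=500, w=0.5, seed=0):
    # Refit the RBF surrogate on all evaluated points (X, y), then pick the
    # candidate balancing low predicted value against distance from X.
    rng = np.random.default_rng(seed)
    surrogate = RBFInterpolator(X, y)
    cand = rng.uniform(0.0, 1.0, size=(n_cand, X.shape[1]))
    pred = surrogate(cand)
    dist = np.linalg.norm(cand[:, None, :] - X[None, :, :], axis=-1).min(axis=1)
    # Normalize both criteria to [0, 1]; prefer low prediction, high distance.
    p = (pred - pred.min()) / (np.ptp(pred) + 1e-12)
    d = 1.0 - (dist - dist.min()) / (np.ptp(dist) + 1e-12)
    return cand[np.argmin(w * p + (1.0 - w) * d)]
```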
arXiv Detail & Related papers (2020-01-19T16:15:55Z)