Joint Entropy Search For Maximally-Informed Bayesian Optimization
- URL: http://arxiv.org/abs/2206.04771v1
- Date: Thu, 9 Jun 2022 21:19:07 GMT
- Title: Joint Entropy Search For Maximally-Informed Bayesian Optimization
- Authors: Carl Hvarfner and Frank Hutter and Luigi Nardi
- Abstract summary: We propose a novel information-theoretic acquisition function that considers the entropy over the joint optimal probability density over both input and output space.
Joint Entropy Search (JES) shows superior decision-making and yields state-of-the-art performance among information-theoretic approaches across a wide suite of tasks.
- Score: 38.10887297038352
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Information-theoretic Bayesian optimization techniques have become popular
for optimizing expensive-to-evaluate black-box functions due to their
non-myopic qualities. Entropy Search and Predictive Entropy Search both
consider the entropy over the optimum in the input space, while the recent
Max-value Entropy Search considers the entropy over the optimal value in the
output space. We propose Joint Entropy Search (JES), a novel
information-theoretic acquisition function that considers an entirely new
quantity, namely the entropy over the joint optimal probability density over
both input and output space. To incorporate this information, we consider the
reduction in entropy from conditioning on fantasized optimal input/output
pairs. The resulting approach primarily relies on standard GP machinery and
removes complex approximations typically associated with information-theoretic
methods. With minimal computational overhead, JES shows superior
decision-making, and yields state-of-the-art performance for
information-theoretic approaches across a wide suite of tasks. As a
light-weight approach with superior results, JES provides a new go-to
acquisition function for Bayesian optimization.
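To make the mechanism concrete, here is a minimal, self-contained sketch of the idea in the abstract: score each candidate by the expected drop in predictive entropy after conditioning the GP on fantasized optimal input/output pairs obtained by Thompson sampling. This is our own grid-based illustration, not the authors' implementation; all function names (`jes_sketch`, `gp_posterior`, `truncated_normal_entropy`) and hyperparameters are illustrative, and the truncated-Gaussian entropy is a simple stand-in for the paper's conditioning on y <= f*.

```python
import numpy as np
from scipy.stats import norm

def rbf_kernel(A, B, lengthscale=0.2, variance=1.0):
    # Squared-exponential kernel matrix between the rows of A and B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xq, noise=1e-6):
    # Standard GP posterior mean/covariance at query points Xq given data (X, y).
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks, Kss = rbf_kernel(X, Xq), rbf_kernel(Xq, Xq)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    V = np.linalg.solve(L, Ks)
    return Ks.T @ alpha, Kss - V.T @ V

def truncated_normal_entropy(mu, var, upper):
    # Closed-form entropy of N(mu, var) truncated to (-inf, upper].
    sigma = np.sqrt(var)
    beta = (upper - mu) / sigma
    Z = np.clip(norm.cdf(beta), 1e-12, None)
    return np.log(np.sqrt(2.0 * np.pi * np.e) * sigma * Z) - beta * norm.pdf(beta) / (2.0 * Z)

def jes_sketch(X, y, Xcand, n_fantasies=16, noise=1e-6, seed=0):
    # Expected entropy reduction from conditioning on fantasized optima (x*, f*).
    rng = np.random.default_rng(seed)
    mu, cov = gp_posterior(X, y, Xcand, noise)
    var = np.clip(np.diag(cov), 1e-12, None) + noise
    h_prior = 0.5 * np.log(2.0 * np.pi * np.e * var)   # Gaussian entropy before conditioning
    h_post = np.zeros(len(Xcand))
    for _ in range(n_fantasies):
        # Thompson-sample a posterior path on the grid; its argmax is a fantasized optimum.
        f = rng.multivariate_normal(mu, cov + 1e-10 * np.eye(len(Xcand)))
        j = int(np.argmax(f))
        # Condition the GP on the fantasized pair (x*, f*) as an extra observation.
        X2, y2 = np.vstack([X, Xcand[j:j + 1]]), np.append(y, f[j])
        mu2, cov2 = gp_posterior(X2, y2, Xcand, noise)
        var2 = np.clip(np.diag(cov2), 1e-12, None) + noise
        # Knowing f* is the maximum also truncates the conditioned predictive at f*.
        h_post += truncated_normal_entropy(mu2, var2, f[j])
    return h_prior - h_post / n_fantasies

# Toy usage on a 1-D maximization problem.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, (5, 1))
y = np.sin(6.0 * X[:, 0]) + 0.01 * rng.standard_normal(5)
Xcand = np.linspace(0.0, 1.0, 200)[:, None]
x_next = Xcand[np.argmax(jes_sketch(X, y, Xcand))]
```

Note how the sketch reflects the abstract's claim about simplicity: everything reduces to ordinary GP conditioning plus a closed-form (truncated) Gaussian entropy, with no expectation-propagation-style approximations.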
Related papers
- Asymptotically Optimal Change Detection for Unnormalized Pre- and Post-Change Distributions [65.38208224389027]
This paper addresses the problem of detecting changes when only unnormalized pre- and post-change distributions are accessible.
Our approach is based on estimating the Cumulative Sum (CUSUM) statistic, which is known to produce optimal performance.
arXiv Detail & Related papers (2024-10-18T17:13:29Z)
- Discovering Preference Optimization Algorithms with and for Large Language Models [50.843710797024805]
Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs.
We perform objective discovery to automatically discover new state-of-the-art preference optimization algorithms without (expert) human intervention.
Experiments demonstrate the state-of-the-art performance of DiscoPOP, a novel algorithm that adaptively blends logistic and exponential losses.
arXiv Detail & Related papers (2024-06-12T16:58:41Z)
- Halfway Escape Optimization: A Quantum-Inspired Solution for General Optimization Problems [6.3816899727206895]
This paper first proposes the Halfway Escape Optimization (HEO) algorithm, a quantum-inspired metaheuristic designed to address general optimization problems.
After introducing the HEO mechanisms, the study presents a comprehensive evaluation of HEO's performance against widely used optimization algorithms.
Tests of HEO on Pressure Vessel Design and Tubular Column Design indicate its feasibility and potential for real-time applications.
arXiv Detail & Related papers (2024-05-05T08:43:07Z)
- Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z)
- Bayesian Optimization with Informative Covariance [13.113313427848828]
We propose novel informative covariance functions for optimization, leveraging nonstationarity to encode preferences for certain regions of the search space.
We demonstrate that the proposed functions can increase the sample efficiency of Bayesian optimization in high dimensions, even under weak prior information.
arXiv Detail & Related papers (2022-08-04T15:05:11Z)
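The idea of encoding spatial preferences through the covariance admits a simple generic illustration: modulate a stationary RBF kernel with an input-dependent amplitude that is larger near a region believed to contain the optimum. The construction below is our own minimal example in that spirit, not the covariance functions proposed in the paper; the anchor `x0` and all constants are hypothetical.

```python
import numpy as np

def amplitude(X, x0=0.5, width=0.2, base=0.5, boost=1.5):
    # Prior amplitude sigma(x): larger near the anchor x0, so the GP
    # expects larger function values (more signal) in that region.
    return base + boost * np.exp(-0.5 * np.sum((X - x0) ** 2, axis=1) / width**2)

def informative_rbf(A, B, lengthscale=0.2):
    # Nonstationary kernel k(x, x') = sigma(x) * sigma(x') * k_RBF(x, x').
    # The product form stays positive semi-definite for any positive sigma(.).
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return amplitude(A)[:, None] * amplitude(B)[None, :] * np.exp(-0.5 * d2 / lengthscale**2)

# Points near x0 = 0.5 get larger prior variance than points far away.
X = np.linspace(0.0, 1.0, 5)[:, None]
print(np.diag(informative_rbf(X, X)).round(3))
```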
- Surrogate modeling for Bayesian optimization beyond a single Gaussian process [62.294228304646516]
We propose a novel Bayesian surrogate model to balance exploration with exploitation of the search space.
To make function sampling scalable, a random feature-based kernel approximation is leveraged per GP model.
Convergence of the proposed EGP-TS to the global optimum is further established via an analysis based on the notion of Bayesian regret.
arXiv Detail & Related papers (2022-05-27T16:43:10Z)
- Trusted-Maximizers Entropy Search for Efficient Bayesian Optimization [39.824086260578646]
This paper presents a novel trusted-maximizers entropy search (TES) acquisition function.
It measures how much an input contributes to the information gain on a query over a finite set of trusted maximizers.
arXiv Detail & Related papers (2021-07-30T07:25:07Z)
- Directed particle swarm optimization with Gaussian-process-based function forecasting [15.733136147164032]
Particle swarm optimization (PSO) is an iterative search method that moves a set of candidate solutions around a search space toward the best known global and local solutions with randomized step lengths (see the sketch below).
We show that our algorithm exhibits desirable exploratory and exploitative behavior.
arXiv Detail & Related papers (2021-02-08T13:02:57Z)
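For concreteness, the sketch below shows the textbook PSO update that the summary describes, without the paper's Gaussian-process-based function forecasting; all constants are conventional defaults rather than the authors' settings.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=20, iters=100, bounds=(-5.0, 5.0),
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))          # positions
    v = np.zeros((n_particles, dim))                     # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)]                      # best position seen by the swarm
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia plus attraction to personal and global bests, with randomized
        # step lengths r1, r2 -- the behavior the summary above describes.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[np.argmin(pbest_val)]
    return g, float(pbest_val.min())

x_best, f_best = pso_minimize(lambda z: float(np.sum(z**2)), dim=3)
```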
- Cross Entropy Hyperparameter Optimization for Constrained Problem Hamiltonians Applied to QAOA [68.11912614360878]
Hybrid quantum-classical algorithms such as the Quantum Approximate Optimization Algorithm (QAOA) are considered among the most promising approaches for exploiting near-term quantum computers in practical applications.
Such algorithms are usually implemented in a variational form, combining a classical optimization method with a quantum machine to find good solutions to an optimization problem.
In this study we apply a Cross-Entropy method to shape the energy landscape, which allows the classical optimizer to find better parameters more easily and hence results in improved performance (see the sketch below).
arXiv Detail & Related papers (2020-03-11T13:52:41Z)
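As a generic illustration of the classical outer loop mentioned above, the following is a minimal cross-entropy method for continuous parameters: sample candidates from a Gaussian, keep the elite fraction, and refit the sampling distribution. It is not the paper's QAOA-specific procedure, and `objective` is a hypothetical stand-in for the expectation value measured on the quantum circuit.

```python
import numpy as np

def cross_entropy_minimize(objective, dim, iters=30, pop=64, elite_frac=0.125, seed=0):
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)              # initial sampling distribution
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        samples = mu + sigma * rng.standard_normal((pop, dim))
        scores = np.apply_along_axis(objective, 1, samples)
        elites = samples[np.argsort(scores)[:n_elite]]   # lowest objective values win
        # Refit the Gaussian to the elites; the jitter keeps sigma from collapsing to 0.
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mu

# Toy stand-in for a QAOA energy landscape over 2p variational angles (p = 2).
best_angles = cross_entropy_minimize(lambda a: float(np.sum(np.cos(a))), dim=4)
```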