Bayesian Variational Optimization for Combinatorial Spaces
- URL: http://arxiv.org/abs/2011.02004v1
- Date: Tue, 3 Nov 2020 20:56:13 GMT
- Title: Bayesian Variational Optimization for Combinatorial Spaces
- Authors: Tony C. Wu, Daniel Flam-Shepherd, Alán Aspuru-Guzik
- Abstract summary: Broad applications include the study of molecules, proteins, DNA, device structures and quantum circuit designs.
Optimization over categorical spaces is needed to find optimal or Pareto-optimal solutions.
We introduce a variational Bayesian optimization method that combines variational optimization and continuous relaxations.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper focuses on Bayesian optimization in combinatorial spaces. In many
applications in the natural sciences, including the study of molecules, proteins, DNA,
device structures, and quantum circuit designs, optimization over combinatorial
categorical spaces is needed to find optimal or Pareto-optimal solutions. However, only
a limited number of methods have been proposed to tackle this problem, and many of them
rely on Gaussian processes for combinatorial Bayesian optimization. Gaussian processes
suffer from scalability issues for large data sizes, as their cost scales cubically with
the number of data points; this is often impractical for optimizing large search spaces.
Here, we introduce a variational Bayesian optimization method that combines variational
optimization and continuous relaxations for the optimization of the acquisition function
in Bayesian optimization. Critically, this method allows for gradient-based optimization
and can handle problems with large data sizes and data dimensions. We show that the
performance of our method is comparable to state-of-the-art methods while maintaining
its scalability advantage. We also apply our method to molecular
optimization.
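Since the abstract describes the method only at a high level, the following is a minimal sketch of the stated core idea: variational optimization of an acquisition function over a categorical space, using a score-function (REINFORCE) gradient estimator so the search distribution can be updated by gradient ascent. The toy `acquisition` function and all names are illustrative assumptions, not the authors' code, and the sketch omits the continuous-relaxation (e.g., Gumbel-softmax-style) component the abstract also mentions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 4            # 8 categorical variables, 4 categories each

def acquisition(x):
    # Toy stand-in for a surrogate model's acquisition value of a
    # categorical configuration x (an int vector of length d).
    target = np.arange(d) % k
    return np.sum(x == target)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

logits = np.zeros((d, k))   # parameters of the search distribution q
lr, batch = 0.5, 64

for step in range(200):
    probs = softmax(logits)                                 # (d, k)
    # Sample a batch of categorical configurations from q via inverse CDF.
    u = rng.random((batch, d, 1))
    samples = (u < probs.cumsum(axis=-1)).argmax(axis=-1)   # (batch, d)
    values = np.array([acquisition(s) for s in samples])
    baseline = values.mean()                                # variance reduction
    # Score-function gradient: grad_logits log q(x) = onehot(x) - probs.
    onehot = np.eye(k)[samples]                             # (batch, d, k)
    grad = ((values - baseline)[:, None, None] * (onehot - probs)).mean(axis=0)
    logits += lr * grad                                     # gradient ascent

best = softmax(logits).argmax(axis=-1)
print("mode of q:", best, "acquisition:", acquisition(best))
```

Because the update only needs samples and their acquisition values, the per-step cost is independent of how many points the surrogate was trained on, which is the scalability argument the abstract makes.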
Related papers
- Harmonic Oscillator based Particle Swarm Optimization [0.0]
In general, a set of parameters (the parameter space) is tuned to find the lowest value of a function of these parameters (the cost function).
In most cases the parameter space is too large to be searched exhaustively, and the most efficient techniques combine stochastic elements (randomness in the initial settings and in decision making during the optimization process) with well-designed deterministic processes.
Here we present a method that integrates Particle Swarm Optimization (PSO), a highly effective and successful algorithm inspired by a flock of birds searching for food, with the principles of harmonic oscillators.
This physics-based approach introduces the concept of collective energy, enabling a smoother and…
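For reference, here is a minimal vanilla PSO loop in numpy; the harmonic-oscillator and collective-energy modifications described above are the paper's contribution and are not reproduced here. The sphere cost function and all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def cost(x):
    return np.sum(x**2, axis=-1)   # toy cost: sphere function

n, d = 30, 5                       # 30 particles in 5 dimensions
w, c1, c2 = 0.7, 1.5, 1.5          # inertia, cognitive, social weights

x = rng.uniform(-5, 5, (n, d))     # positions
v = np.zeros((n, d))               # velocities
pbest, pval = x.copy(), cost(x)    # per-particle best position/value
gbest = pbest[pval.argmin()]       # swarm-wide best position

for step in range(200):
    r1, r2 = rng.random((n, d)), rng.random((n, d))
    # Pull each particle toward its own best and the swarm's best.
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    val = cost(x)
    improved = val < pval
    pbest[improved], pval[improved] = x[improved], val[improved]
    gbest = pbest[pval.argmin()]

print("best cost:", pval.min())
```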
arXiv Detail & Related papers (2024-10-10T15:35:45Z)
- A survey and benchmark of high-dimensional Bayesian optimization of discrete sequences [12.248793682283964]
Optimizing discrete black-box functions is key in several domains, e.g. protein engineering and drug design.
We develop a unified framework to test a vast array of high-dimensional Bayesian optimization methods and a collection of standardized black-box functions.
These two components of the benchmark are each supported by flexible, scalable, and easily extendable software libraries.
arXiv Detail & Related papers (2024-06-07T08:39:40Z)
- Extrinsic Bayesian Optimizations on Manifolds [1.3477333339913569]
We propose an extrinsic Bayesian optimization (eBO) framework for general optimization problems on manifolds.
Our approach is to employ extrinsic Gaussian processes by first embedding the manifold into some higher-dimensional Euclidean space.
This leads to efficient and scalable algorithms for optimization over complex manifolds.
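A minimal sketch of the extrinsic idea for the simplest case, assuming points on the circle S^1 embedded into R^2: the Gaussian process is fit on the embedded coordinates with scikit-learn, and candidates are scored with a UCB acquisition on a grid. The objective and grid are illustrative assumptions, not the eBO framework itself.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def embed(theta):
    # Embed the circle S^1 into Euclidean space R^2.
    return np.column_stack([np.cos(theta), np.sin(theta)])

def objective(theta):
    return np.sin(3 * theta) + 0.5 * np.cos(theta)  # toy function on S^1

rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, 6)        # initial design points
y = objective(theta)

candidates = np.linspace(0, 2 * np.pi, 400)
for it in range(15):
    # The GP kernel acts on the embedded (extrinsic) coordinates.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(embed(theta), y)
    mu, sd = gp.predict(embed(candidates), return_std=True)
    nxt = candidates[np.argmax(mu + 2.0 * sd)]  # UCB acquisition on the grid
    theta = np.append(theta, nxt)
    y = np.append(y, objective(nxt))

print("best theta:", theta[y.argmax()], "value:", y.max())
```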
arXiv Detail & Related papers (2022-12-21T06:10:12Z)
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
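A minimal sketch of ZO sign-based gradient descent on a generic continuous black box, assuming a Gaussian-smoothing forward-difference gradient estimator; in the paper the objective would be a molecular score (e.g., a Guacamol task) queried through a latent representation, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def f(x):
    # Toy black-box objective standing in for a molecular score (minimized).
    return np.sum((x - 1.0)**2)

d, mu, lr, q = 20, 1e-3, 0.05, 10   # dim, smoothing radius, step size, # directions
x = np.zeros(d)

for step in range(500):
    g = np.zeros(d)
    fx = f(x)
    for _ in range(q):
        u = rng.standard_normal(d)
        g += (f(x + mu * u) - fx) / mu * u   # zeroth-order gradient estimate
    g /= q
    x -= lr * np.sign(g)                     # ZO-signGD: step on the sign only

print("f(x) after ZO-signGD:", f(x))
```

Using only the sign of the noisy gradient estimate makes the step size insensitive to the estimator's magnitude errors, which is the usual motivation for sign-based ZO updates.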
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
- Divide and Learn: A Divide and Conquer Approach for Predict+Optimize [50.03608569227359]
The predict+optimize problem combines machine learning of problem coefficients with an optimization problem that uses the predicted coefficients.
We show how to directly express the loss of the optimization problem in terms of the predicted coefficients as a piece-wise linear function.
We propose a novel divide and conquer algorithm to tackle optimization problems without this restriction and predict their coefficients using the optimization loss.
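A minimal illustration of the predict+optimize loss (regret), assuming a toy selection problem: the optimization layer picks the k items with the largest coefficients, and the loss of a predicted coefficient vector is the true-objective gap between the induced decision and the true optimum. This shows only the loss definition, not the paper's divide-and-conquer algorithm; for this discrete problem the loss is piece-wise constant in the predictions, while the paper's linear-programming setting yields a piece-wise linear loss.

```python
import numpy as np

def decide(c, k=3):
    # Optimization layer: choose the k items with the largest coefficients.
    return np.argsort(c)[-k:]

def regret(c_pred, c_true, k=3):
    # Predict+optimize loss: true value lost by acting on the predictions.
    best = c_true[decide(c_true, k)].sum()
    got = c_true[decide(c_pred, k)].sum()
    return best - got

c_true = np.array([5.0, 1.0, 4.0, 2.0, 8.0, 3.0])
c_pred = np.array([4.0, 6.0, 3.5, 2.0, 7.0, 1.0])  # imperfect predictions
print("regret:", regret(c_pred, c_true))  # 3.0: item 1 chosen instead of item 2
```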
arXiv Detail & Related papers (2020-12-04T00:26:56Z)
- Combinatorial Bayesian Optimization with Random Mapping Functions to Convex Polytopes [43.19936635161588]
We present a method for Bayesian optimization in a combinatorial space that can operate well even when the space is large.
Our algorithm shows satisfactory performance compared to existing methods.
arXiv Detail & Related papers (2020-11-26T02:22:41Z)
- Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper subproblems for lower bounding.
In total, the proposed method reduces the time to convergence by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z)
- Learning to Guide Random Search [111.71167792453473]
We consider derivative-free optimization of a high-dimensional function that lies on a latent low-dimensional manifold.
We develop an online learning approach that learns this manifold while performing the optimization.
We empirically evaluate the method on continuous optimization benchmarks and high-dimensional continuous control problems.
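A rough sketch of the idea under strong simplifying assumptions: plain random search whose proposal directions are periodically refit (via SVD of recent elite iterates) to a low-dimensional subspace. The paper's actual online learning procedure differs and is not reproduced here; the hidden-subspace objective is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(4)
D, d_lat = 50, 3                        # ambient and assumed latent dims

# Hidden low-dimensional structure: f depends on x only through A @ x.
A = rng.standard_normal((d_lat, D)) / np.sqrt(D)
def f(x):
    return np.sum((A @ x - 1.0)**2)

x = np.zeros(D)
fx = f(x)
basis = np.eye(D)                       # start with full-space directions
elites = []

for step in range(2000):
    direction = basis.T @ rng.standard_normal(basis.shape[0])
    cand = x + 0.1 * direction
    fc = f(cand)
    if fc < fx:                         # greedy random search step
        x, fx = cand, fc
        elites.append(x)
    # Periodically refit a low-dimensional search basis from elite iterates.
    if len(elites) >= 20 and step % 200 == 0:
        E = np.array(elites[-20:])
        E = E - E.mean(axis=0)
        _, _, vt = np.linalg.svd(E, full_matrices=False)
        basis = vt[:d_lat]              # top right-singular vectors

print("final f:", fx)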
arXiv Detail & Related papers (2020-04-25T19:21:14Z)
- Incorporating Expert Prior in Bayesian Optimisation via Space Warping [54.412024556499254]
In big search spaces, the algorithm goes through several low-function-value regions before reaching the optimum of the function.
One approach to alleviate this cold-start phase is to use prior knowledge that can accelerate the optimisation.
In this paper, we represent the prior knowledge about the function optimum through a prior distribution.
The prior distribution is then used to warp the search space in such a way that it expands around the high-probability region of the function optimum and shrinks around the low-probability regions.
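A minimal sketch of one way to realize such a warp, assuming a Beta prior over the location of the optimum in a unit interval: pushing uniform search points through the prior's inverse CDF concentrates them near the prior mode while keeping the whole space reachable. The Beta parameters are illustrative assumptions, not the paper's construction.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(5)

# Expert prior: optimum believed to lie near 0.75 in [0, 1].
prior = beta(a=7.0, b=3.0)             # mode = (7-1)/(7+3-2) = 0.75

def warp(u):
    # Map uniform search coordinates through the prior's inverse CDF,
    # expanding the space around high-prior regions of the optimum.
    return prior.ppf(u)

u = rng.random(10)                      # points an optimizer would propose
x = warp(u)                             # where they actually get evaluated
print(np.round(x, 2))                   # concentrated around ~0.75
```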
arXiv Detail & Related papers (2020-03-27T06:18:49Z)
- Cross Entropy Hyperparameter Optimization for Constrained Problem Hamiltonians Applied to QAOA [68.11912614360878]
Hybrid quantum-classical algorithms such as the Quantum Approximate Optimization Algorithm (QAOA) are considered among the most promising approaches for exploiting near-term quantum computers in practical applications.
Such algorithms are usually implemented in a variational form, combining a classical optimization method with a quantum machine to find good solutions to an optimization problem.
In this study we apply a Cross-Entropy method to shape the cost landscape, which allows the classical optimizer to find better parameters more easily and hence results in improved performance.
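A minimal sketch of the cross-entropy method for parameter search, assuming a cheap classical stand-in for the QAOA energy landscape; in the paper the cost would be the expectation value returned by the quantum machine, and the toy `energy` function here is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

def energy(params):
    # Toy stand-in for the QAOA expectation value <H> at the given
    # variational parameters (e.g., gammas and betas).
    return np.sum(np.cos(params - 0.8)**2 - np.cos(2 * params))

d = 4                              # e.g., (gamma_1, beta_1, gamma_2, beta_2)
mu, sigma = np.zeros(d), np.ones(d)
n_samples, n_elite = 100, 10

for it in range(50):
    # Sample candidate parameters from the current search distribution.
    thetas = mu + sigma * rng.standard_normal((n_samples, d))
    costs = np.array([energy(t) for t in thetas])
    elite = thetas[np.argsort(costs)[:n_elite]]   # keep lowest-energy samples
    # Refit the distribution to the elites (small floor keeps sigma > 0).
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-3

print("CE estimate of optimal parameters:", np.round(mu, 3))
print("energy:", energy(mu))
```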
arXiv Detail & Related papers (2020-03-11T13:52:41Z)