Kernel Learning for Sample Constrained Black-Box Optimization
- URL: http://arxiv.org/abs/2507.20533v1
- Date: Mon, 28 Jul 2025 05:32:11 GMT
- Title: Kernel Learning for Sample Constrained Black-Box Optimization
- Authors: Rajalaxmi Rajagopalan, Yu-Lin Wei, Romit Roy Choudhury
- Abstract summary: We propose a new method to learn the kernel of a Gaussian Process. Our idea is to create a continuous kernel space in the latent space of a variational autoencoder, and run an auxiliary optimization to identify the best kernel. Results hold not only across synthetic benchmark functions but also in real applications.
- Score: 7.054093620465401
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Black box optimization (BBO) focuses on optimizing unknown functions in high-dimensional spaces. In many applications, sampling the unknown function is expensive, imposing a tight sample budget. Ongoing work is making progress on reducing the sample budget by learning the shape/structure of the function, known as kernel learning. We propose a new method to learn the kernel of a Gaussian Process. Our idea is to create a continuous kernel space in the latent space of a variational autoencoder, and run an auxiliary optimization to identify the best kernel. Results show that the proposed method, Kernel Optimized Blackbox Optimization (KOBO), outperforms state of the art by estimating the optimal at considerably lower sample budgets. Results hold not only across synthetic benchmark functions but also in real applications. We show that a hearing aid may be personalized with fewer audio queries to the user, or a generative model could converge to desirable images from limited user ratings.
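A minimal sketch of the idea described in the abstract, assuming a pre-trained decoder that maps latent vectors to GP kernels; the function names (`decode_kernel`, `bbo_with_kernel`, `kernel_search`) and the random-search auxiliary loop are illustrative stand-ins, not the authors' implementation:

```python
# Illustrative sketch only (not the authors' code): an auxiliary search over a
# latent "kernel space", where each latent vector decodes to a GP kernel that is
# scored by how well GP-based black-box optimization performs with it.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern

def decode_kernel(z):
    """Hypothetical stand-in for the VAE decoder: map a latent vector to a kernel."""
    length_scale = float(np.exp(z[0]))
    mix = 1.0 / (1.0 + np.exp(-z[1]))  # blend weight in [0, 1]
    return mix * RBF(length_scale) + (1.0 - mix) * Matern(length_scale, nu=1.5)

def bbo_with_kernel(f, kernel, bounds, budget=20, rng=None):
    """GP black-box minimization with a fixed kernel; returns the best value found."""
    if rng is None:
        rng = np.random.default_rng(0)
    X = rng.uniform(bounds[0], bounds[1], size=(5, len(bounds[0])))
    y = np.array([f(x) for x in X])
    for _ in range(budget - len(X)):
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True,
                                      optimizer=None).fit(X, y)  # keep decoded kernel fixed
        cand = rng.uniform(bounds[0], bounds[1], size=(256, X.shape[1]))
        mu, sd = gp.predict(cand, return_std=True)
        x_next = cand[np.argmin(mu - 2.0 * sd)]  # lower-confidence-bound acquisition
        X, y = np.vstack([X, x_next]), np.append(y, f(x_next))
    return y.min()

def kernel_search(f, bounds, latent_dim=2, n_kernels=10):
    """Auxiliary optimization over the latent kernel space (random search for brevity)."""
    rng = np.random.default_rng(1)
    scores = [(bbo_with_kernel(f, decode_kernel(z), bounds), z)
              for z in rng.normal(size=(n_kernels, latent_dim))]
    return min(scores, key=lambda s: s[0])

# Example: pick the kernel best suited to a toy 2-D objective over [0, 10]^2.
bounds = (np.zeros(2), 10.0 * np.ones(2))
best_value, best_z = kernel_search(lambda x: np.sum((x - 4.0) ** 2), bounds)
```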
Related papers
- Efficient optimization of expensive black-box simulators via marginal means, with application to neutrino detector design [1.5749416770494706]
We propose a new Black-box Optimization via Marginal Means (BOMM) approach. BOMM uses a new estimator of a global optimizer $\mathbf{x}^*$ that can be efficiently inferred with limited runs in high dimensions. We show that BOMM is not only consistent for optimization, but also has an optimization rate that tempers the "curse of dimensionality" faced by existing methods.
arXiv Detail & Related papers (2025-08-03T16:44:05Z) - Equation Discovery with Bayesian Spike-and-Slab Priors and Efficient Kernels [57.46832672991433]
We propose a novel equation discovery method based on Kernel learning and BAyesian Spike-and-Slab priors (KBASS)
We use kernel regression to estimate the target function, which is flexible, expressive, and more robust to data sparsity and noise.
We develop an expectation-propagation expectation-maximization algorithm for efficient posterior inference and function estimation.
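As a hedged illustration of the function-estimation step only, kernel (ridge) regression can recover a smooth estimate of the target from sparse, noisy samples; the spike-and-slab operator selection and the EP-EM inference of the paper are not shown, and all values below are made up:

```python
# Hedged sketch of the function-estimation step only: kernel (ridge) regression of
# the target function from sparse, noisy observations.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
t = rng.uniform(0, 10, size=(40, 1))                # sparse sample locations
u = np.sin(t).ravel() + 0.05 * rng.normal(size=40)  # noisy observations of u(t)

model = KernelRidge(kernel="rbf", alpha=1e-2, gamma=0.5).fit(t, u)
t_grid = np.linspace(0, 10, 200).reshape(-1, 1)
u_hat = model.predict(t_grid)                       # smooth estimate used downstream
```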
arXiv Detail & Related papers (2023-10-09T03:55:09Z) - Conditional mean embeddings and optimal feature selection via positive
definite kernels [0.0]
We consider operator-theoretic approaches to conditional mean embeddings (CME).
Our results combine a spectral analysis-based optimization scheme with the use of kernels, processes, and constructive learning algorithms.
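For context, the standard empirical CME estimator with an RBF kernel looks as follows; this is the textbook construction, not the spectral optimization scheme of the paper, and the helper names are illustrative:

```python
# Textbook empirical conditional mean embedding (CME) with RBF kernels; a generic
# illustration, not the feature-selection scheme of the paper.
import numpy as np

def rbf_gram(A, B, gamma=1.0):
    """Gram matrix with entries k(a, b) = exp(-gamma * ||a - b||^2)."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * d2)

def cme_conditional_mean(X, Y, x_query, lam=1e-3, gamma=1.0):
    """Approximate E[Y | X = x_query] via the empirical CME weights
    w(x) = (K_X + n*lam*I)^{-1} k_X(x); the embedding itself is sum_i w_i k(y_i, .)."""
    n = X.shape[0]
    K = rbf_gram(X, X, gamma)
    w = np.linalg.solve(K + n * lam * np.eye(n), rbf_gram(X, x_query, gamma))
    return Y.T @ w  # applying the weights directly to Y gives a simple mean readout
```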
arXiv Detail & Related papers (2023-05-14T08:29:15Z) - Neural-BO: A Black-box Optimization Algorithm using Deep Neural Networks [12.218039144209017]
We propose a novel black-box optimization algorithm where the black-box function is modeled using a neural network.
Our algorithm does not need a Bayesian neural network to estimate predictive uncertainty and is therefore computationally favorable.
arXiv Detail & Related papers (2023-03-03T02:53:56Z) - Target-based Surrogates for Stochastic Optimization [26.35752393302125]
We consider minimizing functions for which it is expensive to compute the (possibly stochastic) gradient.
Such functions are prevalent in reinforcement learning, imitation learning, and adversarial training.
Our framework allows the use of standard optimization algorithms to construct surrogates which can be minimized efficiently.
arXiv Detail & Related papers (2023-02-06T08:08:34Z) - Tree ensemble kernels for Bayesian optimization with known constraints
over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
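A hedged sketch of the general idea: use a random forest's per-tree spread as an uncertainty proxy inside a UCB-style acquisition evaluated over enumerated candidates. The paper's own uncertainty quantification and mixed-integer acquisition optimization are not reproduced here:

```python
# Illustrative tree-ensemble surrogate for black-box minimization; per-tree
# disagreement stands in for model uncertainty.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def propose_next(X, y, candidates, kappa=2.0):
    """Pick the next point to evaluate (minimization) from a candidate set."""
    forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    per_tree = np.stack([t.predict(candidates) for t in forest.estimators_])
    mean, std = per_tree.mean(axis=0), per_tree.std(axis=0)
    return candidates[np.argmin(mean - kappa * std)]  # exploit mean, reward disagreement
```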
arXiv Detail & Related papers (2022-07-02T16:59:37Z) - Non-smooth Bayesian Optimization in Tuning Problems [5.768843113172494]
Building surrogate models is one common approach when we attempt to learn unknown black-box functions.
We propose a novel additive Gaussian process model called clustered Gaussian process (cGP), where the additive components are induced by clustering.
In the examples we studied, performance improved by as much as 90% across repeated experiments.
arXiv Detail & Related papers (2021-09-15T20:22:09Z) - Bayesian Optimistic Optimisation with Exponentially Decaying Regret [58.02542541410322]
The current practical BO algorithms have regret bounds ranging from $\mathcal{O}(\frac{\log N}{\sqrt{N}})$ to $\mathcal{O}(e^{-\sqrt{N}})$, where $N$ is the number of evaluations.
This paper explores the possibility of improving the regret bound in the noiseless setting by intertwining concepts from BO and tree-based optimistic optimisation.
We propose the BOO algorithm, a first practical approach which can achieve an exponential regret bound of order $\mathcal{O}(N^{-\sqrt{N}})$.
arXiv Detail & Related papers (2021-05-10T13:07:44Z) - Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box
Optimization Framework [100.36569795440889]
This work studies zeroth-order (ZO) optimization, which does not require first-order gradient information.
We show that with a graceful design of coordinate importance sampling, the proposed ZO optimization method is efficient both in terms of complexity and function query cost.
arXiv Detail & Related papers (2020-12-21T17:29:58Z) - A Primer on Zeroth-Order Optimization in Signal Processing and Machine
Learning [95.85269649177336]
ZO optimization iteratively performs three major steps: gradient estimation, descent direction computation, and solution update.
We demonstrate promising applications of ZO optimization, such as evaluating and generating explanations from black-box deep learning models, and efficient online sensor management.
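A minimal ZO-SGD sketch showing those three steps with a two-point random-direction gradient estimator; this is a generic illustration, not the specific estimators or applications discussed in the paper:

```python
# Generic zeroth-order SGD: gradient estimation from queries, descent direction,
# solution update.
import numpy as np

def zo_sgd(f, x0, steps=200, lr=0.05, mu=1e-3, n_dirs=10, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        # 1) gradient estimation from function queries only
        g = np.zeros_like(x)
        for _ in range(n_dirs):
            u = rng.normal(size=x.shape)
            g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
        g /= n_dirs
        # 2) descent direction (plain negative estimated gradient here)
        d = -g
        # 3) solution update
        x += lr * d
    return x

# Example: minimize a quadratic using only function evaluations.
x_star = zo_sgd(lambda x: np.sum((x - 3.0) ** 2), x0=np.zeros(5))
```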
arXiv Detail & Related papers (2020-06-11T06:50:35Z) - Incorporating Expert Prior in Bayesian Optimisation via Space Warping [54.412024556499254]
In large search spaces, the algorithm passes through several low-function-value regions before reaching the optimum of the function.
One approach to subside this cold start phase is to use prior knowledge that can accelerate the optimisation.
In this paper, we represent the prior knowledge about the function optimum through a prior distribution.
The prior distribution is then used to warp the search space so that it expands around the high-probability region of the function optimum and shrinks around the low-probability regions.
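One simple way to realize such a warping (a hedged sketch under assumed Gaussian-prior form, not the paper's construction) is to push uniform samples through the inverse CDF of a prior on the optimum's location, so candidate points concentrate in the high-probability region:

```python
# Warp uniform candidates with the inverse CDF of a Gaussian prior on the optimum.
import numpy as np
from scipy.stats import norm

def warp_samples(n, prior_mean, prior_std, low, high, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=(n, len(prior_mean)))        # uniform in the unit cube
    x = norm.ppf(u, loc=prior_mean, scale=prior_std)  # dense near the prior mean
    return np.clip(x, low, high)                      # keep points inside the search box

# Example: a 2-D box [0, 10]^2 with an expert prior that the optimum is near (7, 2).
cands = warp_samples(500, prior_mean=np.array([7.0, 2.0]),
                     prior_std=np.array([1.0, 1.0]), low=0.0, high=10.0)
```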
arXiv Detail & Related papers (2020-03-27T06:18:49Z)