Optimizing Through Change: Bounds and Recommendations for Time-Varying Bayesian Optimization Algorithms
- URL: http://arxiv.org/abs/2501.18963v1
- Date: Fri, 31 Jan 2025 08:49:33 GMT
- Title: Optimizing Through Change: Bounds and Recommendations for Time-Varying Bayesian Optimization Algorithms
- Authors: Anthony Bardou, Patrick Thiran
- Abstract summary: We propose the first analysis that bounds the cumulative regret of TVBO algorithms under mild and realistic assumptions only.
Based on this analysis, we formulate recommendations for TVBO algorithms and show how an algorithm (BOLT) that follows them performs better than the state of the art in TVBO.
- Score: 5.224009487402768
- Abstract: Time-Varying Bayesian Optimization (TVBO) is the go-to framework for optimizing a time-varying, expensive, noisy black-box function. However, most of the solutions proposed so far either rely on unrealistic assumptions on the nature of the objective function or do not offer any theoretical guarantees. We propose the first analysis that asymptotically bounds the cumulative regret of TVBO algorithms under mild and realistic assumptions only. In particular, we provide an algorithm-independent lower regret bound and an upper regret bound that holds for a large class of TVBO algorithms. Based on this analysis, we formulate recommendations for TVBO algorithms and show how an algorithm (BOLT) that follows them performs better than the state of the art in TVBO through experiments on synthetic and real-world problems.
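To make the setting concrete, here is a minimal sketch of a generic TVBO loop in Python: GP-UCB over a spatio-temporal product kernel, so stale observations are discounted through the temporal length-scale. This is an illustrative toy, not the authors' BOLT algorithm; the objective, kernel, and exploration weight are all assumptions.

```python
# Minimal time-varying Bayesian optimization (TVBO) loop: GP-UCB with a
# spatio-temporal kernel. Illustrative sketch, NOT the paper's BOLT
# algorithm; objective, kernel, and beta are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def f(x, t):
    """Toy time-varying objective: the optimum drifts with t."""
    return -(x - 0.5 - 0.3 * np.sin(0.1 * t)) ** 2

X_cand = np.linspace(0.0, 1.0, 200)           # candidate points in [0, 1]
data = []                                     # observed ((x, t), y) pairs
kernel = RBF(length_scale=[0.2, 5.0])         # space and time length-scales
beta = 2.0                                    # UCB exploration weight

for t in range(30):
    if data:
        Z = np.array([zt for zt, _ in data])  # inputs are (x, t) pairs
        y = np.array([yv for _, yv in data])
        gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-2, optimizer=None)
        gp.fit(Z, y)
        # Score candidates at the CURRENT time t only: stale data is
        # automatically down-weighted by the temporal kernel component.
        Zq = np.column_stack([X_cand, np.full_like(X_cand, t)])
        mu, sd = gp.predict(Zq, return_std=True)
        x_next = X_cand[np.argmax(mu + beta * sd)]
    else:
        x_next = rng.uniform(0.0, 1.0)
    y_obs = f(x_next, t) + 0.05 * rng.normal()
    data.append(((x_next, t), y_obs))
```

The temporal length-scale controls how quickly old observations are forgotten, which is precisely the trade-off a regret analysis of TVBO has to quantify.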
Related papers
- Principled Preferential Bayesian Optimization [22.269732173306192]
We study the problem of preferential Bayesian optimization (BO), where we aim to optimize a black-box function with only preference feedback over pairs of candidate solutions.
An optimistic algorithm with an efficient computational method is then developed to solve the problem.
arXiv Detail & Related papers (2024-02-08T02:57:47Z)
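As a toy illustration of the preference-only setting above, the following Python sketch optimizes over a finite candidate set using nothing but pairwise duels, fitting per-candidate utilities with a Bradley-Terry logistic model. The objective, noise scale, and learning rate are assumptions made for illustration; this is not the paper's optimistic algorithm.

```python
# Toy preferential optimization on a finite candidate set: only pairwise
# "x_i preferred over x_j" feedback is observed, and per-candidate
# utilities are fit with a Bradley-Terry logistic model.
import numpy as np

rng = np.random.default_rng(1)
X = np.linspace(0.0, 1.0, 25)
true_util = -(X - 0.62) ** 2                  # hidden black-box utility
u = np.zeros_like(X)                          # learned utilities

def duel(i, j):
    """Noisy preference feedback: P(i beats j) follows Bradley-Terry."""
    p = 1.0 / (1.0 + np.exp(-(true_util[i] - true_util[j]) / 0.05))
    return rng.random() < p

duels = []                                    # stored as (winner, loser)
for _ in range(200):
    i, j = rng.choice(len(X), size=2, replace=False)
    duels.append((i, j) if duel(i, j) else (j, i))
    # One gradient-ascent sweep on the Bradley-Terry log-likelihood.
    for w, l in duels:
        p = 1.0 / (1.0 + np.exp(-(u[w] - u[l])))
        u[w] += 0.1 * (1.0 - p)
        u[l] -= 0.1 * (1.0 - p)

print("estimated best x:", X[np.argmax(u)])   # should approach 0.62
```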
- LABCAT: Locally adaptive Bayesian optimization using principal-component-aligned trust regions [0.0]
We propose the LABCAT algorithm, which extends trust-region-based BO.
We show that the algorithm outperforms several state-of-the-art BO and other black-box optimization algorithms.
arXiv Detail & Related papers (2023-11-19T13:56:24Z)
- Parallel Bayesian Optimization Using Satisficing Thompson Sampling for Time-Sensitive Black-Box Optimization [0.0]
We propose satisficing Thompson sampling-based parallel BO approaches, including synchronous and asynchronous versions.
We shift the target from an optimal solution to a satisficing solution that is easier to learn.
The effectiveness of the proposed methods is demonstrated on a fast-charging design problem of Lithium-ion batteries.
arXiv Detail & Related papers (2023-10-19T07:03:51Z)
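The core satisficing idea above can be sketched in a few lines: sample from the GP posterior as in Thompson sampling, but accept any candidate whose sampled value clears a satisficing threshold rather than chasing the argmax. This is a sequential toy with the threshold assumed known; the paper's synchronous and asynchronous parallel machinery is omitted.

```python
# Satisficing Thompson sampling, sequential version for clarity: accept
# any point whose posterior sample clears a threshold. A hedged sketch;
# objective, kernel, and threshold are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)
f = lambda x: np.sin(3.0 * x) * np.exp(-x)    # toy objective on [0, 2]
X_cand = np.linspace(0.0, 2.0, 300)
threshold = 0.5                               # satisficing level (assumed known)
X_obs, y_obs = [0.1, 1.9], [f(0.1), f(1.9)]

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=RBF(0.3), alpha=1e-3, optimizer=None)
    gp.fit(np.array(X_obs).reshape(-1, 1), np.array(y_obs))
    seed = int(rng.integers(1 << 31))
    sample = gp.sample_y(X_cand.reshape(-1, 1), random_state=seed).ravel()
    ok = np.where(sample >= threshold)[0]
    # Satisficing: any point predicted to clear the threshold will do;
    # fall back to the sample's argmax when none qualifies.
    idx = rng.choice(ok) if len(ok) else np.argmax(sample)
    X_obs.append(X_cand[idx])
    y_obs.append(f(X_cand[idx]) + 0.01 * rng.normal())
```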
- Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit analogies between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z)
- Exploring the Algorithm-Dependent Generalization of AUPRC Optimization with List Stability [107.65337427333064]
Optimization of the Area Under the Precision-Recall Curve (AUPRC) is a crucial problem for machine learning.
In this work, we present the first trial in the algorithm-dependent generalization of AUPRC optimization.
Experiments on three image retrieval datasets speak to the effectiveness and soundness of our framework.
arXiv Detail & Related papers (2022-09-27T09:06:37Z)
- Batch Bayesian optimisation via density-ratio estimation with guarantees [26.052368583196426]
We present a theoretical analysis of BORE's regret and an extension of the algorithm with improved uncertainty estimates.
We also show that BORE can be naturally extended to a batch optimisation setting by recasting the problem as approximate Bayesian inference.
arXiv Detail & Related papers (2022-09-22T00:42:18Z)
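A minimal sketch of the density-ratio idea behind BORE: label the top-gamma fraction of observations as positives, fit a probabilistic classifier, and use its class-1 probability as the acquisition function. The objective, classifier, and gamma are illustrative assumptions; the batch extension and improved uncertainty estimates from the paper are not reproduced here.

```python
# BORE-style step: Bayesian optimisation via density-ratio estimation.
# The classifier's class-1 probability stands in for the density ratio
# between "good" and "bad" regions. Sketch of the core idea only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
f = lambda x: -(x - 0.3) ** 2 - 0.05 * np.cos(20 * x)   # toy objective
X = rng.uniform(0, 1, 20)
y = f(X)
gamma = 0.25                                  # top-quantile cut

for _ in range(20):
    tau = np.quantile(y, 1.0 - gamma)
    labels = (y >= tau).astype(int)           # 1 = "good" region, 0 = rest
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X.reshape(-1, 1), labels)
    cand = rng.uniform(0, 1, 500)
    # The class-1 probability is a monotone proxy for the density ratio,
    # hence for an improvement-based acquisition function.
    scores = clf.predict_proba(cand.reshape(-1, 1))[:, 1]
    x_next = cand[np.argmax(scores)]
    X = np.append(X, x_next)
    y = np.append(y, f(x_next))

print("best x found:", X[np.argmax(y)])       # should approach x = 0.3
```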
- Regret Bounds for Expected Improvement Algorithms in Gaussian Process Bandit Optimization [63.8557841188626]
The expected improvement (EI) algorithm is one of the most popular strategies for optimization under uncertainty.
We propose a variant of EI with a standard incumbent defined via the GP predictive mean.
We show that our algorithm converges, and achieves a cumulative regret bound of $\mathcal{O}(\gamma_T \sqrt{T})$.
arXiv Detail & Related papers (2022-03-15T13:17:53Z)
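The variant described above is easy to sketch: standard expected improvement, except the incumbent is the maximum of the GP predictive mean rather than the best noisy observation. Kernel, noise level, objective, and grid are assumptions for illustration.

```python
# Expected improvement with the incumbent defined via the GP predictive
# mean (rather than the best noisy observation), mirroring the variant
# the paper analyses. Illustrative sketch.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)
f = lambda x: np.sin(5 * x) + 0.5 * x
Xg = np.linspace(0, 2, 400).reshape(-1, 1)
X_obs = list(rng.uniform(0, 2, 3))
y_obs = [f(x) + 0.1 * rng.normal() for x in X_obs]

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=RBF(0.3), alpha=1e-2, optimizer=None)
    gp.fit(np.array(X_obs).reshape(-1, 1), np.array(y_obs))
    mu, sd = gp.predict(Xg, return_std=True)
    # "Standard incumbent": best predictive mean, robust to noisy ys.
    incumbent = mu.max()
    z = (mu - incumbent) / np.maximum(sd, 1e-9)
    ei = (mu - incumbent) * norm.cdf(z) + sd * norm.pdf(z)
    x_next = float(Xg[np.argmax(ei), 0])
    X_obs.append(x_next)
    y_obs.append(f(x_next) + 0.1 * rng.normal())
```

Using the predictive mean as the incumbent avoids the pathology where a single lucky noisy observation freezes exploration, which is what makes the regret analysis tractable.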
- Bayesian Algorithm Execution: Estimating Computable Properties of Black-box Functions Using Mutual Information [78.78486761923855]
In many real-world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations.
We present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output.
On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm.
arXiv Detail & Related papers (2021-04-19T17:22:11Z)
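A heavily simplified sketch of the InfoBAX idea for the simplest possible "algorithm" (argmin over a grid): estimate the expected information gain about the algorithm's output by running it on GP posterior samples and measuring how much conditioning on each sampled output shrinks the predictive entropy. Sample counts, kernel, and the refit-per-sample shortcut are assumptions made for brevity, not the paper's implementation.

```python
# InfoBAX-style acquisition for the "grid argmin" algorithm:
# EIG(x) = H[y_x | D] - E_o[H[y_x | D, o]], Monte Carlo over GP
# posterior samples, where output o = (sampled argmin, its value).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(5)
f = lambda x: (x - 1.3) ** 2
Xg = np.linspace(0, 2, 100).reshape(-1, 1)
X_obs, y_obs = [[0.2], [1.9]], [f(0.2), f(1.9)]

def entropy(sd):
    """Differential entropy of a Gaussian with std dev sd."""
    return 0.5 * np.log(2 * np.pi * np.e * sd ** 2)

for _ in range(8):
    gp = GaussianProcessRegressor(kernel=RBF(0.4), alpha=1e-4, optimizer=None)
    gp.fit(np.array(X_obs), np.array(y_obs))
    _, sd = gp.predict(Xg, return_std=True)
    h_prior = entropy(np.maximum(sd, 1e-9))
    # Run the "algorithm" (grid argmin) on each posterior sample,
    # condition the GP on its output, and re-measure entropy.
    h_post = np.zeros(len(Xg))
    samples = gp.sample_y(Xg, n_samples=10, random_state=0)
    for j in range(samples.shape[1]):
        k = int(np.argmin(samples[:, j]))
        gp_j = GaussianProcessRegressor(kernel=RBF(0.4), alpha=1e-4,
                                        optimizer=None)
        gp_j.fit(np.array(X_obs + [list(Xg[k])]),
                 np.array(y_obs + [samples[k, j]]))
        _, sd_j = gp_j.predict(Xg, return_std=True)
        h_post += entropy(np.maximum(sd_j, 1e-9)) / samples.shape[1]
    x_next = float(Xg[np.argmax(h_prior - h_post), 0])
    X_obs.append([x_next])
    y_obs.append(f(x_next))
```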
- An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z)
- Sub-linear Regret Bounds for Bayesian Optimisation in Unknown Search Spaces [63.22864716473051]
We propose a novel BO algorithm which expands (and shifts) the search space over iterations.
We show theoretically that for both our algorithms, the cumulative regret grows at sub-linear rates.
arXiv Detail & Related papers (2020-09-05T14:24:40Z)
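A toy version of the expanding-search-space idea from the entry above: run GP-UCB inside a box that is re-centred on the incumbent and grown each iteration, so no global bounds need to be known up front. The expansion schedule and kernel here are assumptions, not the paper's analysed rates.

```python
# BO with an expanding search space: start from a small box around an
# initial guess and grow (and re-centre) it over iterations. Sketch only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(6)
f = lambda x: -(x - 4.2) ** 2                 # optimum far outside the start box
lo, hi = -1.0, 1.0                            # initial (too small) box
X_obs, y_obs = [0.0], [f(0.0)]

for t in range(1, 25):
    gp = GaussianProcessRegressor(kernel=RBF(1.0), alpha=1e-3, optimizer=None)
    gp.fit(np.array(X_obs).reshape(-1, 1), np.array(y_obs))
    cand = np.linspace(lo, hi, 500).reshape(-1, 1)
    mu, sd = gp.predict(cand, return_std=True)
    x_next = float(cand[np.argmax(mu + 2.0 * sd), 0])  # UCB in current box
    X_obs.append(x_next)
    y_obs.append(f(x_next) + 0.01 * rng.normal())
    # Expand the box around the incumbent at a sub-linear rate in t.
    c = X_obs[int(np.argmax(y_obs))]
    r = 1.0 + np.sqrt(t)
    lo, hi = c - r, c + r
```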
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.