Asymptotic Performance of Time-Varying Bayesian Optimization
- URL: http://arxiv.org/abs/2505.13012v1
- Date: Mon, 19 May 2025 11:55:02 GMT
- Title: Asymptotic Performance of Time-Varying Bayesian Optimization
- Authors: Anthony Bardou, Patrick Thiran
- Abstract summary: We ask whether the instantaneous regret of a TVBO algorithm can vanish asymptotically, and if so, when. We derive sufficient conditions for a TVBO algorithm to have the no-regret property. Our analysis covers all major classes of stationary kernel functions.
- Score: 5.224009487402768
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Time-Varying Bayesian Optimization (TVBO) is the go-to framework for optimizing a time-varying black-box objective function that may be noisy and expensive to evaluate. Is it possible for the instantaneous regret of a TVBO algorithm to vanish asymptotically, and if so, when? We answer this question of great theoretical importance by providing algorithm-independent lower regret bounds and upper regret bounds for TVBO algorithms, from which we derive sufficient conditions for a TVBO algorithm to have the no-regret property. Our analysis covers all major classes of stationary kernel functions.
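To make the setting concrete, here is a minimal sketch of a time-varying BO loop in the spirit of TV-GP-UCB: a GP surrogate with a temporal forgetting factor scores candidates by an upper confidence bound, and instantaneous regret is measured against a drifting optimum. The forgetting factor `eps`, the discount form `(1-eps)^(age/2)`, and the toy objective are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def rbf(a, b, ls=0.3):
    """Squared-exponential (RBF) kernel on scalar inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def tv_gp_ucb(f, grid, T=30, eps=0.05, noise=1e-2, beta=2.0, seed=0):
    """Toy TV-GP-UCB-style loop; returns the instantaneous regret at each step."""
    rng = np.random.default_rng(seed)
    X, y, ts, regrets = [], [], [], []
    for t in range(T):
        if X:
            Xa, ya, ta = np.array(X), np.array(y), np.array(ts)
            # Spatial kernel times a temporal discount: older pairs are down-weighted.
            K = rbf(Xa, Xa) * (1 - eps) ** (np.abs(ta[:, None] - ta[None, :]) / 2)
            Ks = rbf(grid, Xa) * (1 - eps) ** ((t - ta)[None, :] / 2)
            Kinv = np.linalg.inv(K + noise * np.eye(len(Xa)))
            mu = Ks @ Kinv @ ya
            var = np.clip(1.0 - np.sum((Ks @ Kinv) * Ks, axis=1), 1e-12, None)
        else:
            mu, var = np.zeros(len(grid)), np.ones(len(grid))
        x = grid[np.argmax(mu + beta * np.sqrt(var))]  # UCB acquisition
        X.append(x)
        y.append(f(x, t) + noise * rng.standard_normal())
        ts.append(t)
        # Instantaneous regret: gap to the current best value on the grid.
        regrets.append(max(f(g, t) for g in grid) - f(x, t))
    return regrets

# A slowly drifting objective: the optimum moves with t.
f = lambda x, t: -(x - 0.5 - 0.3 * np.sin(0.1 * t)) ** 2
grid = np.linspace(0, 1, 101)
regs = tv_gp_ucb(f, grid)
```

The paper's question is whether sequences like `regs` can vanish asymptotically; the sketch only illustrates what "instantaneous regret under drift" means, not the conditions under which it vanishes.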
Related papers
- Optimizing Through Change: Bounds and Recommendations for Time-Varying Bayesian Optimization Algorithms [5.224009487402768]
We propose the first analysis that bounds the cumulative regret of TVBO algorithms under mild and realistic assumptions only. Based on this analysis, we formulate recommendations for TVBO algorithms and show how an algorithm (BOLT) that follows them performs better than the state-of-the-art of TVBO.
arXiv Detail & Related papers (2025-01-31T08:49:33Z) - This Too Shall Pass: Removing Stale Observations in Dynamic Bayesian Optimization [4.6481096949408105]
We build a DBO algorithm able to remove irrelevant observations from its dataset on the fly.
We establish the superiority of W-DBO, which outperforms state-of-the-art methods by a comfortable margin.
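Deleting irrelevant observations on the fly can be sketched with a simple recency-based rule: keep an observation only while its temporal relevance exceeds a threshold. The decay form, `eps`, and the threshold below are illustrative assumptions; W-DBO's actual criterion quantifies each sample's predictive relevance rather than its age alone.

```python
import numpy as np

def prune_stale(times, t_now, eps=0.05, threshold=0.1):
    """Return indices of observations whose temporal relevance is still high.

    Relevance decays geometrically with age: (1 - eps)^(age / 2).
    """
    ages = t_now - np.asarray(times, dtype=float)
    relevance = (1 - eps) ** (ages / 2)
    return np.flatnonzero(relevance >= threshold)

# Observations collected at t = 0, 40, 80, 99; current time t = 100.
idx = prune_stale([0, 40, 80, 99], t_now=100)
# The oldest sample (age 100) falls below the threshold and is dropped.
```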
arXiv Detail & Related papers (2024-05-23T13:22:59Z) - Stochastic Approximation with Delayed Updates: Finite-Time Rates under Markovian Sampling [73.5602474095954]
We study the non-asymptotic performance of approximation schemes with delayed updates under Markovian sampling.
Our theoretical findings shed light on the finite-time effects of delays for a broad class of algorithms.
arXiv Detail & Related papers (2024-02-19T03:08:02Z) - Accelerated First-Order Optimization under Nonlinear Constraints [61.98523595657983]
We exploit connections between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms. An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z) - Batch Bayesian optimisation via density-ratio estimation with guarantees [26.052368583196426]
We present a theoretical analysis of BORE's regret and an extension of the algorithm with improved uncertainty estimates.
We also show that BORE can be naturally extended to a batch optimisation setting by recasting the problem as approximate Bayesian inference.
arXiv Detail & Related papers (2022-09-22T00:42:18Z) - Distributed stochastic optimization with large delays [59.95552973784946]
One of the most widely used methods for solving large-scale optimization problems is distributed asynchronous stochastic gradient descent (DASGD).
We show that DASGD converges to a global optimum under mild delay assumptions.
arXiv Detail & Related papers (2021-07-06T21:59:49Z) - Bayesian Optimistic Optimisation with Exponentially Decaying Regret [58.02542541410322]
The current practical BO algorithms have regret bounds ranging from $\mathcal{O}(\frac{\log N}{\sqrt{N}})$ to $\mathcal{O}(e^{-\sqrt{N}})$, where $N$ is the number of evaluations.
This paper explores the possibility of improving the regret bound in the noiseless setting by intertwining concepts from BO and tree-based optimistic optimisation.
We propose the BOO algorithm, a first practical approach which can achieve an exponential regret bound of order $\mathcal{O}(N^{-\sqrt{N}})$.
arXiv Detail & Related papers (2021-05-10T13:07:44Z) - Bayesian Algorithm Execution: Estimating Computable Properties of
Black-box Functions Using Mutual Information [78.78486761923855]
In many real world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations.
We present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output.
On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm.
arXiv Detail & Related papers (2021-04-19T17:22:11Z) - An Index-based Deterministic Asymptotically Optimal Algorithm for
Constrained Multi-armed Bandit Problems [0.0]
For the model of constrained multi-armed bandit, we show that there exists an index-based deterministic asymptotically optimal algorithm.
We provide a finite-time bound on the probability of optimality, given as $1 - O(|A|Te^{-T})$, where $T$ is the horizon size and $A$ is the set of arms in the bandit.
arXiv Detail & Related papers (2020-07-29T01:54:22Z) - Time-varying Gaussian Process Bandit Optimization with Non-constant
Evaluation Time [93.6788993843846]
We propose a novel time-varying Bayesian optimization algorithm that can effectively handle the non-constant evaluation time.
Our bound elucidates that a pattern of the evaluation time sequence can hugely affect the difficulty of the problem.
arXiv Detail & Related papers (2020-03-10T13:28:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.