Counterfactual Credit Guided Bayesian Optimization
- URL: http://arxiv.org/abs/2510.04676v1
- Date: Mon, 06 Oct 2025 10:34:50 GMT
- Title: Counterfactual Credit Guided Bayesian Optimization
- Authors: Qiyu Wei, Haowei Wang, Richard Allmendinger, Mauricio A. Álvarez
- Abstract summary: We introduce Counterfactual Credit Guided Bayesian Optimization (CCGBO). CCGBO explicitly quantifies the contribution of individual historical observations through counterfactual credit. We prove that CCGBO retains sublinear regret; empirically, it consistently reduces simple regret and accelerates convergence to the global optimum.
- Score: 10.457185142015442
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bayesian optimization has emerged as a prominent methodology for optimizing expensive black-box functions by leveraging Gaussian process surrogates, which focus on capturing the global characteristics of the objective function. However, in numerous practical scenarios, the primary objective is not to construct an exhaustive global surrogate, but rather to quickly pinpoint the global optimum. Due to the aleatoric nature of the sequential optimization problem and its dependence on the quality of the surrogate model and the initial design, it is restrictive to assume that all observed samples contribute equally to the discovery of the optimum in this context. In this paper, we introduce Counterfactual Credit Guided Bayesian Optimization (CCGBO), a novel framework that explicitly quantifies the contribution of individual historical observations through counterfactual credit. By incorporating counterfactual credit into the acquisition function, our approach can selectively allocate resources in areas where optimal solutions are most likely to occur. We prove that CCGBO retains sublinear regret. Empirical evaluations on various synthetic and real-world benchmarks demonstrate that CCGBO consistently reduces simple regret and accelerates convergence to the global optimum.
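The abstract does not spell out how counterfactual credit is computed or how it enters the acquisition function, so the following is a minimal, hypothetical sketch of the idea in Python: each observation's credit is taken to be the leave-one-out shift it causes in the surrogate's predicted optimum, and Expected Improvement is then re-weighted toward high-credit regions. The credit definition, the weighting scheme, and every name below are illustrative assumptions, not the paper's formulation.

```python
# Illustrative sketch only: the credit definition and acquisition weighting
# below are assumptions, not the CCGBO formulation from the paper.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def fit_gp(X, y):
    """Standard GP surrogate with a Matern-5/2 kernel."""
    return GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)

def counterfactual_credit(X, y):
    """One plausible reading of 'counterfactual credit': how much the
    surrogate's predicted optimum shifts when observation i is removed."""
    best_full = fit_gp(X, y).predict(X).min()
    credit = np.empty(len(X))
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        best_loo = fit_gp(X[mask], y[mask]).predict(X).min()
        credit[i] = abs(best_full - best_loo)
    return credit / (credit.sum() + 1e-12)  # normalize to a distribution

def credit_weighted_ei(X_cand, X, y, credit, ls=0.2):
    """Expected Improvement (minimization) re-weighted by a kernel-smoothed
    credit density, biasing the search toward high-credit regions."""
    mu, sigma = fit_gp(X, y).predict(X_cand, return_std=True)
    gap = y.min() - mu
    z = gap / np.maximum(sigma, 1e-12)
    ei = gap * norm.cdf(z) + sigma * norm.pdf(z)
    d2 = ((X_cand[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    weight = (np.exp(-d2 / (2 * ls ** 2)) * credit).sum(axis=1)
    return ei * (1.0 + weight)

# One BO step: score random candidates and pick the next query point.
rng = np.random.default_rng(0)
f = lambda x: ((x - 0.3) ** 2).sum(axis=1)  # toy objective
X = rng.uniform(size=(8, 2))
y = f(X)
X_cand = rng.uniform(size=(256, 2))
scores = credit_weighted_ei(X_cand, X, y, counterfactual_credit(X, y))
x_next = X_cand[scores.argmax()]
```

In a full loop the credit would be re-estimated after every new observation; the leave-one-out refits cost O(n) GP fits per step, which is affordable precisely because Bayesian optimization targets expensive objectives with small n.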
Related papers
- Optimizing the Unknown: Black Box Bayesian Optimization with Energy-Based Model and Reinforcement Learning [42.508822373669936]
Black-Box Optimization (BBO) has achieved success across various scientific and engineering domains.
We propose the Reinforced Energy-Based Model for Bayesian Optimization (REBMBO), which integrates Gaussian Processes (GP) for local guidance with an Energy-Based Model (EBM) to capture global structural information.
We conduct extensive experiments on synthetic and real-world benchmarks, confirming the superior performance of REBMBO.
arXiv Detail & Related papers (2025-10-22T12:36:49Z)
- A Novel Unified Parametric Assumption for Nonconvex Optimization [53.943470475510196]
Nonconvex optimization is central to machine learning, but the general nonconvex framework yields weak convergence guarantees that are too pessimistic compared to practice.
We introduce a novel unified parametric assumption for nonconvex optimization algorithms.
arXiv Detail & Related papers (2025-02-17T21:25:31Z)
- Every Call is Precious: Global Optimization of Black-Box Functions with Unknown Lipschitz Constants [72.47160195925056]
We introduce Every Call is Precious (ECP), a novel approach for global optimization of Lipschitz continuous functions.
ECP eliminates the need to estimate the Lipschitz constant, thereby minimizing additional function evaluations.
ECP guarantees no-regret performance for infinite evaluation budgets.
arXiv Detail & Related papers (2025-02-06T18:34:40Z)
- Nonmyopic Global Optimisation via Approximate Dynamic Programming [14.389086937116582]
We introduce novel nonmyopic acquisition strategies tailored to IDW- and RBF-based global optimisation.
Specifically, we develop dynamic programming-based paradigms, including rollout and multi-step scenario-based optimisation schemes.
arXiv Detail & Related papers (2024-12-06T09:25:00Z)
- Localized Zeroth-Order Prompt Optimization [54.964765668688806]
We propose a novel algorithm, namely localized zeroth-order prompt optimization (ZOPO).
ZOPO incorporates a Gaussian process derived from the Neural Tangent Kernel into standard zeroth-order optimization for an efficient search of well-performing local optima in prompt optimization.
Remarkably, ZOPO outperforms existing baselines in terms of both the optimization performance and the query efficiency.
arXiv Detail & Related papers (2024-03-05T14:18:15Z)
- A General Framework for User-Guided Bayesian Optimization [51.96352579696041]
We propose ColaBO, the first Bayesian-principled framework for incorporating prior beliefs beyond the typical kernel structure.
We empirically demonstrate ColaBO's ability to substantially accelerate optimization when the prior information is accurate, and to retain approximately default performance when it is misleading.
arXiv Detail & Related papers (2023-11-24T18:27:26Z)
- Pseudo-Bayesian Optimization [7.556071491014536]
We study an axiomatic framework that elicits the minimal requirements to guarantee black-box optimization convergence.
We show how using simple local regression, and a suitable "randomized prior" construction to quantify uncertainty, not only guarantees convergence but also consistently outperforms state-of-the-art benchmarks.
arXiv Detail & Related papers (2023-10-15T07:55:28Z)
- Deterministic Langevin Unconstrained Optimization with Normalizing Flows [3.988614978933934]
We introduce a global, gradient-free surrogate optimization strategy for black-box functions inspired by the Fokker-Planck and Langevin equations.
We demonstrate superior or competitive progress toward objective optima on standard synthetic test functions.
arXiv Detail & Related papers (2023-10-01T17:46:20Z)
- Dynamic Regularized Sharpness Aware Minimization in Federated Learning: Approaching Global Consistency and Smooth Landscape [59.841889495864386]
In federated learning (FL), a cluster of local clients are chaired under the coordination of a global server.
Clients are prone to overfit on their own optima, which deviate significantly from the global objective.
FedSMOO adopts a dynamic regularizer to steer the local optima towards the global objective.
Our theoretical analysis indicates that FedSMOO achieves a fast $\mathcal{O}(1/T)$ convergence rate with a low generalization bound.
arXiv Detail & Related papers (2023-05-19T10:47:44Z)
- AGGLIO: Global Optimization for Locally Convex Functions [5.221860952360943]
This paper presents AGGLIO (Accelerated Graduated Generalized LInear-model Optimization), a stage-wise, graduated optimization technique that offers provable global convergence for problems that are only locally convex.
AGGLIO can be readily implemented using point as well as mini-batch SGD updates and offers provable convergence as well as strong empirical performance.
arXiv Detail & Related papers (2021-11-06T18:15:56Z)
- Are we Forgetting about Compositional Optimisers in Bayesian Optimisation? [66.39551991177542]
Bayesian optimisation presents a sample-efficient methodology for global optimisation.
Within this framework, a crucial performance-determining subroutine is the maximisation of the acquisition function (see the sketch after this list).
We highlight the empirical advantages of the compositional approach to acquisition function maximisation across 3958 individual experiments.
arXiv Detail & Related papers (2020-12-15T12:18:38Z)
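The last entry above concerns the subroutine shared by every method on this list, including CCGBO: maximising the acquisition function over the search space. Below is a minimal multi-start sketch in Python of the standard baseline (a generic restart strategy, not the compositional formulation that the paper studies):

```python
# Generic multi-start acquisition maximisation: restart a local optimizer
# from several random points and keep the best result. This is a common
# baseline, not the compositional approach studied in the paper above.
import numpy as np
from scipy.optimize import minimize

def maximise_acquisition(acq, bounds, n_starts=16, seed=0):
    """acq maps a 1-D point to a scalar score; bounds is a list of (lo, hi)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    best_x, best_val = None, -np.inf
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)
        # Minimise the negated acquisition with a bound-constrained optimizer.
        res = minimize(lambda x: -acq(x), x0, method="L-BFGS-B", bounds=bounds)
        if -res.fun > best_val:
            best_x, best_val = res.x, -res.fun
    return best_x, best_val

# Example with a toy acquisition surface peaked at (0.5, 0.5).
x_star, v = maximise_acquisition(lambda x: -np.sum((x - 0.5) ** 2), [(0, 1), (0, 1)])
```

Restarting from several random points hedges against the many local maxima of typical acquisition surfaces; the compositional reformulation studied in the paper above is one way to make this inner loop more effective.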