Bayesian Optimization for Unknown Cost-Varying Variable Subsets with No-Regret Costs
- URL: http://arxiv.org/abs/2412.15863v1
- Date: Fri, 20 Dec 2024 13:00:39 GMT
- Title: Bayesian Optimization for Unknown Cost-Varying Variable Subsets with No-Regret Costs
- Authors: Vu Viet Hoang, Quoc Anh Hoang Nguyen, Hung Tran-The
- Abstract summary: We propose a novel algorithm for an extension of the BOCVS problem in which subset costs are random and unknown.
Our algorithm achieves a sub-linear rate in both quality regret and cost regret, addressing the objective of the BOCVS problem more effectively than previous analyses.
- Score: 3.1269598124014264
- Abstract: Bayesian Optimization (BO) is a widely used method for optimizing expensive-to-evaluate black-box functions. Traditional BO assumes that the learner has full control over all query variables without additional constraints. However, in many real-world scenarios, controlling certain query variables may incur costs. Therefore, the learner needs to balance the selection of informative subsets for targeted learning against leaving some variables to be randomly sampled to minimize costs. This problem is known as Bayesian Optimization with cost-varying variable subsets (BOCVS). While the goal of BOCVS is to identify the optimal solution with minimal cost, previous works have only guaranteed finding the optimal solution without considering the total costs incurred. Moreover, these works assume precise knowledge of the cost for each subset, which is often unrealistic. In this paper, we propose a novel algorithm for an extension of the BOCVS problem with random and unknown costs that separates the process into exploration and exploitation phases. The exploration phase filters out low-quality variable subsets, while the exploitation phase leverages high-quality ones. Furthermore, we theoretically demonstrate that our algorithm achieves a sub-linear rate in both quality regret and cost regret, addressing the objective of the BOCVS problem more effectively than previous analyses. Finally, we show that our proposed algorithm outperforms comparable baselines across a wide range of benchmarks.
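Below is a minimal, runnable Python sketch of the explore-then-exploit structure the abstract describes. The toy objective, the candidate subsets, the confidence-bound filter, and the budget split are all illustrative assumptions; the paper's actual algorithm uses a Gaussian-process model and calibrated confidence bounds, which are elided here for brevity.

```python
# Illustrative sketch of a two-phase BOCVS-style loop, NOT the paper's
# exact algorithm: a toy 3-D objective, hand-picked variable subsets,
# and noisy random costs with unknown means.
import numpy as np

rng = np.random.default_rng(0)
DIM = 3

def objective(x):
    # Toy black-box function standing in for the expensive objective.
    return -np.sum((x - 0.6) ** 2) + 0.01 * rng.normal()

# Candidate subsets of controllable variables; each has an unknown mean
# cost (hidden inside sample_cost and only observed noisily).
SUBSETS = [(0,), (0, 1), (0, 1, 2)]
TRUE_MEAN_COST = {(0,): 1.0, (0, 1): 2.0, (0, 1, 2): 5.0}

def sample_cost(subset):
    return max(0.0, TRUE_MEAN_COST[subset] + 0.2 * rng.normal())

def query(subset, x_ctrl):
    # Controlled coordinates are set by the learner; the remaining
    # coordinates are sampled at random, as in the BOCVS setting.
    x = rng.uniform(0, 1, DIM)
    x[list(subset)] = x_ctrl
    return objective(x), sample_cost(subset)

# ---- Phase 1: exploration -- estimate quality and cost per subset ----
n_explore = 15
stats = {s: {"vals": [], "costs": []} for s in SUBSETS}
for _ in range(n_explore):
    for s in SUBSETS:
        v, c = query(s, rng.uniform(0, 1, len(s)))
        stats[s]["vals"].append(v)
        stats[s]["costs"].append(c)

# Filter out low-quality subsets: drop a subset when an upper confidence
# bound on its quality falls below the best lower confidence bound.
def ucb(v): return np.mean(v) + 2 * np.std(v) / np.sqrt(len(v))
def lcb(v): return np.mean(v) - 2 * np.std(v) / np.sqrt(len(v))
best_lcb = max(lcb(stats[s]["vals"]) for s in SUBSETS)
kept = [s for s in SUBSETS if ucb(stats[s]["vals"]) >= best_lcb]

# ---- Phase 2: exploitation -- cheapest surviving subset ----
# (The paper uses a GP-based acquisition here; plain random search
# keeps the sketch short and dependency-free.)
chosen = min(kept, key=lambda s: np.mean(stats[s]["costs"]))
best_val, total_cost = -np.inf, 0.0
for _ in range(50):
    v, c = query(chosen, rng.uniform(0, 1, len(chosen)))
    best_val, total_cost = max(best_val, v), total_cost + c

print(f"kept subsets: {kept}, exploited: {chosen}")
print(f"best value: {best_val:.3f}, total exploitation cost: {total_cost:.1f}")
```

The filter keeps any subset whose quality is statistically indistinguishable from the best, so a cheap-but-good subset survives and is then preferred on cost, which is the trade-off the quality-regret and cost-regret guarantees formalize.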
Related papers
- Training Greedy Policy for Proposal Batch Selection in Expensive Multi-Objective Combinatorial Optimization [52.80408805368928]
We introduce a novel greedy-style subset selection algorithm for batch acquisition.
Our experiments on the red fluorescent proteins show that our proposed method achieves baseline performance using 1.69x fewer queries.
arXiv Detail & Related papers (2024-06-21T05:57:08Z)
- An adaptive approach to Bayesian Optimization with switching costs [0.8246494848934447]
We investigate modifications to Bayesian Optimization for a resource-constrained setting of sequential experimental design.
We adapt two process-constrained batch algorithms to this sequential problem formulation, and propose two new methods: one cost-aware and one cost-ignorant.
arXiv Detail & Related papers (2024-05-14T21:55:02Z)
- Maximum-Likelihood Inverse Reinforcement Learning with Finite-Time Guarantees [56.848265937921354]
Inverse reinforcement learning (IRL) aims to recover the reward function and the associated optimal policy.
Many algorithms for IRL have an inherently nested structure.
We develop a novel single-loop algorithm for IRL that does not compromise reward estimation accuracy.
arXiv Detail & Related papers (2022-10-04T17:13:45Z)
- A Near-Optimal Algorithm for Univariate Zeroth-Order Budget Convex Optimization [4.608510640547952]
We prove near-optimal optimization error guarantees for Dyadic Search.
We show that the classical dependence on the global Lipschitz constant in the error bounds is an artifact of the granularity of the budget.
arXiv Detail & Related papers (2022-08-13T19:57:04Z)
- Multi-Step Budgeted Bayesian Optimization with Unknown Evaluation Costs [28.254408148839644]
We propose a non-myopic acquisition function that generalizes classical expected improvement to the setting of heterogeneous evaluation costs.
Our acquisition function outperforms existing methods in a variety of synthetic and real problems.
arXiv Detail & Related papers (2021-11-12T02:18:26Z)
- Breaking the Sample Complexity Barrier to Regret-Optimal Model-Free Reinforcement Learning [52.76230802067506]
A novel model-free algorithm is proposed to minimize regret in episodic reinforcement learning.
The proposed algorithm employs an early-settled reference update rule, with the aid of two Q-learning sequences.
The design principle of our early-settled variance reduction method might be of independent interest to other RL settings.
arXiv Detail & Related papers (2021-10-09T21:13:48Z)
- A Nonmyopic Approach to Cost-Constrained Bayesian Optimization [10.078368988372247]
We formulate cost-constrained BO as a constrained Markov decision process (CMDP).
We develop an efficient rollout approximation to the optimal CMDP policy that takes both the cost and future iterations into account.
arXiv Detail & Related papers (2021-06-10T22:44:37Z)
- Navigating to the Best Policy in Markov Decision Processes [68.8204255655161]
We investigate the active pure exploration problem in Markov Decision Processes.
The agent sequentially selects actions and, from the resulting system trajectory, aims at identifying the best policy as fast as possible.
arXiv Detail & Related papers (2021-06-05T09:16:28Z)
- Learning with User-Level Privacy [61.62978104304273]
We analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints.
Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution.
We derive an algorithm that privately answers a sequence of $K$ adaptively chosen queries with privacy cost proportional to $\tau$, and apply it to solve the learning tasks we consider.
arXiv Detail & Related papers (2021-02-23T18:25:13Z)
- Large-Scale Methods for Distributionally Robust Optimization [53.98643772533416]
We prove that our algorithms require a number of gradient evaluations independent of the training set size and the number of parameters.
Experiments on MNIST and ImageNet confirm the theoretical scaling of our algorithms, which are 9--36 times more efficient than full-batch methods.
arXiv Detail & Related papers (2020-10-12T17:41:44Z)
- Cost-aware Bayesian Optimization [6.75013674088437]
Cost-aware BO measures convergence with alternative cost metrics such as time, energy, or money.
We introduce Cost Apportioned BO (CArBO), which attempts to minimize an objective function with as little cost as possible (a generic cost-aware acquisition sketch follows this list).
arXiv Detail & Related papers (2020-03-22T14:51:04Z)
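Several of the cost-aware papers above, including CArBO, build on the idea of weighting an acquisition function by its predicted evaluation cost. The snippet below sketches the simplest such heuristic, expected improvement per unit cost; the posterior means, standard deviations, and costs are made-up placeholders standing in for a fitted GP, and CArBO's actual "cost-cooled" EI rule is more refined.

```python
# Illustrative sketch of a common cost-aware heuristic: expected
# improvement (EI) divided by the predicted evaluation cost.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best):
    # Closed-form EI for a Gaussian posterior (maximization convention).
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

def ei_per_cost(mu, sigma, cost, best):
    # Rank candidates by improvement per unit of predicted cost.
    return expected_improvement(mu, sigma, best) / np.maximum(cost, 1e-9)

# Hypothetical GP posterior over 5 candidates and their predicted costs.
mu    = np.array([0.2, 0.5, 0.4, 0.9, 0.7])
sigma = np.array([0.3, 0.2, 0.4, 0.1, 0.3])
cost  = np.array([1.0, 2.0, 1.5, 8.0, 3.0])
best_so_far = 0.6

scores = ei_per_cost(mu, sigma, cost, best_so_far)
print("pick candidate", int(np.argmax(scores)))  # trades value against cost
```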
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.