Multi-Agent Bayesian Optimization with Coupled Black-Box and Affine
Constraints
- URL: http://arxiv.org/abs/2310.00962v1
- Date: Mon, 2 Oct 2023 08:07:36 GMT
- Title: Multi-Agent Bayesian Optimization with Coupled Black-Box and Affine
Constraints
- Authors: Wenjie Xu, Yuning Jiang, Bratislav Svetozarevic, Colin N. Jones
- Abstract summary: We study the problem of distributed multi-agent Bayesian optimization with both coupled black-box constraints and known affine constraints.
A primal-dual distributed algorithm is proposed that achieves regret/violation bounds similar to those in the single-agent case.
- Score: 21.38692458445459
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper studies the problem of distributed multi-agent Bayesian
optimization with both coupled black-box constraints and known affine
constraints. A primal-dual distributed algorithm is proposed that achieves
regret/violation bounds similar to those in the single-agent case for the
black-box objective and constraint functions. Additionally, the algorithm
guarantees an $\mathcal{O}(N\sqrt{T})$ bound on the cumulative violation for
the known affine constraints, where $N$ is the number of agents. Hence, it is
ensured that the average of the samples satisfies the affine constraints up to
the error $\mathcal{O}(N/\sqrt{T})$. Furthermore, we characterize certain
conditions under which our algorithm can bound a stronger metric of cumulative
violation and provide best-iterate convergence without affine constraints. The
method is then applied to both sampled instances from Gaussian processes and a
real-world optimal power allocation problem for wireless communication; the
results show that our method simultaneously provides close-to-optimal
performance and maintains minor violations on average, corroborating our
theoretical analysis.
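The distributed algorithm itself is not reproduced in this summary, but the primal-dual mechanism it builds on can be illustrated with a minimal single-agent sketch: model the black-box objective and constraint with Gaussian-process surrogates, pick each sample point by minimizing an optimistic (lower-confidence) Lagrangian acquisition, and update the dual variable with the observed constraint violation. Everything below (the test functions f and g, the RBF kernel and its length-scale, the candidate grid, and the step parameters eta and beta) is an illustrative assumption, not the paper's method; the multi-agent structure and the affine constraints are omitted.
```python
# Minimal single-machine sketch of primal-dual constrained Bayesian
# optimization (illustrative only; not the paper's distributed algorithm).
# Objective f and constraint g (g(x) <= 0 feasible) are black boxes modeled
# by independent GP surrogates on a discrete candidate grid.
import numpy as np

def rbf(A, B, ls=0.2):
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    """GP posterior mean/std at candidates Xs given data (X, y)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - np.sum(v * v, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

# Hypothetical test problem: minimize f subject to g(x) <= 0 on [0, 1].
f = lambda x: np.sin(3 * x) + x          # black-box objective
g = lambda x: np.cos(3 * x) - 0.2        # black-box constraint
cands = np.linspace(0, 1, 200)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 3)                 # initial design
yf, yg = f(X), g(X)
lam, eta, beta = 0.0, 0.5, 2.0           # dual var, dual step, exploration

for t in range(30):
    mf, sf = gp_posterior(X, yf, cands)
    mg, sg = gp_posterior(X, yg, cands)
    # Primal step: optimistic (lower-confidence) Lagrangian acquisition.
    acq = (mf - beta * sf) + lam * (mg - beta * sg)
    x_next = cands[np.argmin(acq)]
    X = np.append(X, x_next)
    yf, yg = np.append(yf, f(x_next)), np.append(yg, g(x_next))
    # Dual step: increase the multiplier when the constraint is violated.
    lam = max(0.0, lam + eta * g(x_next))

feasible = yg <= 0
print("best feasible value:", yf[feasible].min() if feasible.any() else None)
```
The dual step makes the multiplier grow while the sampled points violate the constraint, which is the mechanism that drives the average violation toward zero at $\mathcal{O}(1/\sqrt{T})$-type rates like the one stated above.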
Related papers
- Low-Rank Extragradient Methods for Scalable Semidefinite Optimization [0.0]
We focus on high-dimensional and plausible settings in which the problem admits a low-rank solution.
We provide several theoretical results proving that, under these circumstances, the well-known Extragradient method converges to a solution of the constrained optimization problem.
arXiv Detail & Related papers (2024-02-14T10:48:00Z)
- No-Regret Constrained Bayesian Optimization of Noisy and Expensive Hybrid Models using Differentiable Quantile Function Approximations [0.0]
Constrained Upper Quantile Bound (CUQB) is a conceptually simple, deterministic approach that avoids constraint approximations.
We show that CUQB significantly outperforms traditional Bayesian optimization in both constrained and unconstrained cases.
arXiv Detail & Related papers (2023-05-05T19:57:36Z)
- Symmetric Tensor Networks for Generative Modeling and Constrained Combinatorial Optimization [72.41480594026815]
Constrained optimization problems abound in industry, from portfolio optimization to logistics.
One of the major roadblocks in solving these problems is the presence of non-trivial hard constraints which limit the valid search space.
In this work, we encode arbitrary integer-valued equality constraints of the form Ax=b directly into U(1) symmetric tensor networks (TNs) and leverage their applicability as quantum-inspired generative models.
arXiv Detail & Related papers (2022-11-16T18:59:54Z)
- Faster One-Sample Stochastic Conditional Gradient Method for Composite Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration. Nevertheless, it guarantees fast convergence rates on par with more sophisticated variance reduction techniques.
arXiv Detail & Related papers (2022-02-26T19:10:48Z)
- Fast Global Convergence of Policy Optimization for Constrained MDPs [17.825031573375725]
We show that gradient-based methods can achieve an $\mathcal{O}(\log(T)/T)$ global convergence rate both for the optimality gap and the constraint violation.
When Slater's condition is satisfied and known a priori, zero constraint violation can be further guaranteed for a sufficiently large $T$.
arXiv Detail & Related papers (2021-10-31T17:46:26Z)
- Faster Algorithm and Sharper Analysis for Constrained Markov Decision Process [56.55075925645864]
The problem of the constrained Markov decision process (CMDP) is investigated, where an agent aims to maximize the expected accumulated discounted reward subject to multiple constraints.
A new utilities-dual convex approach is proposed, with a novel integration of three ingredients: a regularized policy, a dual regularizer, and Nesterov's gradient descent for the dual.
This is the first demonstration that nonconcave CMDP problems can attain the complexity lower bound of $\mathcal{O}(1/\epsilon)$ for convex optimization subject to convex constraints.
arXiv Detail & Related papers (2021-10-20T02:57:21Z)
- A stochastic linearized proximal method of multipliers for convex stochastic optimization with expectation constraints [8.133190610747974]
We present a computable approximation-type algorithm, namely the stochastic linearized proximal method of multipliers.
Some preliminary numerical results demonstrate the performance of the proposed algorithm.
arXiv Detail & Related papers (2021-06-22T07:24:17Z)
- High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide a small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with dependence on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping (a generic clipped-subgradient step is sketched after this list).
arXiv Detail & Related papers (2021-06-10T17:54:21Z)
- Conditional gradient methods for stochastically constrained convex minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
arXiv Detail & Related papers (2020-07-07T21:26:35Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with a mini-batch size of $1$, constant first- and second-order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
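As a companion to the gradient-clipping entry above, here is a minimal sketch of a clipped-subgradient step under heavy-tailed noise. The test problem, clipping level, and decaying stepsize are illustrative assumptions; the paper's specific stepsize rules and high-probability guarantees are not reproduced here.
```python
# Minimal sketch: clipped subgradient descent on a non-smooth convex
# problem with heavy-tailed gradient noise (illustrative only).
import numpy as np

rng = np.random.default_rng(1)

def noisy_subgradient(x):
    # Subgradient of f(x) = |x - 3| plus heavy-tailed Student-t noise
    # (df=1.5: finite mean, infinite variance).
    return np.sign(x - 3.0) + rng.standard_t(df=1.5)

x = 10.0          # starting point (assumption)
lam = 5.0         # clipping level (assumption)
for t in range(1, 2001):
    gt = noisy_subgradient(x)
    if abs(gt) > lam:               # clip: keep |g_t| <= lam so a single
        gt = lam * np.sign(gt)      # heavy-tailed sample cannot derail x
    x -= gt / np.sqrt(t)            # simple decaying stepsize (assumption)

print(f"final iterate: {x:.3f} (true minimizer: 3.0)")
```
Clipping truncates rare, huge noise samples so that every step has bounded magnitude, which is what makes high-probability complexity bounds attainable even when the gradient noise has no finite variance.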
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.