Gaussian Max-Value Entropy Search for Multi-Agent Bayesian Optimization
- URL: http://arxiv.org/abs/2303.05694v1
- Date: Fri, 10 Mar 2023 04:14:30 GMT
- Title: Gaussian Max-Value Entropy Search for Multi-Agent Bayesian Optimization
- Authors: Haitong Ma, Tianpeng Zhang, Yixuan Wu, Flavio P. Calmon, Na Li
- Abstract summary: We study the multi-agent Bayesian optimization (BO) problem, where multiple agents maximize a black-box function via iterative queries.
One of the main challenges of Entropy Search (ES) is that calculating the mutual information requires computationally costly approximation techniques.
We propose the Gaussian Max-value Entropy Search, a multi-agent BO algorithm with favorable sample and computational efficiency.
- Score: 4.8244546750150965
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the multi-agent Bayesian optimization (BO) problem, where multiple
agents maximize a black-box function via iterative queries. We focus on Entropy
Search (ES), a sample-efficient BO algorithm that selects queries to maximize
the mutual information about the maximum of the black-box function. One of the
main challenges of ES is that calculating the mutual information requires
computationally costly approximation techniques. For multi-agent BO problems,
the computational cost of ES is exponential in the number of agents. To address
this challenge, we propose the Gaussian Max-value Entropy Search, a multi-agent
BO algorithm with favorable sample and computational efficiency. The key to our
idea is to use a normal distribution to approximate the function maximum and
calculate its mutual information accordingly. The resulting approximation
allows queries to be cast as the solution of a closed-form optimization problem
which, in turn, can be solved via a modified gradient ascent algorithm and
scaled to a large number of agents. We demonstrate the effectiveness of
Gaussian Max-value Entropy Search through numerical experiments on standard
test functions and real-robot experiments on the source-seeking problem.
Results show that the proposed algorithm outperforms the multi-agent BO
baselines in the numerical experiments and can stably seek the source with a
limited number of noisy observations on real robots.
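The closed-form idea in the abstract can be illustrated with a minimal single-agent sketch (my own illustration under simplifying assumptions, not the authors' implementation): fit a toy 1-D GP posterior, approximate the unknown maximum as a Gaussian whose moments are borrowed from the posterior at the incumbent best point, and score each candidate query by the closed-form mutual information between two jointly Gaussian variables, I = -0.5 ln(1 - rho^2).

```python
import numpy as np

def gaussian_mi(rho):
    """Mutual information of two jointly Gaussian variables with
    correlation rho: I = -0.5 * ln(1 - rho^2) (closed form)."""
    return -0.5 * np.log(1.0 - rho**2)

def rbf(a, b, ell=0.3):
    """Squared-exponential kernel with unit prior variance."""
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

# Toy 1-D GP posterior from three noisy observations.
rng = np.random.default_rng(0)
X_obs = np.array([0.1, 0.4, 0.9])
y_obs = np.sin(6 * X_obs) + 0.05 * rng.standard_normal(3)
noise = 1e-4

X = np.linspace(0.0, 1.0, 200)
Kinv = np.linalg.inv(rbf(X_obs, X_obs) + noise * np.eye(3))
Ks = rbf(X, X_obs)
mu = Ks @ Kinv @ y_obs                                   # posterior mean
var = np.maximum(1.0 - np.einsum('ij,jk,ik->i', Ks, Kinv, Ks), 1e-10)

# Gaussian approximation of the maximum: moments borrowed from the
# posterior at the incumbent best point (a simplifying assumption).
i_star = int(np.argmax(mu))
var_star = var[i_star]

# Posterior covariance between f(x) and f(x*), hence the correlation
# between a noisy query y(x) and the Gaussian-approximated maximum.
cov = rbf(X, X[[i_star]]).ravel() - Ks @ Kinv @ rbf(X_obs, X[[i_star]]).ravel()
rho = np.clip(cov / np.sqrt((var + noise) * var_star), -0.999, 0.999)

acq = gaussian_mi(rho)          # closed-form acquisition, no sampling
x_next = X[np.argmax(acq)]      # the paper uses gradient ascent for this step
```

Because the acquisition is an explicit formula rather than a sampling-based estimate, it can be differentiated and maximized jointly over many agents' query locations, which is where the claimed scalability comes from.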
Related papers
- Adaptive Sampled Softmax with Inverted Multi-Index: Methods, Theory and Applications [79.53938312089308]
The MIDX-Sampler is a novel adaptive sampling strategy based on an inverted multi-index approach.
Our method is backed by rigorous theoretical analysis, addressing key concerns such as sampling bias, gradient bias, convergence rates, and generalization error bounds.
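For context, the generic sampled-softmax loss that such samplers plug into looks as follows (a hedged sketch: the MIDX contribution is the inverted multi-index used to build a better proposal q, which is assumed given here):

```python
import numpy as np

def sampled_softmax_loss(logits, pos, negs, log_q):
    """Generic sampled-softmax loss with proposal correction. Negative
    classes' logits are shifted by -log q to reduce the bias of
    approximating the full softmax with a few sampled classes."""
    z_pos = logits[pos]
    z_neg = logits[negs] - log_q[negs]     # importance correction
    z = np.concatenate(([z_pos], z_neg))
    m = z.max()
    return -z_pos + np.log(np.sum(np.exp(z - m))) + m   # -log softmax(pos)

# Demo with a uniform proposal over 1000 classes (hypothetical sizes).
rng = np.random.default_rng(1)
logits = rng.normal(size=1000)
log_q = np.log(np.full(1000, 1.0 / 1000))
negs = rng.choice(1000, size=20, replace=False)
loss = sampled_softmax_loss(logits, pos=3, negs=negs, log_q=log_q)
```

A sharper proposal q (what the inverted multi-index provides) concentrates samples on hard negatives, which is what drives the bias and convergence improvements the summary mentions.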
arXiv Detail & Related papers (2025-01-15T04:09:21Z)
- An Analysis of Quantum Annealing Algorithms for Solving the Maximum Clique Problem [49.1574468325115]
We analyse the ability of quantum D-Wave annealers to find the maximum clique on a graph, expressed as a QUBO problem.
We propose a decomposition algorithm for the complementary maximum independent set problem, and a graph generation algorithm to control the number of nodes, the number of cliques, the density, the connectivity indices and the ratio of the solution size to the number of other nodes.
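The standard QUBO encoding of maximum clique that such annealers consume can be sketched as follows (an illustrative baseline with brute-force solving, not the paper's decomposition algorithm): reward each selected vertex and penalize every selected pair that is not an edge.

```python
import itertools
import numpy as np

def max_clique_qubo(n, edges, penalty=2.0):
    """Build an upper-triangular QUBO matrix Q so that minimizing
    x^T Q x over binary x selects a maximum clique."""
    Q = np.zeros((n, n))
    for i in range(n):
        Q[i, i] = -1.0                      # reward selecting vertex i
    edge_set = {frozenset(e) for e in edges}
    for i, j in itertools.combinations(range(n), 2):
        if frozenset((i, j)) not in edge_set:
            Q[i, j] = penalty               # penalize non-adjacent pairs
    return Q

def brute_force_min(Q):
    """Exhaustively minimize x^T Q x over binary x (tiny n only)."""
    n = Q.shape[0]
    best_x, best_v = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        v = x @ Q @ x                       # upper-triangular Q counts each pair once
        if v < best_v:
            best_x, best_v = x, v
    return best_x

# Triangle 0-1-2 plus a pendant vertex 3 attached to 2.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
x = brute_force_min(max_clique_qubo(4, edges))
clique = [i for i in range(4) if x[i]]      # -> [0, 1, 2]
```

On real hardware the brute-force step is replaced by the annealer; the penalty must exceed the per-vertex reward so that no non-clique selection is ever optimal.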
arXiv Detail & Related papers (2024-06-11T04:40:05Z)
- Runtime Analysis of a Multi-Valued Compact Genetic Algorithm on Generalized OneMax [2.07180164747172]
We provide a first runtime analysis of a generalized OneMax function.
We show that the r-cGA solves this r-valued OneMax problem efficiently.
At the end of experiments, we state one conjecture related to the expected runtime of another variant of multi-valued OneMax function.
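One common way to generalize OneMax to an r-valued alphabet (my reading of the construction; the paper's exact variant may differ) is to score a string by n minus its total per-position distance to a hidden optimum:

```python
def r_onemax(x, target, r):
    """Generalized OneMax over alphabet {0,...,r-1}: n minus the sum of
    per-position distances to a hidden target, maximized at the target."""
    assert all(0 <= v < r for v in x)
    return len(x) - sum(abs(xi - ti) for xi, ti in zip(x, target))

target = [2, 0, 3, 1]
print(r_onemax(target, target, r=4))        # optimum: n = 4
print(r_onemax([0, 0, 0, 0], target, r=4))  # 4 - (2+0+3+1) = -2
```

With r = 2 and an all-ones target this reduces to classical OneMax, which is why runtime bounds for the binary case are the natural comparison point.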
arXiv Detail & Related papers (2024-04-17T10:40:12Z)
- Bivariate Estimation-of-Distribution Algorithms Can Find an Exponential Number of Optima [12.009357100208353]
We propose the test function EqualBlocksOneMax (EBOM) to support the study of how optimization algorithms handle large sets of optima.
We show that EBOM behaves very similarly to a theoretically ideal model for EBOM, which samples each of the exponentially many optima with the same maximal probability.
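Based on the summary, the EBOM construction can be sketched as follows (my reading; details may differ from the paper): split the bit string into equal-size blocks and count the blocks whose bits all agree, so every all-0/all-1 choice per block is optimal, giving 2^(#blocks) optima.

```python
def ebom(x, block_size=2):
    """EqualBlocksOneMax sketch: count blocks whose bits are all equal.
    Each block may independently be all-0 or all-1 at the optimum, so
    there are 2**(len(x)//block_size) distinct global optima."""
    assert len(x) % block_size == 0
    blocks = [x[i:i + block_size] for i in range(0, len(x), block_size)]
    return sum(len(set(b)) == 1 for b in blocks)

# 8 bits in 4 blocks of 2 -> 2**4 = 16 distinct optima of value 4.
print(ebom([0, 0, 1, 1, 0, 0, 1, 1]))   # 4 (an optimum)
print(ebom([0, 1, 1, 1, 0, 0, 1, 0]))   # 2
```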
arXiv Detail & Related papers (2023-10-06T06:32:07Z)
- Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z)
- Beyond EM Algorithm on Over-specified Two-Component Location-Scale Gaussian Mixtures [29.26015093627193]
We develop the Exponential Location Update (ELU) algorithm to efficiently explore the curvature of the negative log-likelihood functions.
We demonstrate that the ELU algorithm converges to the final statistical radius of the models after a logarithmic number of iterations.
arXiv Detail & Related papers (2022-05-23T06:49:55Z)
- An Application of a Multivariate Estimation of Distribution Algorithm to Cancer Chemotherapy [59.40521061783166]
Chemotherapy treatment for cancer is a complex optimisation problem with a large number of interacting variables and constraints.
One might expect the more sophisticated algorithm to yield better performance on a complex problem like this.
We hypothesise that it is instead impeded by the large number of interactions in the problem.
arXiv Detail & Related papers (2022-05-17T15:28:46Z)
- Trusted-Maximizers Entropy Search for Efficient Bayesian Optimization [39.824086260578646]
This paper presents a novel trusted-maximizers entropy search (TES) acquisition function.
It measures how much an input contributes to the information gain on a query over a finite set of trusted maximizers.
arXiv Detail & Related papers (2021-07-30T07:25:07Z)
- Bayesian Algorithm Execution: Estimating Computable Properties of Black-box Functions Using Mutual Information [78.78486761923855]
In many real world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations.
We present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output.
On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm.
arXiv Detail & Related papers (2021-04-19T17:22:11Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- Differentiable Top-k Operator with Optimal Transport [135.36099648554054]
The SOFT top-k operator approximates the output of the top-k operation as the solution of an Entropic Optimal Transport (EOT) problem.
We apply the proposed operator to the k-nearest neighbors and beam search algorithms, and demonstrate improved performance.
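The entropic-OT formulation of soft top-k can be sketched as follows (a hedged reconstruction of the SOFT idea, not the paper's code): transport uniform mass from the n scores to two anchors ("selected" / "not selected") via Sinkhorn iterations, then read off the rescaled column sent to the "selected" anchor as a differentiable 0/1 membership vector.

```python
import numpy as np

def soft_topk(scores, k, eps=0.1, iters=200):
    """Differentiable top-k indicator via entropic optimal transport:
    Sinkhorn between the n scores (uniform mass) and two anchors with
    masses [(n-k)/n, k/n]; the 'selected' column, rescaled by n, is a
    soft membership vector."""
    s = np.asarray(scores, dtype=float)
    n = len(s)
    t = np.array([s.min(), s.max()])        # anchors: low / high
    C = (s[:, None] - t[None, :])**2        # squared-distance cost
    K = np.exp(-C / eps)                    # Gibbs kernel
    a = np.full(n, 1.0 / n)                 # uniform mass on scores
    b = np.array([(n - k) / n, k / n])      # mass split between anchors
    u = np.ones(n)
    for _ in range(iters):                  # Sinkhorn iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]         # transport plan
    return n * P[:, 1]                      # soft top-k indicator

ind = soft_topk([0.1, 2.0, -1.0, 1.5], k=2)
# the two largest scores (indices 1 and 3) receive nearly all the mass
```

As eps shrinks the indicator approaches the hard top-k selection; keeping eps positive is what makes the operator smooth enough to backpropagate through, e.g. in k-NN or beam-search layers.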
arXiv Detail & Related papers (2020-02-16T04:57:52Z)
- Best Principal Submatrix Selection for the Maximum Entropy Sampling Problem: Scalable Algorithms and Performance Guarantees [1.7640556247739623]
MESP has been widely applied to many areas, including healthcare, power system, manufacturing and data science.
We derive a novel convex integer program for MESP and show that its continuous relaxation yields a near-optimal solution.
Our numerical experiments demonstrate that these approximation algorithms can efficiently solve medium-sized and large-scale instances to near-optimality.
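MESP asks for a size-s principal submatrix of a covariance matrix maximizing the log-determinant (the entropy of the corresponding Gaussian). A common baseline against which such methods are compared is the greedy heuristic (sketched here; this is not the paper's convex-relaxation algorithm):

```python
import numpy as np

def greedy_mesp(Sigma, s):
    """Greedy heuristic for Maximum-Entropy Sampling: repeatedly add
    the index that most increases the log-determinant of the selected
    principal submatrix of the covariance Sigma."""
    n = Sigma.shape[0]
    chosen = []
    for _ in range(s):
        best_i, best_val = None, -np.inf
        for i in range(n):
            if i in chosen:
                continue
            idx = chosen + [i]
            sign, logdet = np.linalg.slogdet(Sigma[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_val:
                best_i, best_val = i, logdet
        chosen.append(best_i)
    return sorted(chosen)

# Toy covariance: variable 2 has the largest variance and is only weakly
# correlated with 0, so {0, 2} beats the strongly correlated pair {0, 1}.
Sigma = np.array([[1.0, 0.9, 0.1],
                  [0.9, 1.0, 0.3],
                  [0.1, 0.3, 2.0]])
sel = greedy_mesp(Sigma, 2)                 # -> [0, 2]
```

Greedy scales as O(s * n) determinant evaluations but carries no optimality certificate, which is the gap the paper's convex integer program and its near-optimal continuous relaxation aim to close.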
arXiv Detail & Related papers (2020-01-23T14:14:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.