Bayesian Optimization for Min Max Optimization
- URL: http://arxiv.org/abs/2107.13772v1
- Date: Thu, 29 Jul 2021 06:49:34 GMT
- Title: Bayesian Optimization for Min Max Optimization
- Authors: Dorina Weichert, Alexander Kister
- Abstract summary: We propose algorithms that perform Min Max Optimization in a setting where the function that should be optimized is not known a priori.
While related work extends the two acquisition functions Expected Improvement and Gaussian Process Upper Confidence Bound, we extend the two acquisition functions Entropy Search and Knowledge Gradient.
We show that these acquisition functions allow for better solutions - converging faster to the optimum than the benchmark settings.
- Score: 77.60508571062958
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A solution that is only reliable under favourable conditions is hardly a safe
solution. Min Max Optimization is an approach that returns optima that are
robust against worst case conditions. We propose algorithms that perform Min
Max Optimization in a setting where the function that should be optimized is
not known a priori and hence has to be learned by experiments. Therefore we
extend the Bayesian Optimization setting, which is tailored to maximization
problems, to Min Max Optimization problems. While related work extends the two
acquisition functions Expected Improvement and Gaussian Process Upper
Confidence Bound, we extend the two acquisition functions Entropy Search and
Knowledge Gradient. These acquisition functions are able to gain knowledge
about the optimum instead of just looking for points that are supposed to be
optimal. In our evaluation we show that these acquisition functions allow for
better solutions - converging faster to the optimum than the benchmark
settings.
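For illustration only, here is a minimal sketch of a GP-UCB-style min-max loop of the kind used as a benchmark in this line of work; the toy function, candidate grids, and hyperparameters are assumptions, and the Entropy Search and Knowledge Gradient acquisitions proposed in the paper are not shown.

```python
# Hypothetical GP-UCB-style min-max Bayesian optimization on a toy problem.
# This mirrors a benchmark-style setting, NOT the Entropy Search /
# Knowledge Gradient acquisition functions proposed in the paper.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(x, w):                           # toy black-box: design x, condition w
    return (x - 0.3) ** 2 + 0.5 * np.sin(3.0 * w) * x

xs = np.linspace(0.0, 1.0, 25)         # candidate designs (minimize over these)
ws = np.linspace(0.0, 1.0, 25)         # candidate conditions (worst case: max over these)
grid = np.array([[x, w] for x in xs for w in ws])

rng = np.random.default_rng(0)
X = grid[rng.choice(len(grid), 5, replace=False)]    # initial experiments
y = np.array([f(x, w) for x, w in X])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-4)
for _ in range(20):
    gp.fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    ucb = (mu + 2.0 * sd).reshape(len(xs), len(ws))  # optimistic surrogate of f
    xi = np.argmin(ucb.max(axis=1))                  # robust (min-max) design
    wi = np.argmax(ucb[xi])                          # adversarial condition for it
    x_new = np.array([xs[xi], ws[wi]])
    X = np.vstack([X, x_new])
    y = np.append(y, f(*x_new))

print("estimated robust design x*:", xs[np.argmin(ucb.max(axis=1))])
```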
Related papers
- Localized Zeroth-Order Prompt Optimization [54.964765668688806]
We propose a novel algorithm, namely localized zeroth-order prompt optimization (ZOPO).
ZOPO incorporates a Gaussian process derived from the Neural Tangent Kernel into standard zeroth-order optimization for an efficient search of well-performing local optima in prompt optimization.
Remarkably, ZOPO outperforms existing baselines in terms of both the optimization performance and the query efficiency.
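As a rough illustration of the standard zeroth-order component that ZOPO builds on, the sketch below runs a two-point gradient-estimate ascent on a toy score function; the objective, dimensionality, and step sizes are assumptions, and the NTK-derived Gaussian process guidance is not shown.

```python
# Hypothetical two-point zeroth-order gradient estimate with gradient ascent.
# Only the generic zeroth-order part is sketched here; ZOPO's NTK-derived
# Gaussian process component is not reproduced.
import numpy as np

def score(x):                      # black-box objective we can only query
    return -np.sum((x - 0.5) ** 2)

rng = np.random.default_rng(0)
x = np.zeros(4)
mu, eta = 0.05, 0.1                # smoothing radius, step size
for _ in range(500):
    u = rng.standard_normal(x.shape)
    g = (score(x + mu * u) - score(x - mu * u)) / (2 * mu) * u  # gradient estimate
    x = x + eta * g                # ascend the estimated gradient
print(x)                           # approaches the maximizer at 0.5
```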
arXiv Detail & Related papers (2024-03-05T14:18:15Z)
- Optimistic Optimization of Gaussian Process Samples [30.226274682578172]
A competing, computationally more efficient, global optimization framework is optimistic optimization, which exploits prior knowledge about the geometry of the search space in the form of a dissimilarity function.
We argue that there is a new research domain between geometric and probabilistic search, i.e. methods that run drastically faster than traditional Bayesian optimization, while retaining some of the crucial functionality of Bayesian optimization.
arXiv Detail & Related papers (2022-09-02T09:06:24Z)
- Accelerated Algorithms for Constrained Nonconvex-Nonconcave Min-Max Optimization and Comonotone Inclusion [9.551565016483833]
We study comonotone min-max optimization, a structured class of nonconvex-nonconcave min-max optimization problems.
In our first contribution, we extend the Extra Anchored Gradient (EAG) algorithm to constrained comonotone min-max optimization.
In our second contribution, we extend the Fast Extra Gradient (FEG) algorithm to constrained comonotone min-max optimization.
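For context, a plain extragradient step for an unconstrained convex-concave min-max problem is sketched below; the toy objective and step size are assumptions, and the anchored (EAG) and accelerated (FEG) variants studied in the paper are not reproduced.

```python
# Hypothetical plain extragradient iteration for min_x max_y f(x, y)
# with f(x, y) = 0.5*x**2 + x*y - 0.5*y**2 (saddle point at the origin).
def grad_x(x, y):
    return x + y                   # d f / d x

def grad_y(x, y):
    return x - y                   # d f / d y

x, y, eta = 1.0, 1.0, 0.1
for _ in range(200):
    # extrapolation ("look-ahead") step from the current iterate
    x_half = x - eta * grad_x(x, y)
    y_half = y + eta * grad_y(x, y)
    # update the current iterate using gradients at the extrapolated point
    x = x - eta * grad_x(x_half, y_half)
    y = y + eta * grad_y(x_half, y_half)

print(x, y)                        # converges toward the saddle point (0, 0)
```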
arXiv Detail & Related papers (2022-06-10T17:44:06Z)
- Non-Convex Optimization with Certificates and Fast Rates Through Kernel Sums of Squares [68.8204255655161]
We consider potentially non-convex optimization problems.
In this paper, we propose an algorithm that achieves close-to-optimal a priori computational guarantees.
arXiv Detail & Related papers (2022-04-11T09:37:04Z)
- Optimizing Optimizers: Regret-optimal gradient descent algorithms [9.89901717499058]
We study the existence, uniqueness and consistency of regret-optimal algorithms.
By providing first-order optimality conditions for the control problem, we show that regret-optimal algorithms must satisfy a specific structure in their dynamics.
We present fast numerical methods for approximating them, generating optimization algorithms which directly optimize their long-term regret.
arXiv Detail & Related papers (2020-12-31T19:13:53Z)
- Are we Forgetting about Compositional Optimisers in Bayesian Optimisation? [66.39551991177542]
Bayesian optimisation presents a sample-efficient methodology for global optimisation.
Within this framework, a crucial performance-determining subroutine is the maximisation of the acquisition function.
We highlight the empirical advantages of the compositional approach to acquisition function maximisation across 3958 individual experiments.
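To make that subroutine concrete, here is a hedged sketch of the common multi-start approach to maximising an Expected Improvement acquisition function; the toy data and the plain (non-compositional) optimiser are assumptions and do not reflect the compositional optimisers examined in the paper.

```python
# Hypothetical acquisition maximisation: random multi-start L-BFGS-B over an
# Expected Improvement surface defined by a GP fitted to toy observations.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (8, 1))
y = np.sin(6.0 * X[:, 0])                        # toy observations
gp = GaussianProcessRegressor(alpha=1e-6).fit(X, y)
best = y.max()

def neg_ei(x):                                   # negative Expected Improvement
    mu, sd = gp.predict(x.reshape(1, -1), return_std=True)
    z = (mu - best) / (sd + 1e-9)
    ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)
    return -float(ei[0])

starts = rng.uniform(0.0, 1.0, (10, 1))          # multi-start to escape local optima
results = [minimize(neg_ei, s, bounds=[(0.0, 1.0)], method="L-BFGS-B") for s in starts]
x_next = min(results, key=lambda r: r.fun).x
print("next query point:", x_next)
```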
arXiv Detail & Related papers (2020-12-15T12:18:38Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with a mini-batch size of $1$, constant first and second order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
- Incorporating Expert Prior in Bayesian Optimisation via Space Warping [54.412024556499254]
In big search spaces, the algorithm goes through several low function value regions before reaching the optimum of the function.
One approach to shorten this cold start phase is to use prior knowledge that can accelerate the optimisation.
In this paper, we represent the prior knowledge about the function optimum through a prior distribution.
The prior distribution is then used to warp the search space so that it expands around the high-probability region of the function optimum and shrinks around the low-probability regions; a minimal warping sketch is shown after this entry.
arXiv Detail & Related papers (2020-03-27T06:18:49Z)
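For intuition, the sketch below warps a 1-D search space with a truncated-normal prior over the optimum's location; the prior's location and scale and the 1-D setting are illustrative assumptions rather than the paper's exact construction.

```python
# Hypothetical 1-D space warping: uniform points in the warped coordinate u
# map to points x that concentrate where the prior puts high probability,
# so more of the search effort lands near the believed optimum.
import numpy as np
from scipy.stats import truncnorm

loc, scale = 0.7, 0.1                        # expert belief: optimum near x = 0.7
a, b = (0.0 - loc) / scale, (1.0 - loc) / scale
prior = truncnorm(a, b, loc=loc, scale=scale)

def warp(u):        # warped coordinate u in [0, 1] -> original space x in [0, 1]
    return prior.ppf(u)

def unwarp(x):      # original space -> warped coordinate
    return prior.cdf(x)

u_grid = np.linspace(0.01, 0.99, 9)          # uniform in the warped space
print(np.round(warp(u_grid), 3))             # dense near 0.7, sparse elsewhere
```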