Rollout Algorithms and Approximate Dynamic Programming for Bayesian
Optimization and Sequential Estimation
- URL: http://arxiv.org/abs/2212.07998v1
- Date: Thu, 15 Dec 2022 17:50:23 GMT
- Title: Rollout Algorithms and Approximate Dynamic Programming for Bayesian
Optimization and Sequential Estimation
- Authors: Dimitri Bertsekas
- Abstract summary: We provide a unifying approximate dynamic programming framework that applies to a broad variety of problems involving sequential estimation.
We consider first the construction of surrogate cost functions for the purposes of optimization, and we focus on the special case of Bayesian optimization.
We then discuss the more general case of sequential estimation of a random vector using optimal measurement selection, and its application to problems of adaptive control.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We provide a unifying approximate dynamic programming framework that applies
to a broad variety of problems involving sequential estimation. We consider
first the construction of surrogate cost functions for the purposes of
optimization, and we focus on the special case of Bayesian optimization, using
the rollout algorithm and some of its variations. We then discuss the more
general case of sequential estimation of a random vector using optimal
measurement selection, and its application to problems of stochastic and
adaptive control. We finally consider related search and sequential decoding
problems, and a rollout algorithm for the approximate solution of the Wordle
and Mastermind puzzles, recently developed in the paper [BBB22].
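As a concrete illustration of the rollout idea in this setting, below is a minimal one-step Monte Carlo rollout sketch for Bayesian optimization. It is a toy under our own assumptions, not the paper's implementation: each candidate point is scored by sampling a hypothetical observation from a Gaussian process posterior and evaluating a simple greedy base policy on the augmented data, approximating the cost-to-go.

```python
import numpy as np

def rbf_kernel(a, b, length=0.3):
    # Squared-exponential kernel between 1-D input arrays a and b.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-6):
    # Standard GP regression posterior mean/variance at the query points.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_query)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - np.sum(v ** 2, axis=0), 1e-12, None)  # k(x,x)=1 for RBF
    return mu, var

def base_policy_value(x_train, y_train, candidates):
    # Greedy base heuristic: best (lowest) posterior mean over the candidates;
    # the rollout algorithm aims to improve on this base policy.
    mu, _ = gp_posterior(x_train, y_train, candidates)
    return mu.min()

def rollout_score(x, x_train, y_train, candidates, n_sims=32, rng=None):
    # One-step Monte Carlo rollout: sample hypothetical observations at x from
    # the posterior, and average the base policy's value on the augmented data.
    rng = rng or np.random.default_rng(0)
    mu, var = gp_posterior(x_train, y_train, np.array([x]))
    values = []
    for _ in range(n_sims):
        y_sim = rng.normal(mu[0], np.sqrt(var[0]))
        values.append(base_policy_value(np.append(x_train, x),
                                        np.append(y_train, y_sim),
                                        candidates))
    return np.mean(values)  # lower is better (minimization)

# Usage: pick the next evaluation point by minimizing the rollout score.
x_train = np.array([0.1, 0.5, 0.9])
y_train = np.array([0.8, -0.2, 0.4])
candidates = np.linspace(0.0, 1.0, 21)
x_next = min(candidates, key=lambda x: rollout_score(x, x_train, y_train, candidates))
```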
Related papers
- On Constructing Algorithm Portfolios in Algorithm Selection for Computationally Expensive Black-box Optimization in the Fixed-budget Setting [0.0]
This paper argues for the importance of considering the number of function evaluations used in the sampling phase when constructing algorithm portfolios.
The results show that algorithm portfolios constructed by our approach perform significantly better than those by the previous approach.
arXiv Detail & Related papers (2024-05-13T03:31:13Z)
- Enhancing Gaussian Process Surrogates for Optimization and Posterior Approximation via Random Exploration [2.984929040246293]
We propose novel noise-free Bayesian optimization strategies that rely on a random exploration step to enhance the accuracy of Gaussian process surrogate models.
The new algorithms retain the ease of implementation of the classical GP-UCB, while the additional exploration step facilitates their convergence.
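As a rough illustration (a generic epsilon-style sketch under our own assumptions, not the paper's exact exploration schedule), a GP-UCB-style acquisition with an added random exploration step might look as follows, for minimization:

```python
import numpy as np

def lcb_with_random_exploration(mu, sigma, candidates, beta=2.0,
                                explore_prob=0.1, rng=None):
    # mu, sigma: GP posterior mean and standard deviation at each candidate.
    # With probability explore_prob, take a uniformly random candidate (the
    # extra exploration step); otherwise follow the classical
    # lower-confidence-bound rule for minimization.
    rng = rng or np.random.default_rng()
    if rng.random() < explore_prob:
        return candidates[rng.integers(len(candidates))]
    return candidates[np.argmin(mu - beta * sigma)]
```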
arXiv Detail & Related papers (2024-01-30T14:16:06Z)
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- Socio-cognitive Optimization of Time-delay Control Problems using Evolutionary Metaheuristics [89.24951036534168]
Metaheuristics are general-purpose optimization algorithms intended for difficult problems that cannot be solved by classical approaches.
In this paper we construct a novel socio-cognitive metaheuristic based on castes, and apply several versions of this algorithm to the optimization of a time-delay system model.
arXiv Detail & Related papers (2022-10-23T22:21:10Z)
- On Adaptivity in Non-stationary Stochastic Optimization With Bandit Feedback [11.208914594208654]
We show that, when the aggregate amount of function change is known a priori, a simple restarting algorithm attains the optimal dynamic regret.
We also establish that any algorithm achieving good regret against stationary benchmarks with high probability can be automatically converted into an algorithm that achieves good regret against dynamic benchmarks.
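A minimal sketch of the restarting idea, assuming a Besbes-Gur-Zeevi-style epoch length computed from the known variation budget (the exact tuning and constants in the paper may differ; the `pull` callback is a hypothetical environment interface):

```python
import math
import numpy as np

def restarting_ucb(T, n_arms, variation_budget, pull, rng=None):
    # Run a stationary UCB learner, but forget all statistics every H rounds;
    # H grows when the environment varies little and shrinks when it varies a
    # lot. The epoch length follows the standard
    # (K log T)^{1/3} (T / V_T)^{2/3} recipe -- treat the constants as an
    # assumption of this sketch.
    H = max(1, int((n_arms * math.log(T)) ** (1 / 3)
                   * (T / max(variation_budget, 1e-9)) ** (2 / 3)))
    counts = np.zeros(n_arms)
    sums = np.zeros(n_arms)
    total = 0.0
    for t in range(T):
        if t % H == 0:                    # restart: discard stale statistics
            counts[:] = 0.0
            sums[:] = 0.0
        if counts.min() == 0:             # play each arm once per epoch
            arm = int(np.argmin(counts))
        else:
            ucb = sums / counts + np.sqrt(2 * math.log(H) / counts)
            arm = int(np.argmax(ucb))
        reward = pull(arm, t)             # environment callback (may drift)
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total
```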
arXiv Detail & Related papers (2022-10-11T16:16:34Z)
- Adaptive First- and Second-Order Algorithms for Large-Scale Machine Learning [3.0204520109309843]
We consider first- and second-order techniques to address continuous optimization problems in machine learning.
In the first-order case, we propose a framework for transitioning from semi-deterministic to quadratic regularization methods.
In the second-order case, we propose a novel algorithm with adaptive sampling and an adaptive step size.
arXiv Detail & Related papers (2021-11-29T18:10:00Z)
- Outlier-Robust Sparse Estimation via Non-Convex Optimization [73.18654719887205]
We explore the connection between outlier-robust high-dimensional statistics and non-convex optimization in the presence of sparsity constraints.
We develop novel and simple optimization formulations for these problems.
As a corollary, we obtain that any first-order method that efficiently converges to a stationary point yields an efficient algorithm for these tasks.
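For a flavor of the kind of non-convex, sparsity-constrained first-order method involved, here is a generic iterative-hard-thresholding sketch for sparse linear regression; it illustrates convergence to a stationary point under a sparsity constraint, and is not the paper's specific estimator:

```python
import numpy as np

def hard_threshold(v, k):
    # Projection onto the non-convex set of k-sparse vectors:
    # keep the k largest-magnitude entries, zero out the rest.
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def iterative_hard_thresholding(A, y, k, steps=200, lr=None):
    # Projected gradient descent on 0.5 * ||Ax - y||^2 subject to ||x||_0 <= k.
    # A stationary point of this non-convex problem serves as the sparse estimate.
    if lr is None:
        lr = 1.0 / np.linalg.norm(A, 2) ** 2   # step size from the Lipschitz bound
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - y)
        x = hard_threshold(x - lr * grad, k)
    return x
```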
arXiv Detail & Related papers (2021-09-23T17:38:24Z)
- Adaptive Sampling of Pareto Frontiers with Binary Constraints Using Regression and Classification [0.0]
We present a novel adaptive optimization algorithm for black-box multi-objective optimization problems with binary constraints.
Our method is based on probabilistic regression and classification models, which act as a surrogate for the optimization goals.
We also present a novel ellipsoid truncation method to speed up the expected hypervolume calculation.
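The paper's expected-hypervolume machinery is involved, but the plain hypervolume of a two-objective Pareto front (minimization) can be computed with a simple sweep; the helper below is an illustration of ours, not the authors' code:

```python
def hypervolume_2d(front, ref):
    # Hypervolume dominated by a 2-D Pareto front (minimization) relative to
    # a reference point that is worse than every front point in both objectives.
    pts = sorted(p for p in front if p[0] <= ref[0] and p[1] <= ref[1])
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:                 # sweep left to right; y shrinks along the front
        if y < prev_y:
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

# Example: front {(1, 3), (2, 1)} with reference point (4, 4) gives 7.0.
assert hypervolume_2d([(2, 1), (1, 3)], (4, 4)) == 7.0
```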
arXiv Detail & Related papers (2020-08-27T09:15:02Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with a mini-batch size of $1$, constant first- and second-order moment parameters, and possibly unbounded optimization domains.
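For context, this stationarity measure is standard for weakly convex problems: for $\lambda > 0$ small enough, the Moreau envelope and its gradient are
$$\varphi_\lambda(x) = \min_y \Big\{ \varphi(y) + \tfrac{1}{2\lambda}\,\|y - x\|^2 \Big\}, \qquad \nabla \varphi_\lambda(x) = \tfrac{1}{\lambda}\big(x - \operatorname{prox}_{\lambda\varphi}(x)\big),$$
so a small gradient norm $\|\nabla \varphi_\lambda(x)\|$ certifies that $x$ is close to a near-stationary point of $\varphi$.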
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
- Stochastic batch size for adaptive regularization in deep network optimization [63.68104397173262]
We propose a first-order optimization algorithm incorporating adaptive regularization, applicable to machine learning problems within the deep learning framework.
We empirically demonstrate the effectiveness of our algorithm using an image classification task based on conventional network models applied to commonly used benchmark datasets.
arXiv Detail & Related papers (2020-04-14T07:54:53Z)
- Extreme Algorithm Selection With Dyadic Feature Representation [78.13985819417974]
We propose the setting of extreme algorithm selection (XAS) where we consider fixed sets of thousands of candidate algorithms.
We assess the applicability of state-of-the-art AS techniques to the XAS setting and propose approaches leveraging a dyadic feature representation.
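Conceptually (the paper's exact feature pipeline is an assumption here), the dyadic representation scores (instance, algorithm) pairs with one shared model, which is what lets a single scorer rank thousands of candidates:

```python
import numpy as np

def dyadic_vector(instance_feats, algo_feats):
    # One feature vector per (instance, algorithm) pair, so a single learned
    # scorer can rank thousands of candidate algorithms, including ones unseen
    # at training time (as long as their features are available).
    return np.concatenate([instance_feats, algo_feats])

def select_algorithm(instance_feats, algo_feature_table, score):
    # score: any regressor/ranker mapping a dyadic vector to predicted quality.
    return max(algo_feature_table,
               key=lambda a: score(dyadic_vector(instance_feats,
                                                 algo_feature_table[a])))
```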
arXiv Detail & Related papers (2020-01-29T09:40:58Z)