Bayesian Optimization for Policy Search in High-Dimensional Systems via
Automatic Domain Selection
- URL: http://arxiv.org/abs/2001.07394v1
- Date: Tue, 21 Jan 2020 09:04:15 GMT
- Title: Bayesian Optimization for Policy Search in High-Dimensional Systems via
Automatic Domain Selection
- Authors: Lukas P. Fröhlich, Edgar D. Klenske, Christian G. Daniel, Melanie N. Zeilinger
- Abstract summary: We propose to leverage results from optimal control to scale BO to higher dimensional control tasks.
We show how we can make use of a learned dynamics model in combination with a model-based controller to simplify the BO problem.
We present an experimental evaluation on real hardware, as well as simulated tasks including a 48-dimensional policy for a quadcopter.
- Score: 1.1240669509034296
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian Optimization (BO) is an effective method for optimizing
expensive-to-evaluate black-box functions with a wide range of applications for
example in robotics, system design and parameter optimization. However, scaling
BO to problems with large input dimensions (>10) remains an open challenge. In
this paper, we propose to leverage results from optimal control to scale BO to
higher dimensional control tasks and to reduce the need for manually selecting
the optimization domain. The contributions of this paper are twofold: 1) We
show how we can make use of a learned dynamics model in combination with a
model-based controller to simplify the BO problem by focusing onto the most
relevant regions of the optimization domain. 2) Based on (1) we present a
method to find an embedding in parameter space that reduces the effective
dimensionality of the optimization problem. To evaluate the effectiveness of
the proposed approach, we present an experimental evaluation on real hardware,
as well as simulated tasks including a 48-dimensional policy for a quadcopter.
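The core idea can be reduced to a short sketch: instead of running BO directly over all policy parameters, optimize over a low-dimensional embedding of the parameter space and map candidates back to the full space for evaluation. In the sketch below, the linear embedding matrix A, the toy evaluate_policy function, and the [-1, 1] domain are illustrative placeholders, not the paper's construction (which derives the embedding and domain from a learned dynamics model and a model-based controller).

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Hypothetical setup: a 48-D policy optimized through a 6-D embedding.
D, d = 48, 6
rng = np.random.default_rng(0)
A = rng.standard_normal((D, d))                    # placeholder embedding; the paper derives it from a model-based controller
evaluate_policy = lambda theta: -np.sum(theta**2)  # stand-in for an expensive rollout return

def expected_improvement(gp, Z, y_best):
    mu, sigma = gp.predict(Z, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    gamma = (y_best - mu) / sigma                  # we minimize the negated return
    return sigma * (gamma * norm.cdf(gamma) + norm.pdf(gamma))

# BO loop in the low-dimensional embedded domain [-1, 1]^d.
Z = rng.uniform(-1, 1, size=(5, d))                     # initial design
y = np.array([-evaluate_policy(A @ z) for z in Z])      # negated returns (minimization)
for _ in range(20):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(Z, y)
    cand = rng.uniform(-1, 1, size=(2000, d))           # random candidate search over the reduced domain
    z_next = cand[np.argmax(expected_improvement(gp, cand, y.min()))]
    Z = np.vstack([Z, z_next])
    y = np.append(y, -evaluate_policy(A @ z_next))
print("best return:", -y.min())
```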
Related papers
- Adaptive Bayesian Optimization for High-Precision Motion Systems [2.073673208115137]
We propose a real-time, purely data-driven, model-free approach to adaptive control that tunes low-level controller parameters online.
We base our algorithm on GoOSE, an algorithm for safe and sample-efficient Bayesian optimization.
We evaluate the algorithm's performance on a real precision-motion system utilized in semiconductor industry applications.
arXiv Detail & Related papers (2024-04-22T21:58:23Z)
- Enhanced Bayesian Optimization via Preferential Modeling of Abstract Properties [49.351577714596544]
We propose a human-AI collaborative Bayesian framework to incorporate expert preferences about unmeasured abstract properties into surrogate modeling.
We provide an efficient strategy that can also handle any incorrect/misleading expert bias in preferential judgments.
arXiv Detail & Related papers (2024-02-27T09:23:13Z)
- High-dimensional Bayesian Optimization with Group Testing [7.12295305987761]
We propose a group testing approach to identify active variables to facilitate efficient optimization in high-dimensional domains.
The proposed algorithm, Group Testing Bayesian Optimization (GTBO), first runs a testing phase where groups of variables are systematically selected and tested.
In the second phase, GTBO guides optimization by placing more importance on the active dimensions.
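A minimal sketch of the testing-phase idea summarized above: perturb groups of variables around a default point and flag a group as active when perturbing it changes the objective noticeably. The grouping, threshold, and toy objective below are illustrative assumptions; GTBO's actual test design and statistics differ.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 30
f = lambda x: np.sum(x[:3] ** 2)     # toy objective: only the first 3 of 30 dims matter

def active_groups(f, dim, group_size=5, n_probes=10, threshold=1e-3):
    """Flag groups of variables whose joint perturbation moves the objective."""
    x0 = np.zeros(dim)
    f0 = f(x0)
    groups = [list(range(i, min(i + group_size, dim))) for i in range(0, dim, group_size)]
    active = []
    for g in groups:
        deltas = []
        for _ in range(n_probes):
            x = x0.copy()
            x[g] = rng.uniform(-1, 1, size=len(g))   # perturb only this group
            deltas.append(abs(f(x) - f0))
        if np.mean(deltas) > threshold:
            active.append(g)
    return active

print(active_groups(f, dim))   # expected: only the group containing dims 0-2
```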
arXiv Detail & Related papers (2023-10-05T12:52:27Z)
- Predictive Modeling through Hyper-Bayesian Optimization [60.586813904500595]
We propose a novel way of integrating model selection and BO for the single goal of reaching the function optima faster.
The algorithm alternates between BO in the model space and BO in the function space, where the quality of the recommended model is assessed.
In addition to improved sample efficiency, the framework outputs information about the black-box function.
arXiv Detail & Related papers (2023-08-01T04:46:58Z)
- Learning Regions of Interest for Bayesian Optimization with Adaptive Level-Set Estimation [84.0621253654014]
We propose a framework, called BALLET, which adaptively filters for a high-confidence region of interest.
We show theoretically that BALLET can efficiently shrink the search space, and can exhibit a tighter regret bound than standard BO.
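A rough sketch of the filtering idea, using generic GP confidence bounds: keep only candidates whose upper confidence bound is at least as good as the best lower confidence bound, and run the acquisition inside that set. The region_of_interest helper and the beta parameter are simplifying assumptions, not BALLET's exact criterion.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def region_of_interest(gp, candidates, beta=2.0):
    """Keep candidates plausibly better than the pessimistic estimate of the current best (maximization)."""
    mu, sigma = gp.predict(candidates, return_std=True)
    ucb = mu + beta * sigma
    best_lcb = np.max(mu - beta * sigma)
    return candidates[ucb >= best_lcb]

# Usage sketch on a toy objective to maximize:
rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=(15, 2))
y = -np.sum(X**2, axis=1)                       # optimum at the origin
gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
cand = rng.uniform(-2, 2, size=(5000, 2))
roi = region_of_interest(gp, cand)
print(f"{len(roi)} of {len(cand)} candidates kept in the region of interest")
```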
arXiv Detail & Related papers (2023-07-25T09:45:47Z)
- Scalable Bayesian optimization with high-dimensional outputs using randomized prior networks [3.0468934705223774]
We propose a deep learning framework for BO and sequential decision making based on bootstrapped ensembles of neural architectures with randomized priors.
We show that the proposed framework can approximate functional relationships between design variables and quantities of interest, even in cases where the latter take values in high-dimensional vector spaces or even infinite-dimensional function spaces.
We test the proposed framework against state-of-the-art methods for BO and demonstrate superior performance across several challenging tasks with high-dimensional outputs.
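The randomized-prior construction can be sketched as follows: each ensemble member adds a fixed, untrained "prior" network to a trainable network, members are trained on bootstrapped data, and their disagreement provides the uncertainty estimate used for BO. The RandomizedPriorMember class, layer sizes, and beta scale below are a hedged PyTorch sketch, not the authors' architecture.

```python
import torch
import torch.nn as nn

class RandomizedPriorMember(nn.Module):
    """One ensemble member: trainable network plus a frozen, randomly initialized prior network."""
    def __init__(self, in_dim, out_dim, beta=1.0, hidden=64):
        super().__init__()
        def mlp():
            return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out_dim))
        self.trainable = mlp()
        self.prior = mlp()
        for p in self.prior.parameters():        # the prior is never updated
            p.requires_grad_(False)
        self.beta = beta

    def forward(self, x):
        return self.trainable(x) + self.beta * self.prior(x)

# Ensemble on bootstrapped data; predictive spread across members serves as the BO uncertainty.
ensemble = [RandomizedPriorMember(in_dim=8, out_dim=100) for _ in range(5)]
x = torch.randn(4, 8)
preds = torch.stack([m(x) for m in ensemble])    # shape: (members, batch, output_dim)
mean, std = preds.mean(0), preds.std(0)
```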
arXiv Detail & Related papers (2023-02-14T18:55:21Z)
- Towards Automated Design of Bayesian Optimization via Exploratory Landscape Analysis [11.143778114800272]
We show that dynamically selecting the acquisition function (AF) can benefit the BO design.
We pave a way towards AutoML-assisted, on-the-fly BO designs that adjust their behavior on a run-by-run basis.
arXiv Detail & Related papers (2022-11-17T17:15:04Z)
- Computationally Efficient High-Dimensional Bayesian Optimization via Variable Selection [0.5439020425818999]
We develop a new computationally efficient high-dimensional BO method that exploits variable selection.
Our method automatically learns axis-aligned sub-spaces, i.e., sub-spaces spanned by the selected variables.
We empirically show the efficacy of our method on several synthetic and real problems.
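One common way to realize axis-aligned variable selection, sketched below, is to fit a GP with ARD lengthscales and treat dimensions with short lengthscales (high sensitivity) as the selected variables; the threshold and the toy data are assumptions, and the referenced method's actual selection procedure may differ.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(60, 10))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.01 * rng.standard_normal(60)  # only dims 0 and 1 matter

kernel = RBF(length_scale=np.ones(10), length_scale_bounds=(1e-2, 1e3))  # ARD: one lengthscale per dimension
gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-4, normalize_y=True).fit(X, y)

lengthscales = gp.kernel_.length_scale
selected = np.where(lengthscales < 10.0)[0]      # heuristic threshold: short lengthscale = relevant variable
print("selected variables:", selected)           # expected to include dims 0 and 1
```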
arXiv Detail & Related papers (2021-09-20T01:55:43Z)
- Conservative Objective Models for Effective Offline Model-Based Optimization [78.19085445065845]
Computational design problems arise in a number of settings, from synthetic biology to computer architectures.
We propose a method that learns a model of the objective function that lower bounds the actual value of the ground-truth objective on out-of-distribution inputs.
The resulting conservative objective models (COMs) are simple to implement and outperform a number of existing methods on a wide range of model-based optimization (MBO) problems.
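The lower-bounding idea can be sketched as a regression loss plus a penalty that pushes predictions down on inputs found by gradient ascent on the current model, a proxy for the out-of-distribution points an optimizer would exploit. The conservative_step helper, alpha weight, and ascent schedule below are a hedged PyTorch sketch, not the paper's exact objective.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def conservative_step(x, y, alpha=0.5, ascent_lr=0.1, ascent_steps=5):
    """One update: fit the data, and penalize high predictions at adversarially found OOD inputs."""
    # Find inputs where the current model predicts (too) optimistically via gradient ascent.
    x_ood = x.clone().detach().requires_grad_(True)
    for _ in range(ascent_steps):
        (grad,) = torch.autograd.grad(model(x_ood).sum(), x_ood)
        x_ood = (x_ood + ascent_lr * grad).detach().requires_grad_(True)

    mse = ((model(x).squeeze(-1) - y) ** 2).mean()        # fit the observed designs
    penalty = model(x_ood).mean() - model(x).mean()       # push down OOD predictions relative to the data
    loss = mse + alpha * penalty
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Usage sketch on a toy offline dataset:
x = torch.randn(128, 16)
y = -(x ** 2).sum(dim=1)
for _ in range(100):
    conservative_step(x, y)
```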
arXiv Detail & Related papers (2021-07-14T17:55:28Z)
- Optimizing Large-Scale Hyperparameters via Automated Learning Algorithm [97.66038345864095]
We propose a new hyperparameter optimization method with zeroth-order hyper-gradients (HOZOG).
Specifically, we first formulate hyperparameter optimization as a constrained optimization problem defined through a black-box learning algorithm A.
Then, we use averaged zeroth-order hyper-gradients to update the hyperparameters.
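The zeroth-order hyper-gradient can be illustrated with a standard random-direction finite-difference estimator: perturb the hyperparameters, rerun training and validation as a black box, and average the resulting directional estimates. The validation_loss stand-in, the zeroth_order_grad helper, and the mu and n_dirs settings are illustrative assumptions, not HOZOG's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(4)

def validation_loss(hyperparams):
    """Black-box stand-in: train a model with these hyperparameters and return the validation loss."""
    return np.sum((hyperparams - np.array([0.3, -1.2])) ** 2)

def zeroth_order_grad(f, lam, mu=1e-2, n_dirs=20):
    """Average finite-difference estimates along random Gaussian directions."""
    grads = []
    for _ in range(n_dirs):
        u = rng.standard_normal(lam.shape)
        grads.append((f(lam + mu * u) - f(lam)) / mu * u)
    return np.mean(grads, axis=0)

# Hyperparameter updates using only black-box evaluations of the training pipeline.
lam = np.array([2.0, 2.0])
for _ in range(200):
    lam -= 0.05 * zeroth_order_grad(validation_loss, lam)
print("tuned hyperparameters:", lam)
```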
arXiv Detail & Related papers (2021-02-17T21:03:05Z)
- Bayesian Optimization for Selecting Efficient Machine Learning Models [53.202224677485525]
We present a unified Bayesian Optimization framework for jointly optimizing models for both prediction effectiveness and training efficiency.
Experiments on model selection for recommendation tasks indicate that models selected this way significantly improve model training efficiency.
arXiv Detail & Related papers (2020-08-02T02:56:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.