Optimizing Closed-Loop Performance with Data from Similar Systems: A
Bayesian Meta-Learning Approach
- URL: http://arxiv.org/abs/2211.00077v1
- Date: Mon, 31 Oct 2022 18:25:47 GMT
- Authors: Ankush Chakrabarty
- Abstract summary: We propose the use of meta-learning to generate an initial surrogate model based on data collected from performance optimization tasks.
The effectiveness of our proposed DKN-BO approach for speeding up control system performance optimization is demonstrated.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian optimization (BO) has demonstrated potential for optimizing control
performance in data-limited settings, especially for systems with unknown
dynamics or unmodeled performance objectives. The BO algorithm efficiently
trades off exploration against exploitation by leveraging uncertainty estimates
from surrogate models. These surrogates are usually learned using data
collected from the target dynamical system to be optimized. Intuitively, the
convergence rate of BO is better for surrogate models that can accurately
predict the target system performance. In classical BO, initial surrogate
models are constructed using very limited data points, and therefore rarely
yield accurate predictions of system performance. In this paper, we propose the
use of meta-learning to generate an initial surrogate model based on data
collected from performance optimization tasks performed on a variety of systems
that are different from the target system. To this end, we employ deep kernel
networks (DKNs) which are simple to train and which comprise encoded Gaussian
process models that integrate seamlessly with classical BO. The effectiveness
of our proposed DKN-BO approach for speeding up control system performance
optimization is demonstrated using a well-studied nonlinear system with unknown
dynamics and an unmodeled performance function.
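The BO loop sketched in the abstract can be illustrated in a few lines. The sketch below is a minimal, hypothetical rendition: the fixed feature map `phi` stands in for the meta-trained deep kernel network (which in DKN-BO would be pre-trained on data from similar systems), and the toy objective `f` and UCB acquisition are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def phi(x):
    # Stand-in for the meta-learned deep kernel feature map (hypothetical);
    # in DKN-BO this network is pre-trained on related-system data.
    return np.stack([np.sin(3 * x), np.cos(3 * x), x], axis=-1)

def gp_posterior(X, y, Xq, noise=1e-4):
    # GP with a linear kernel in feature space: k(x, x') = phi(x) . phi(x').
    F, Fq = phi(X), phi(Xq)
    K = F @ F.T + noise * np.eye(len(X))          # kernel matrix + jitter
    Kq = Fq @ F.T
    Kinv = np.linalg.inv(K)
    mu = Kq @ Kinv @ y                            # posterior mean at queries
    var = np.einsum('ij,ij->i', Fq, Fq) - np.einsum('ij,jk,ik->i', Kq, Kinv, Kq)
    return mu, np.maximum(var, 0.0)               # clamp tiny negative variances

def bo_step(X, y, grid, beta=2.0):
    # Upper-confidence-bound acquisition: trade off mean vs. uncertainty.
    mu, var = gp_posterior(np.array(X), np.array(y), grid)
    return grid[np.argmax(mu + beta * np.sqrt(var))]

# Toy closed-loop performance objective (unknown to the optimizer).
f = lambda x: -(x - 0.6) ** 2 + 0.1 * np.sin(8 * x)
grid = np.linspace(0.0, 1.0, 200)
X, y = [0.1, 0.9], [f(0.1), f(0.9)]               # small initial dataset
for _ in range(10):
    xn = bo_step(X, y, grid)                      # pick next evaluation point
    X.append(xn)
    y.append(f(xn))
best = X[int(np.argmax(y))]
```

The point of the meta-learning step in the paper is precisely that `phi` is not hand-picked as here but learned from optimization tasks on similar systems, so that the very first surrogate already predicts target-system performance well.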
Related papers
- Self-Augmented Preference Optimization: Off-Policy Paradigms for Language Model Alignment [104.18002641195442]
We introduce Self-Augmented Preference Optimization (SAPO), an effective and scalable training paradigm that does not require existing paired data.
Building on the self-play concept, which autonomously generates negative responses, we further incorporate an off-policy learning pipeline to enhance data exploration and exploitation.
arXiv Detail & Related papers (2024-05-31T14:21:04Z) - Enhanced Bayesian Optimization via Preferential Modeling of Abstract
Properties [49.351577714596544]
We propose a human-AI collaborative Bayesian framework to incorporate expert preferences about unmeasured abstract properties into surrogate modeling.
We provide an efficient strategy that can also handle any incorrect/misleading expert bias in preferential judgments.
arXiv Detail & Related papers (2024-02-27T09:23:13Z) - From Function to Distribution Modeling: A PAC-Generative Approach to
Offline Optimization [30.689032197123755]
This paper considers the problem of offline optimization, where the objective function is unknown except for a collection of "offline" data examples.
Instead of learning and then optimizing the unknown objective function, we take on a less intuitive but more direct view that optimization can be thought of as a process of sampling from a generative model.
arXiv Detail & Related papers (2024-01-04T01:32:50Z) - Deep Negative Correlation Classification [82.45045814842595]
Existing deep ensemble methods naively train many different models and then aggregate their predictions.
We propose deep negative correlation classification (DNCC)
DNCC yields a deep classification ensemble where the individual estimator is both accurate and negatively correlated.
arXiv Detail & Related papers (2022-12-14T07:35:20Z) - Model-based Causal Bayesian Optimization [78.120734120667]
We propose model-based causal Bayesian optimization (MCBO)
MCBO learns a full system model instead of only modeling intervention-reward pairs.
Unlike in standard Bayesian optimization, our acquisition function cannot be evaluated in closed form.
arXiv Detail & Related papers (2022-11-18T14:28:21Z) - Towards Automated Design of Bayesian Optimization via Exploratory
Landscape Analysis [11.143778114800272]
We show that a dynamic selection of the acquisition function (AF) can benefit the BO design.
We pave a way towards AutoML-assisted, on-the-fly BO designs that adjust their behavior on a run-by-run basis.
arXiv Detail & Related papers (2022-11-17T17:15:04Z) - Sparse Bayesian Optimization [16.867375370457438]
We present several regularization-based approaches that allow us to discover sparse and more interpretable configurations.
We propose a novel differentiable relaxation based on homotopy continuation that makes it possible to target sparsity.
We show that we are able to efficiently optimize for sparsity.
arXiv Detail & Related papers (2022-03-03T18:25:33Z) - Approximate Bayesian Optimisation for Neural Networks [6.921210544516486]
A body of work has been done to automate machine learning algorithms, highlighting the importance of model choice.
Addressing analytical tractability and computational feasibility in an idealized fashion helps ensure both efficiency and applicability.
arXiv Detail & Related papers (2021-08-27T19:03:32Z) - Conservative Objective Models for Effective Offline Model-Based
Optimization [78.19085445065845]
Computational design problems arise in a number of settings, from synthetic biology to computer architectures.
We propose a method that learns a model of the objective function that lower bounds the actual value of the ground-truth objective on out-of-distribution inputs.
COMs are simple to implement and outperform a number of existing methods on a wide range of MBO problems.
arXiv Detail & Related papers (2021-07-14T17:55:28Z) - Bayesian Optimization for Selecting Efficient Machine Learning Models [53.202224677485525]
We present a unified Bayesian Optimization framework for jointly optimizing models for both prediction effectiveness and training efficiency.
Experiments on model selection for recommendation tasks indicate that models selected this way significantly improve model training efficiency.
arXiv Detail & Related papers (2020-08-02T02:56:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.