Real-Time Optimization Meets Bayesian Optimization and Derivative-Free
Optimization: A Tale of Modifier Adaptation
- URL: http://arxiv.org/abs/2009.08819v2
- Date: Mon, 1 Feb 2021 16:51:56 GMT
- Title: Real-Time Optimization Meets Bayesian Optimization and Derivative-Free
Optimization: A Tale of Modifier Adaptation
- Authors: Ehecatl Antonio del Rio-Chanona and Panagiotis Petsagkourakis and Eric
Bradford and Jose Eduardo Alves Graciano and Benoit Chachuat
- Abstract summary: This paper investigates a new class of modifier-adaptation schemes to overcome plant-model mismatch in real-time optimization of uncertain processes.
The proposed schemes embed a physical model and rely on trust-region ideas to minimize risk during the exploration.
The benefits of using an acquisition function, knowing the process noise level, or specifying a nominal process model are illustrated.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper investigates a new class of modifier-adaptation schemes to
overcome plant-model mismatch in real-time optimization of uncertain processes.
The main contribution lies in the integration of concepts from the areas of
Bayesian optimization and derivative-free optimization. The proposed schemes
embed a physical model and rely on trust-region ideas to minimize risk during
the exploration, while employing Gaussian process regression to capture the
plant-model mismatch in a non-parametric way and drive the exploration by means
of acquisition functions. The benefits of using an acquisition function,
knowing the process noise level, or specifying a nominal process model are
illustrated on numerical case studies, including a semi-batch photobioreactor
optimization problem.
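To make the combination concrete, here is a minimal sketch of one iteration in this spirit: a Gaussian process is fitted to the observed plant-model mismatch, and the next operating point is chosen by optimizing a modified model objective (a lower-confidence-bound acquisition below) inside a trust region. The plant, nominal model, trust radius, and acquisition weight are hypothetical placeholders, not the paper's exact scheme.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def plant(u):
    # Hypothetical plant response (unknown in practice, noisy measurements).
    return (u - 0.7)**2 + 0.1*np.sin(5*u) + 0.01*np.random.randn()

def model(u):
    # Hypothetical nominal model with structural mismatch.
    return (u - 0.5)**2

# Past plant evaluations give samples of the plant-model mismatch.
U = np.array([[0.0], [0.3], [0.9]])
d = np.array([plant(u[0]) - model(u[0]) for u in U])

# Non-parametric mismatch model via GP regression.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-4)
gp.fit(U, d)

u_k, radius = 0.3, 0.2   # current operating point and trust radius

def modified_objective(u):
    # Model + GP mean correction - exploration bonus (LCB acquisition).
    mu, sigma = gp.predict(u.reshape(1, -1), return_std=True)
    return model(u[0]) + mu[0] - 1.0*sigma[0]

# Next input: optimize the modified objective within the trust region.
res = minimize(modified_objective, x0=[u_k],
               bounds=[(u_k - radius, u_k + radius)])
print("next input:", res.x[0])
```

In a full modifier-adaptation scheme the trust radius would also be updated from the agreement between predicted and measured improvement, and the GP refitted after each new plant evaluation.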
Related papers
- End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
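As a quick, hedged illustration of what an OWA objective computes (the weights and outcomes below are made up): the outcome vector is sorted and combined with a fixed weight vector, and the sort is precisely what makes the objective nondifferentiable.

```python
import numpy as np

def owa(x, w):
    # OWA_w(x) = sum_i w_i * x_(i), where x_(i) is the i-th smallest outcome.
    return float(np.dot(w, np.sort(x)))

outcomes = np.array([0.9, 0.2, 0.5])   # hypothetical per-group utilities
weights = np.array([0.6, 0.3, 0.1])    # decreasing weights favor the worst-off
print(owa(outcomes, weights))          # 0.6*0.2 + 0.3*0.5 + 0.1*0.9 = 0.36
```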
arXiv Detail & Related papers (2024-02-12T16:33:35Z)
- Enhancing Gaussian Process Surrogates for Optimization and Posterior Approximation via Random Exploration [2.984929040246293]
This paper proposes novel noise-free Bayesian optimization strategies that rely on a random exploration step to enhance the accuracy of Gaussian process surrogate models.
The new algorithms retain the ease of implementation of classical GP-UCB, while the additional exploration step facilitates their convergence.
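As a rough illustration, the sketch below augments a plain GP-UCB loop with one uniformly random query per round; the paper's precise exploration rule and parameter choices are not reproduced here, so treat every constant as an assumption.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
f = lambda x: -(x - 0.6)**2          # hypothetical noise-free objective
X = rng.uniform(0, 1, size=(3, 1))   # initial design
y = f(X).ravel()
grid = np.linspace(0, 1, 201).reshape(-1, 1)

for t in range(10):
    gp = GaussianProcessRegressor(alpha=1e-8).fit(X, y)   # noise-free fit
    mu, sd = gp.predict(grid, return_std=True)
    x_ucb = grid[np.argmax(mu + 2.0 * sd)]   # classical GP-UCB query
    x_rnd = rng.uniform(0, 1, size=1)        # additional random exploration
    for x_new in (x_ucb, x_rnd):
        X = np.vstack([X, [x_new]])
        y = np.append(y, f(x_new[0]))

print("best observed x:", X[np.argmax(y), 0])
```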
arXiv Detail & Related papers (2024-01-30T14:16:06Z)
- Analyzing and Enhancing the Backward-Pass Convergence of Unrolled Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
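This equivalence is easy to verify on a toy quadratic. The sketch below is a generic illustration of the claim (not the paper's Folded Optimization system): accumulating the chain rule through K unrolled gradient steps on min_x 0.5 x'Ax - b'x yields exactly a truncated Neumann/Richardson iteration for the linear system A J = I that defines the exact solution Jacobian.

```python
import numpy as np

# Unrolled solver for min_x 0.5*x'Ax - b'x: x_{k+1} = x_k - eta*(A x_k - b).
# The exact solution is x* = A^{-1} b, so the true Jacobian dx*/db is A^{-1}.
A = np.array([[2.0, 0.3],
              [0.3, 1.0]])
eta, K = 0.3, 200
I = np.eye(2)

# Backward pass of unrolling: accumulate dx_k/db across K steps (chain rule).
J = np.zeros((2, 2))
for _ in range(K):
    J = (I - eta * A) @ J + eta * I   # one Richardson iteration on A J = I

# The unrolled Jacobian converges to the solution of the linear system A J = I.
print(np.allclose(J, np.linalg.inv(A), atol=1e-8))   # True
```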
arXiv Detail & Related papers (2023-12-28T23:15:18Z)
- Towards Safe Multi-Task Bayesian Optimization [1.3654846342364308]
Reduced physical models of the system can be incorporated into the optimization process to accelerate it.
These models offer an approximation of the actual system and are significantly cheaper to evaluate.
Safety is a crucial criterion for online optimization methods such as Bayesian optimization.
arXiv Detail & Related papers (2023-12-12T13:59:26Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
- Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
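For readers unfamiliar with this construction, the decision-theoretic (DeGroot-style) entropy and the induced acquisition function take roughly the following form, with notation paraphrased rather than copied from the paper: given a loss ell and an action set A, the entropy is the best achievable expected loss, and the acquisition is the expected entropy reduction from a new observation.

```latex
H_{\ell,\mathcal{A}}[p(f)] \;=\; \inf_{a \in \mathcal{A}} \mathbb{E}_{p(f)}\big[\ell(f,a)\big],
\qquad
\alpha(x) \;=\; H_{\ell,\mathcal{A}}\big[p(f \mid \mathcal{D})\big]
\;-\; \mathbb{E}_{y_x}\Big[ H_{\ell,\mathcal{A}}\big[p(f \mid \mathcal{D}\cup\{(x,y_x)\})\big] \Big]
```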
arXiv Detail & Related papers (2022-10-04T04:43:58Z)
- Implicit Rate-Constrained Optimization of Non-decomposable Objectives [37.43791617018009]
We consider a family of constrained optimization problems arising in machine learning.
Our key idea is to formulate a rate-constrained optimization that expresses the threshold parameter as a function of the model parameters.
We show how the resulting optimization problem can be solved using standard gradient-based methods.
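The key mechanism can be summarized with the implicit function theorem; the display below is a generic sketch of the idea rather than the paper's exact formulation. If the threshold lambda is defined implicitly by pinning a rate r (e.g. a false-positive rate) at a target value c, its gradient with respect to the model parameters theta follows by implicit differentiation, which is what lets standard gradient-based methods handle the constraint.

```latex
r\big(\theta, \lambda(\theta)\big) = c
\quad\Longrightarrow\quad
\frac{\mathrm{d}\lambda}{\mathrm{d}\theta}
= -\left(\frac{\partial r}{\partial \lambda}\right)^{-1}\frac{\partial r}{\partial \theta}
```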
arXiv Detail & Related papers (2021-07-23T00:04:39Z)
- Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a model to predict those parameters and then solving the problem with the predicted values.
Recent work has shown that including the optimization problem as a layer in the model-training pipeline yields predictions that lead to better downstream decisions.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
- Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation for the global optimization of Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper subproblems for lower bounding.
In total, the proposed method reduces the time to convergence by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z)
- Adaptation of Engineering Wake Models using Gaussian Process Regression and High-Fidelity Simulation Data [0.0]
This article investigates the optimization of yaw control inputs of a nine-turbine wind farm.
The wind farm is simulated using the high-fidelity simulator SOWFA.
arXiv Detail & Related papers (2020-03-30T10:22:57Z)