CONFIG: Constrained Efficient Global Optimization for Closed-Loop
Control System Optimization with Unmodeled Constraints
- URL: http://arxiv.org/abs/2211.11822v1
- Date: Mon, 21 Nov 2022 19:44:00 GMT
- Title: CONFIG: Constrained Efficient Global Optimization for Closed-Loop
Control System Optimization with Unmodeled Constraints
- Authors: Wenjie Xu, Yuning Jiang, Bratislav Svetozarevic, Colin N. Jones
- Abstract summary: The CONFIG algorithm is applied to optimize the closed-loop control performance of an unknown system with unmodeled constraints.
Results show that our algorithm can achieve performance competitive with the popular CEI (Constrained Expected Improvement) algorithm, which has no known optimality guarantee.
- Score: 11.523746174066702
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, the CONFIG algorithm, a simple and provably efficient
constrained global optimization algorithm, is applied to optimize the
closed-loop control performance of an unknown system with unmodeled
constraints. Existing Gaussian process based closed-loop optimization methods,
either can only guarantee local convergence (e.g., SafeOPT), or have no known
optimality guarantee (e.g., constrained expected improvement) at all, whereas
the recently introduced CONFIG algorithm has been proven to enjoy a theoretical
global optimality guarantee. In this study, we demonstrate the effectiveness of
the CONFIG algorithm in applications. The algorithm is first applied to an
artificial numerical benchmark problem to corroborate its effectiveness. It is
then applied to a classical constrained steady-state optimization problem of a
continuous stirred-tank reactor. Simulation results show that our CONFIG
algorithm can achieve performance competitive with the popular CEI (Constrained
Expected Improvement) algorithm, which has no known optimality guarantee. As
such, the CONFIG algorithm offers a new tool, with both a provable global
optimality guarantee and competitive empirical performance, to optimize the
closed-loop control performance for a system with soft unmodeled constraints.
Last, but not least, the open-source code is available as a python package to
facilitate future applications.
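The core idea behind CONFIG can be sketched in a few lines: at each iteration it solves an optimistic auxiliary problem, minimizing the objective's lower confidence bound (LCB) over points whose constraint LCB is non-positive. The sketch below is illustrative only (it is not the released package's API), and the GP posteriors `mu_f`, `sigma_f`, `mu_g`, `sigma_g` are hypothetical toy stand-ins for surrogates fitted to closed-loop experiments.

```python
import math

# Hypothetical GP posteriors for the unknown objective f and constraint g.
# In a real run these would come from GP regression on evaluated designs;
# the constraint is treated as satisfied when g(x) <= 0.
def mu_f(x):      # posterior mean of the objective
    return (x - 0.3) ** 2

def sigma_f(x):   # posterior std of the objective
    return 0.05 + 0.1 * abs(math.sin(3.0 * x))

def mu_g(x):      # posterior mean of the constraint
    return 0.5 - x

def sigma_g(x):   # posterior std of the constraint
    return 0.05 + 0.1 * abs(math.cos(3.0 * x))

def config_acquisition(beta=2.0, grid=None):
    """Optimistic auxiliary problem: minimize the objective LCB over the
    set of candidates whose constraint LCB is <= 0 (optimistic feasibility)."""
    if grid is None:
        grid = [i / 200.0 for i in range(201)]  # candidate inputs in [0, 1]
    feasible = [x for x in grid if mu_g(x) - beta * sigma_g(x) <= 0.0]
    return min(feasible, key=lambda x: mu_f(x) - beta * sigma_f(x))

x_next = config_acquisition()  # next closed-loop experiment to run
```

The optimism in both bounds is what distinguishes this from CEI-style acquisitions and is what the theoretical global optimality guarantee rests on.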
Related papers
- Adaptive Bayesian Optimization for High-Precision Motion Systems [2.073673208115137]
We propose a real-time, purely data-driven, model-free approach to adaptive control that tunes low-level controller parameters online.
We base our algorithm on GoOSE, an algorithm for safe and sample-efficient Bayesian optimization.
We evaluate the algorithm's performance on a real precision-motion system utilized in semiconductor industry applications.
arXiv Detail & Related papers (2024-04-22T21:58:23Z)
- Principled Preferential Bayesian Optimization [22.269732173306192]
We study the problem of preferential Bayesian optimization (BO)
We aim to optimize a black-box function with only preference feedback over a pair of candidate solutions.
An optimistic algorithm with an efficient computational method is then developed to solve the problem.
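The preference-only feedback model can be made concrete with a small sketch. This is illustrative code, not the paper's principled algorithm: the duel oracle and the naive incumbent loop are assumptions introduced here to show what "only preference feedback over a pair of candidates" means.

```python
def preference(x1, x2, f):
    """Pairwise preference oracle: reports whether x1 is preferred to x2.
    Only the comparison is observed; f's values are never revealed."""
    return f(x1) < f(x2)

def incumbent_search(candidates, f):
    """Naive duel loop: keep the winner of each pairwise comparison.
    A principled preferential BO method would instead fit a preference
    model and choose duels via an optimistic acquisition."""
    best = candidates[0]
    for x in candidates[1:]:
        if preference(x, best, f):
            best = x
    return best
```

Even this trivial loop shows the key constraint: any acquisition strategy must be built from binary comparisons rather than function values.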
arXiv Detail & Related papers (2024-02-08T02:57:47Z)
- Analyzing and Enhancing the Backward-Pass Convergence of Unrolled Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
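The "backward pass as an iterative linear-system solve" observation can be seen in a minimal worked example. For the toy inner problem min_z (z - w)^2, whose solution is z*(w) = w with sensitivity dz*/dw = 1, unrolling gradient descent makes the sensitivity obey a linear recursion alongside the forward iterates (the problem and step size here are assumptions chosen for illustration, not an instance from the paper):

```python
def unrolled_solution_and_grad(w, lr=0.1, steps=50):
    """Unroll gradient descent on the inner problem min_z (z - w)^2 and
    propagate dz/dw through the iterates.  The forward step is
    z <- z - lr * 2 * (z - w); differentiating it in w gives the linear
    recursion dz/dw <- (1 - 2*lr) * dz/dw + 2*lr, i.e. the backward pass
    is an iterative solve converging to the true sensitivity 1."""
    z, dz_dw = 0.0, 0.0
    for _ in range(steps):
        z = z - lr * 2.0 * (z - w)                    # forward unrolled step
        dz_dw = (1.0 - 2.0 * lr) * dz_dw + 2.0 * lr   # linear sensitivity recursion
    return z, dz_dw
```

With lr = 0.1 the recursion contracts by a factor 0.8 per step, so both the iterate and its sensitivity converge geometrically to z* = w and dz*/dw = 1.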
arXiv Detail & Related papers (2023-12-28T23:15:18Z)
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
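For readers unfamiliar with the cutting-plane idea being accelerated, a bare-bones Kelley-style iteration on a 1-D convex function looks as follows. This is a generic textbook sketch, not the paper's RL-surrogate method: the grid search stands in for the LP master problem, and the test function is an assumption.

```python
def kelley_cutting_plane(f, df, x0=4.0, lo=-5.0, hi=5.0, iters=30):
    """Kelley's cutting-plane method on a 1-D convex f: each iteration adds
    the linearization (cut) f(x) + f'(x)*(y - x) at the current point, then
    minimizes the piecewise-linear under-approximation max of all cuts
    (a grid search stands in for the LP master problem)."""
    grid = [lo + i * (hi - lo) / 1000.0 for i in range(1001)]
    cuts, x = [], x0
    for _ in range(iters):
        fx, gx = f(x), df(x)
        # Capture fx, gx, x by value so each cut is frozen at this iterate.
        cuts.append(lambda y, fx=fx, gx=gx, x=x: fx + gx * (y - x))
        x = min(grid, key=lambda y: max(c(y) for c in cuts))
    return x
```

The cost of generating and managing cuts at scale is exactly the bottleneck the RL surrogate targets.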
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- Quantum approximate optimization via learning-based adaptive optimization [5.399532145408153]
Quantum approximate optimization algorithm (QAOA) is designed to solve objective optimization problems.
Our results demonstrate that the algorithm greatly outperforms conventional approximations in terms of speed, accuracy, efficiency and stability.
This work helps to unlock the full power of QAOA and paves the way toward achieving quantum advantage in practical classical tasks.
arXiv Detail & Related papers (2023-03-27T02:14:56Z)
- Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit connections between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z)
- An Efficient Batch Constrained Bayesian Optimization Approach for Analog Circuit Synthesis via Multi-objective Acquisition Ensemble [11.64233949999656]
We propose an efficient parallelizable Bayesian optimization algorithm via Multi-objective ACquisition function Ensemble (MACE)
Our proposed algorithm can reduce the overall simulation time by up to 74 times compared to differential evolution (DE) for the unconstrained optimization problem when the batch size is 15.
For the constrained optimization problem, our proposed algorithm can speed up the optimization process by up to 15 times compared to the weighted expected improvement based Bayesian optimization (WEIBO) approach, when the batch size is 15.
arXiv Detail & Related papers (2021-06-28T13:21:28Z)
- Optimizing Optimizers: Regret-optimal gradient descent algorithms [9.89901717499058]
We study the existence, uniqueness and consistency of regret-optimal algorithms.
By providing first-order optimality conditions for the control problem, we show that regret-optimal algorithms must satisfy a specific structure in their dynamics.
We present fast numerical methods for approximating them, generating optimization algorithms which directly optimize their long-term regret.
arXiv Detail & Related papers (2020-12-31T19:13:53Z)
- Recent Theoretical Advances in Non-Convex Optimization [56.88981258425256]
Motivated by recent increased interest in the analysis of optimization algorithms for non-convex optimization in deep networks and other problems in data science, we give an overview of recent theoretical results on optimization algorithms for non-convex optimization.
arXiv Detail & Related papers (2020-12-11T08:28:51Z)
- Combining Deep Learning and Optimization for Security-Constrained Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of APR within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z)
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best-available convergence guarantee for non-PL objectives while simultaneously outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.