Multi-level Optimal Control with Neural Surrogate Models
- URL: http://arxiv.org/abs/2402.07763v1
- Date: Mon, 12 Feb 2024 16:28:57 GMT
- Title: Multi-level Optimal Control with Neural Surrogate Models
- Authors: Dante Kalise, Estefanía Loayza-Romero, Kirsten A. Morris, Zhengang Zhong
- Abstract summary: The evaluation of the optimal closed loop for a given actuator realisation is a computationally demanding task.
The use of neural network surrogates to replace the lower level of the optimisation hierarchy is proposed.
The effectiveness of the proposed surrogate models and optimisation methods is assessed in a test related to optimal actuator location for heat control.
- Score: 0.7646713951724013
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Optimal actuator and control design is studied as a multi-level optimisation
problem, where the actuator design is evaluated based on the performance of the
associated optimal closed loop. The evaluation of the optimal closed loop for a
given actuator realisation is a computationally demanding task, for which the
use of a neural network surrogate is proposed. The use of neural network
surrogates to replace the lower level of the optimisation hierarchy enables the
use of fast gradient-based and gradient-free consensus-based optimisation
methods to determine the optimal actuator design. The effectiveness of the
proposed surrogate models and optimisation methods is assessed in a test
related to optimal actuator location for heat control.
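To make the surrogate-based hierarchy concrete, the sketch below is a minimal, hypothetical illustration rather than the paper's code: the 1-D heat equation, the Gaussian actuator shape function, the trace-of-the-Riccati-solution cost, and all numerical values are assumptions chosen so the example is self-contained. It trains a small network to approximate the lower-level map from actuator location to optimal closed-loop (LQR) cost, then runs gradient descent on the surrogate to place the actuator; a gradient-free consensus-based optimiser could act on the same surrogate in place of the final loop.

```python
# Hypothetical sketch (not from the paper): a neural surrogate replaces the
# expensive lower-level optimal-control solve in a bilevel actuator-design loop.
import numpy as np
import torch
import torch.nn as nn
from scipy.linalg import solve_continuous_are

# Lower level: 1-D heat equation on n grid points, actuator modelled as a
# Gaussian shape function centred at location c; the optimal closed-loop cost
# is taken as trace(P) of the LQR Riccati solution (all choices illustrative).
n = 50
xs = np.linspace(0.0, 1.0, n)
h = xs[1] - xs[0]
A = (0.01 / h**2) * (np.diag(-2.0 * np.ones(n))
                     + np.diag(np.ones(n - 1), 1)
                     + np.diag(np.ones(n - 1), -1))
Q, R = np.eye(n), np.array([[0.1]])

def lower_level_cost(c):
    b = np.exp(-(xs - c) ** 2 / (2 * 0.05 ** 2)).reshape(-1, 1)
    P = solve_continuous_are(A, b, Q, R)      # expensive step the surrogate avoids
    return np.trace(P)

# Offline phase: sample actuator locations, evaluate the lower level, fit surrogate.
c_train = np.random.uniform(0.1, 0.9, 200)
j_train = np.array([lower_level_cost(c) for c in c_train])
j_mean, j_std = j_train.mean(), j_train.std()

surrogate = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                          nn.Linear(64, 64), nn.Tanh(),
                          nn.Linear(64, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
C = torch.tensor(c_train, dtype=torch.float32).unsqueeze(1)
J = torch.tensor((j_train - j_mean) / j_std, dtype=torch.float32).unsqueeze(1)
for _ in range(2000):
    opt.zero_grad()
    nn.functional.mse_loss(surrogate(C), J).backward()
    opt.step()

# Upper level: gradient descent on the surrogate with respect to the location.
c = torch.tensor([[0.25]], requires_grad=True)
upper = torch.optim.Adam([c], lr=1e-2)
for _ in range(500):
    upper.zero_grad()
    surrogate(c).sum().backward()             # cheap gradient through the surrogate
    upper.step()
    with torch.no_grad():
        c.clamp_(0.1, 0.9)                    # keep the actuator inside the domain
print("surrogate-optimal actuator location:", float(c))
print("lower-level cost at that location:", lower_level_cost(float(c)))
```

An ISMO-style active-learning variant (one of the related papers below) would alternate this offline sampling with re-training, adding new lower-level evaluations near the current surrogate optimum.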
Related papers
- Adaptive Bayesian Optimization for High-Precision Motion Systems [2.073673208115137]
We propose a real-time, purely data-driven, model-free approach for adaptive control that tunes low-level controller parameters online.
We base our algorithm on GoOSE, an algorithm for safe and sample-efficient Bayesian optimization.
We evaluate the algorithm's performance on a real precision-motion system utilized in semiconductor industry applications.
arXiv Detail & Related papers (2024-04-22T21:58:23Z)
- Enhanced Bayesian Optimization via Preferential Modeling of Abstract Properties [49.351577714596544]
We propose a human-AI collaborative Bayesian framework to incorporate expert preferences about unmeasured abstract properties into surrogate modeling.
We provide an efficient strategy that can also handle any incorrect/misleading expert bias in preferential judgments.
arXiv Detail & Related papers (2024-02-27T09:23:13Z)
- Analyzing and Enhancing the Backward-Pass Convergence of Unrolled Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
arXiv Detail & Related papers (2023-12-28T23:15:18Z)
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
- Optimal Energy Shaping via Neural Approximators [16.879710744315233]
A systematic approach to adjusting performance within a passivity-based control framework has yet to be developed.
We introduce optimal energy shaping as an enhancement of classical passivity-based control methods.
arXiv Detail & Related papers (2021-01-14T10:25:58Z)
- Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is a tool for many machine learning problems.
We propose a novel algorithm, stocBiO, featuring a sample-efficient hypergradient estimator.
arXiv Detail & Related papers (2020-10-15T18:09:48Z)
- Iterative Surrogate Model Optimization (ISMO): An active learning algorithm for PDE constrained optimization with deep neural networks [14.380314061763508]
We present a novel active learning algorithm, termed as iterative surrogate model optimization (ISMO)
This algorithm is based on deep neural networks and its key feature is the iterative selection of training data through a feedback loop between deep neural networks and any underlying standard optimization algorithm.
arXiv Detail & Related papers (2020-08-13T07:31:07Z)
- Bayesian Optimization for Selecting Efficient Machine Learning Models [53.202224677485525]
We present a unified Bayesian Optimization framework for jointly optimizing models for both prediction effectiveness and training efficiency.
Experiments on model selection for recommendation tasks indicate that models selected this way significantly improve training efficiency.
arXiv Detail & Related papers (2020-08-02T02:56:30Z)
- NOVAS: Non-convex Optimization via Adaptive Stochastic Search for End-to-End Learning and Control [22.120942106939122]
We propose the use of adaptive stochastic search as a building block for general, non-convex optimization operations within neural network architectures.
We benchmark it against two existing alternatives on a synthetic energy-based structured prediction task, and showcase its use in optimal control applications.
arXiv Detail & Related papers (2020-06-22T03:40:36Z)
- A Primer on Zeroth-Order Optimization in Signal Processing and Machine Learning [95.85269649177336]
ZO optimization iteratively performs three major steps: gradient estimation, descent direction computation, and solution update; a minimal sketch of this loop appears after this list.
We demonstrate promising applications of ZO optimization, such as evaluating and generating explanations from black-box deep learning models, and efficient online sensor management.
arXiv Detail & Related papers (2020-06-11T06:50:35Z)
- Adaptive Stochastic Optimization [1.7945141391585486]
Adaptive optimization methods have the potential to offer significant computational savings when training large-scale systems.
Modern approaches based on the gradient method are non-adaptive in the sense that their implementation employs prescribed parameter values that need to be tuned for each application.
arXiv Detail & Related papers (2020-01-18T16:30:19Z)
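Since two of the entries above (the ZO molecule-optimization study and the ZO primer) rest on the same zeroth-order recipe of gradient estimation from function queries, a descent direction, and a solution update, here is a minimal, hypothetical sketch of that loop; the two-point estimator, the toy objective, and all parameter values are illustrative assumptions, not taken from the listed papers.

```python
# Hypothetical sketch of zeroth-order (ZO) optimisation: the gradient is
# estimated from function evaluations only, then used in a (sign-based) update.
import numpy as np

def f(x):                                      # toy black-box objective (illustrative)
    return np.sum((x - 1.0) ** 2) + 0.1 * np.sum(np.abs(x))

def zo_gradient(f, x, mu=1e-3, n_dirs=20, rng=None):
    """Two-point random-direction estimate of grad f(x), using 2*n_dirs queries."""
    if rng is None:
        rng = np.random.default_rng(0)
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return g / n_dirs

x, rng = np.zeros(10), np.random.default_rng(0)
for k in range(200):
    g = zo_gradient(f, x, rng=rng)
    x -= 0.1 / np.sqrt(k + 1) * np.sign(g)     # ZO-signGD step; drop np.sign for plain ZO-GD
print("final objective value:", f(x))
```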
This list is automatically generated from the titles and abstracts of the papers on this site.