Adjustment formulas for learning causal steady-state models from
closed-loop operational data
- URL: http://arxiv.org/abs/2211.05613v1
- Date: Thu, 10 Nov 2022 14:39:46 GMT
- Title: Adjustment formulas for learning causal steady-state models from
closed-loop operational data
- Authors: Kristian Løvland, Bjarne Grimstad, Lars Struen Imsland
- Abstract summary: We derive a formula for adjusting for control confounding, enabling the estimation of a causal steady-state model from steady-state data.
It works by estimating and taking into account the disturbance which the controller is trying to counteract, and enables learning from data gathered under both feedforward and feedback control.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Steady-state models which have been learned from historical operational data
may be unfit for model-based optimization unless correlations in the training
data which are introduced by control are accounted for. Using recent results
from work on structural dynamical causal models, we derive a formula for
adjusting for this control confounding, enabling the estimation of a causal
steady-state model from closed-loop steady-state data. The formula assumes that
the available data have been gathered under some fixed control law. It works by
estimating and taking into account the disturbance which the controller is
trying to counteract, and enables learning from data gathered under both
feedforward and feedback control.
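As a toy illustration of the control confounding the abstract describes (not the paper's actual adjustment formula), the sketch below simulates a linear steady-state plant under a fixed feedforward control law. All gains are hypothetical, and `d_hat` is a stand-in for the disturbance estimate the paper constructs: a naive regression of output on input is badly biased, while adjusting for the estimated disturbance recovers the causal gain.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
a_true, b_true, k = 2.0, 1.5, 0.8        # hypothetical plant gain, disturbance gain, control gain

d = rng.normal(size=n)                   # unobserved disturbance the controller counteracts
w = 0.3 * rng.normal(size=n)             # independent controller excitation (dither)
u = -k * d + w                           # fixed feedforward control law
y = a_true * u + b_true * d + 0.1 * rng.normal(size=n)

# Naive steady-state model y ~ u: confounded, because the control law
# makes u strongly correlated with the unobserved disturbance d.
a_naive = np.polyfit(u, y, 1)[0]

# Adjustment: include an estimate of the disturbance as a regressor.
# (Here a noisy stand-in for d; the paper derives how to estimate it
# from the control law and the closed-loop data.)
d_hat = d + 0.05 * rng.normal(size=n)
X = np.column_stack([u, d_hat, np.ones(n)])
a_adj = np.linalg.lstsq(X, y, rcond=None)[0][0]
```

With these illustrative numbers the naive slope lands far from the true gain of 2, while the adjusted estimate recovers it closely.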
Related papers
- Semi-supervised Regression Analysis with Model Misspecification and High-dimensional Data [8.619243141968886]
We present an inference framework for estimating regression coefficients in conditional mean models.
We develop an augmented inverse probability weighted (AIPW) method, employing regularized estimators for both propensity score (PS) and outcome regression (OR) models.
Our theoretical findings are verified through extensive simulation studies and a real-world data application.
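The doubly robust AIPW idea mentioned above can be sketched for the simplest case, a population mean with outcomes missing at random (illustrative only; the paper's setting with regularized PS and OR models for regression coefficients is more general). In this toy example the outcome model is fit correctly while the propensity model is deliberately misspecified, and the AIPW estimate remains consistent:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)   # outcome; true mean E[y] = 1.0
pi = 1.0 / (1.0 + np.exp(-0.5 * x))      # true propensity of observing y
r = rng.random(n) < pi                   # True = observed, False = missing

mu_naive = y[r].mean()                   # complete-case mean: biased upward

# Outcome-regression (OR) model fit on complete cases (correctly specified)
m_hat = np.polyval(np.polyfit(x[r], y[r], 1), x)
# Propensity-score (PS) model: deliberately misspecified (constant)
pi_hat = np.full(n, r.mean())

# AIPW estimator: consistent if either the OR or the PS model is correct
mu_aipw = np.mean(m_hat + r * (y - m_hat) / pi_hat)
```

Even with the wrong propensity model, the AIPW estimate stays near the true mean of 1.0, while the complete-case mean does not.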
arXiv Detail & Related papers (2024-06-20T00:34:54Z)
- Data-driven Nonlinear Model Reduction using Koopman Theory: Integrated Control Form and NMPC Case Study [56.283944756315066]
We propose generic model structures combining delay-coordinate encoding of measurements and full-state decoding to integrate reduced Koopman modeling and state estimation.
A case study demonstrates that our approach provides accurate control models and enables real-time capable nonlinear model predictive control of a high-purity cryogenic distillation column.
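The delay-coordinate encoding used in such Koopman-style models can be sketched as a simple Hankel-type embedding (a generic illustration, not the paper's specific model structure): each row stacks a window of consecutive measurements, lifting a scalar signal into a higher-dimensional state.

```python
import numpy as np

def delay_embed(y, n_delays):
    """Stack windows of n_delays consecutive samples (delay-coordinate encoding)."""
    cols = [y[i : len(y) - n_delays + i + 1] for i in range(n_delays)]
    return np.column_stack(cols)

y = np.arange(6.0)        # toy scalar measurement sequence 0..5
Z = delay_embed(y, 3)     # rows are sliding windows [y_t, y_{t+1}, y_{t+2}]
```

A linear (Koopman-style) model would then be fit in the lifted coordinates `Z` rather than on the raw scalar signal.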
arXiv Detail & Related papers (2024-01-09T11:54:54Z)
- LMI-based Data-Driven Robust Model Predictive Control [0.1473281171535445]
We propose a data-driven robust linear matrix inequality-based model predictive control scheme that considers input and state constraints.
The controller stabilizes the closed-loop system and guarantees constraint satisfaction.
arXiv Detail & Related papers (2023-03-08T18:20:06Z)
- Evaluating model-based planning and planner amortization for continuous control [79.49319308600228]
We take a hybrid approach, combining model predictive control (MPC) with a learned model and model-free policy learning.
We find that well-tuned model-free agents are strong baselines even for high DoF control problems.
We show that it is possible to distil a model-based planner into a policy that amortizes the planning without any loss of performance.
arXiv Detail & Related papers (2021-10-07T12:00:40Z)
- The Impact of Data on the Stability of Learning-Based Control - Extended Version [63.97366815968177]
We propose a Lyapunov-based measure for quantifying the impact of data on the certifiable control performance.
By modeling unknown system dynamics through Gaussian processes, we can determine the interrelation between model uncertainty and satisfaction of stability conditions.
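The link between data and model uncertainty can be illustrated with a plain GP regression posterior (a generic sketch with an RBF kernel and made-up training points, not the paper's stability analysis): the predictive variance grows with distance from the training data, which is exactly the quantity a data-dependent stability certificate would bound.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel matrix between 1-D input arrays a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

X = np.array([0.0, 1.0, 2.0])          # toy training inputs
y = np.sin(X)                          # toy training targets
Xs = np.array([0.5, 3.0])              # test points: near the data, far from it

K = rbf(X, X) + 1e-6 * np.eye(len(X))  # kernel matrix with jitter
Ks = rbf(Xs, X)
mean = Ks @ np.linalg.solve(K, y)      # GP posterior mean at the test points
var = rbf(Xs, Xs).diagonal() - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
```

The posterior variance at the far test point exceeds the variance near the data, so regions poorly covered by data carry weaker certifiable guarantees.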
arXiv Detail & Related papers (2020-11-20T19:10:01Z)
- Gaussian Process-based Min-norm Stabilizing Controller for Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that the resulting optimization problem is convex, and we call it the Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP).
arXiv Detail & Related papers (2020-11-14T01:27:32Z)
- Control as Hybrid Inference [62.997667081978825]
We present an implementation of CHI which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z)
- How Training Data Impacts Performance in Learning-based Control [67.7875109298865]
This paper derives an analytical relationship between the density of the training data and the control performance.
We formulate a quality measure for the data set, which we refer to as the $\rho$-gap.
We show how the $\rho$-gap can be applied to a feedback linearizing control law.
arXiv Detail & Related papers (2020-05-25T12:13:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.