Online Constraint Tightening in Stochastic Model Predictive Control: A
Regression Approach
- URL: http://arxiv.org/abs/2310.02942v1
- Date: Wed, 4 Oct 2023 16:22:02 GMT
- Title: Online Constraint Tightening in Stochastic Model Predictive Control: A
Regression Approach
- Authors: Alexandre Capone, Tim Brüdigam, Sandra Hirche
- Abstract summary: Analytical solutions to chance-constrained optimal control problems exist only for a handful of special cases.
We propose a data-driven approach for learning the constraint-tightening parameters online during control.
Our approach yields constraint-tightening parameters that tightly satisfy the chance constraints.
- Score: 49.056933332667114
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Solving chance-constrained stochastic optimal control problems is a
significant challenge in control, because analytical solutions exist only for a
handful of special cases. A common and computationally efficient
approach for tackling chance-constrained stochastic optimal control problems
consists of reformulating the chance constraints as hard constraints with a
constraint-tightening parameter. However, in such approaches, the choice of
constraint-tightening parameter remains challenging, and guarantees can mostly
be obtained assuming that the process noise distribution is known a priori.
Moreover, the chance constraints are often not tightly satisfied, leading to
unnecessarily high costs. This work proposes a data-driven approach for
learning the constraint-tightening parameters online during control. To this
end, we reformulate the choice of constraint-tightening parameter for the
closed-loop as a binary regression problem. We then leverage a highly
expressive Gaussian process (GP) model for binary regression to approximate the smallest
constraint-tightening parameters that satisfy the chance constraints. By tuning
the algorithm parameters appropriately, we show that the resulting
constraint-tightening parameters satisfy the chance constraints up to an
arbitrarily small margin with high probability. Our approach yields
constraint-tightening parameters that tightly satisfy the chance constraints in
numerical experiments, resulting in a lower average cost than three other
state-of-the-art approaches.
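
As a rough illustration of the paper's high-level idea (not the authors' exact algorithm), the sketch below treats each tried tightening parameter and its observed constraint-satisfaction outcome as a binary data point, fits a Gaussian process classifier to estimate the satisfaction probability, and picks the smallest parameter predicted to meet the chance-constraint level. The episode simulator run_episode, the toy violation model, and all numerical values are hypothetical placeholders.

```python
# Minimal sketch (not the paper's exact algorithm): learn the smallest
# constraint-tightening parameter gamma such that the closed-loop chance
# constraint holds with probability at least 1 - delta, by treating observed
# satisfaction outcomes as a binary regression problem.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
delta = 0.1                       # allowed violation probability
gammas_tried, outcomes = [], []   # dataset of (gamma, satisfied?) pairs

def run_episode(gamma):
    """Hypothetical closed-loop episode: returns 1 if the state constraint
    held under tightening gamma, else 0 (toy ground-truth violation model)."""
    violation_prob = max(0.0, 0.3 - 0.5 * gamma)
    return int(rng.random() > violation_prob)

candidates = np.linspace(0.0, 1.0, 101).reshape(-1, 1)
gamma = 1.0                       # start with a conservative tightening
for t in range(200):
    gammas_tried.append([gamma])
    outcomes.append(run_episode(gamma))
    if len(set(outcomes)) > 1:    # the classifier needs both classes
        gpc = GaussianProcessClassifier(kernel=RBF(0.2)).fit(
            np.array(gammas_tried), np.array(outcomes))
        p_satisfy = gpc.predict_proba(candidates)[:, 1]
        feasible = candidates[p_satisfy >= 1.0 - delta]
        gamma = float(feasible.min()) if feasible.size else 1.0
    else:
        gamma = max(0.0, gamma - 0.05)  # explore downward until a violation

print("selected tightening parameter:", gamma)
```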
Related papers
- Trust-Region Sequential Quadratic Programming for Stochastic Optimization with Random Models [57.52124921268249]
We propose a Trust-Region Sequential Quadratic Programming method to find both first- and second-order stationary points.
To converge to first-order stationary points, our method computes a gradient step in each iteration, defined by minimizing a quadratic approximation of the objective subject to linearized constraints within a trust region.
To converge to second-order stationary points, our method additionally computes an eigen step to explore the negative curvature of the reduced Hessian matrix.
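
A heavily simplified, deterministic and unconstrained sketch of these two step types (the paper treats stochastic objectives with nonlinear constraints): a trust-region gradient step from a quadratic model, and an eigen step along the most negative curvature direction when the Hessian is indefinite. The toy objective and trust radius are illustrative assumptions.

```python
# Simplified sketch of the two step types (deterministic and unconstrained;
# the paper handles stochastic objectives with nonlinear constraints).
# Toy objective: f(x) = x1^4 - 2*x1^2 + x2^2 (saddle at the origin).
import numpy as np

def grad(x):  return np.array([4 * x[0]**3 - 4 * x[0], 2 * x[1]])
def hess(x):  return np.array([[12 * x[0]**2 - 4, 0.0], [0.0, 2.0]])

x, radius = np.array([0.0, 1.0]), 0.5      # start near the saddle point
for _ in range(100):
    g, H = grad(x), hess(x)
    eigvals, eigvecs = np.linalg.eigh(H)
    if np.linalg.norm(g) < 1e-8 and eigvals[0] >= -1e-8:
        break                               # second-order stationary point
    if eigvals[0] < -1e-8:
        # Eigen step: move along the most negative curvature direction,
        # signed to be a descent direction, scaled to the trust radius.
        v = eigvecs[:, 0]
        d = -np.sign(g @ v) * radius * v if abs(g @ v) > 1e-12 else radius * v
    else:
        # Gradient step: minimize the quadratic model g@d + 0.5*d@H@d along
        # -g inside the trust region (the Cauchy point).
        gHg = g @ H @ g
        t = radius / np.linalg.norm(g)
        if gHg > 0:
            t = min(t, (g @ g) / gHg)
        d = -t * g
    x = x + d

print("approximate second-order stationary point:", x)  # converges to (±1, 0)
```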
arXiv Detail & Related papers (2024-09-24T04:39:47Z)
- How Free is Parameter-Free Stochastic Optimization? [29.174036532175855]
We study the problem of parameter-free optimization, inquiring whether, and under what conditions, do fully parameter-free methods exist.
Existing methods can only be considered "partially" parameter-free, as they require some non-trivial knowledge of the true problem parameters.
We demonstrate that a simple hyperparameter search technique results in a fully parameter-free method that outperforms more sophisticated state-of-the-art algorithms.
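
A minimal sketch of the flavor of this result (not the paper's construction): wrapping plain gradient descent in a search over a geometric grid of learning rates yields a procedure with no user-supplied parameters. The toy quadratic objective and grid range are placeholder assumptions.

```python
# Minimal sketch: a hyperparameter search over learning rates turns tuned
# gradient descent into a procedure with no user-specified parameters.
import numpy as np

def loss(w):  return 0.5 * np.sum((w - 3.0) ** 2)   # toy objective
def grad(w):  return w - 3.0

def gd(lr, steps=200, dim=5):
    w = np.zeros(dim)
    for _ in range(steps):
        w -= lr * grad(w)
        if not np.isfinite(loss(w)):   # diverged: abandon this learning rate
            return w, np.inf
    return w, loss(w)

# Geometric grid: covers many orders of magnitude at logarithmic cost.
results = {lr: gd(lr) for lr in (2.0 ** -np.arange(0, 12))}
best_lr = min(results, key=lambda lr: results[lr][1])
print("best learning rate:", best_lr, "final loss:", results[best_lr][1])
```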
arXiv Detail & Related papers (2024-02-05T15:51:49Z)
- Primal-Dual Contextual Bayesian Optimization for Control System Online Optimization with Time-Average Constraints [21.38692458445459]
This paper studies the problem of online performance optimization of constrained closed-loop control systems.
A primal-dual contextual Bayesian optimization algorithm is proposed that achieves sublinear cumulative regret with respect to the dynamic optimal solution.
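
A stripped-down sketch of the primal-dual mechanism for a time-average constraint (omitting the paper's contextual Bayesian optimization surrogates): a dual variable accumulates constraint violation, and each round the controller minimizes the resulting Lagrangian. The cost, constraint, and step size below are illustrative assumptions.

```python
# Sketch of primal-dual online optimization under a time-average constraint
# (the paper additionally builds contextual Bayesian optimization surrogates).
import numpy as np

def cost(x):        return (x - 2.0) ** 2    # toy closed-loop cost
def constraint(x):  return x - 1.5           # want time-average(x_t) <= 1.5

candidates = np.linspace(0.0, 3.0, 301)
lam, eta = 0.0, 0.5                          # dual variable and dual step size
xs = []
for t in range(500):
    # Primal step: minimize the instantaneous Lagrangian over candidates.
    x = candidates[np.argmin(cost(candidates) + lam * constraint(candidates))]
    xs.append(x)
    # Dual step: projected gradient ascent on the constraint violation.
    lam = max(0.0, lam + eta * constraint(x))

print("time-average constraint value:", np.mean([constraint(x) for x in xs]))
```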
arXiv Detail & Related papers (2023-04-12T18:37:52Z)
- Margin theory for the scenario-based approach to robust optimization in high dimension [0.0]
This paper deals with the scenario approach to robust optimization.
The approach relies on randomly sampling the possibly infinite set of constraints induced by uncertainties in the problem.
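
A minimal sketch of the scenario approach for an uncertain linear constraint, assuming hypothetical problem data: draw N samples of the uncertain parameter and impose the constraint for every sampled scenario.

```python
# Minimal scenario-approach sketch: replace the robust constraint
# "a(u)^T x <= b for all u" by its sampled counterpart over N scenarios.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, N = 2, 500                       # dimension, number of sampled scenarios
c = np.array([-1.0, -1.0])          # maximize x1 + x2

# Each scenario perturbs the nominal constraint vector a0 = (1, 1).
A = np.ones((N, n)) + 0.2 * rng.standard_normal((N, n))
b = np.ones(N)

res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * n)
print("scenario solution:", res.x)  # feasible for all sampled scenarios
```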
arXiv Detail & Related papers (2023-03-07T13:33:46Z)
- On data-driven chance constraint learning for mixed-integer optimization problems [0.0]
We develop a Chance Constraint Learning (CCL) methodology with a focus on mixed-integer linear optimization problems.
CCL makes use of linearizable machine learning models to estimate conditional quantiles of the learned variables.
Open-access software has been developed for use by practitioners.
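
The core estimation step can be sketched as follows (the paper's contribution additionally linearizes such models for embedding into mixed-integer programs): fit a quantile regressor so the estimated conditional quantile can serve as a data-driven constraint bound. The data-generating process and the 95% level are illustrative assumptions.

```python
# Sketch of the quantile-estimation step in chance constraint learning:
# estimate the (1 - eps) conditional quantile of an uncertain quantity y(x);
# constraining decisions by this learned quantile approximately enforces
# P(y <= bound) >= 1 - eps. (The paper further linearizes such models so
# they can be embedded in mixed-integer programs.)
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(2000, 1))
y = 2.0 * X[:, 0] + rng.normal(0.0, 0.3 + 0.5 * X[:, 0])  # heteroscedastic

eps = 0.05
q_model = GradientBoostingRegressor(loss="quantile", alpha=1 - eps)
q_model.fit(X, y)

print("estimated 95% quantile at x = 0.5:", q_model.predict([[0.5]])[0])
```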
arXiv Detail & Related papers (2022-07-08T11:54:39Z)
- Error-based Knockoffs Inference for Controlled Feature Selection [49.99321384855201]
We propose an error-based knockoff inference method that integrates the knockoff features, the error-based feature importance statistics, and the stepdown procedure.
The proposed inference procedure does not require specifying a regression model and can handle feature selection with theoretical guarantees.
arXiv Detail & Related papers (2022-03-09T01:55:59Z)
- Gaussian Process Uniform Error Bounds with Unknown Hyperparameters for Safety-Critical Applications [71.23286211775084]
We introduce robust Gaussian process uniform error bounds in settings with unknown hyperparameters.
Our approach computes a confidence region in the space of hyperparameters, which enables us to obtain a probabilistic upper bound for the model error.
Experiments show that the bound performs significantly better than vanilla and fully Bayesian Gaussian processes.
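
In much simplified form, the robustification can be sketched by taking a worst case over a candidate set of kernel hyperparameters; the paper instead derives a rigorous confidence region and bound. The candidate length scales, data, and scaling factor beta below are illustrative assumptions.

```python
# Much-simplified sketch: robustify a GP error bound over a set of candidate
# hyperparameters by taking the worst case (the paper constructs a rigorous
# confidence region over hyperparameters instead of a fixed grid).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(30, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(30)

X_test = np.linspace(-3, 3, 200).reshape(-1, 1)
beta = 2.0                                      # illustrative scaling factor
candidate_length_scales = [0.3, 0.5, 1.0, 2.0]  # hyperparameter "region"

bounds = []
for ell in candidate_length_scales:
    gp = GaussianProcessRegressor(kernel=RBF(ell), alpha=0.05**2,
                                  optimizer=None).fit(X, y)
    _, std = gp.predict(X_test, return_std=True)
    bounds.append(beta * std)

robust_bound = np.max(bounds, axis=0)           # worst case over hyperparameters
print("max robust error bound:", robust_bound.max())
```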
arXiv Detail & Related papers (2021-09-06T17:10:01Z)
- On Lower Bounds for Standard and Robust Gaussian Process Bandit Optimization [55.937424268654645]
We consider algorithm-independent lower bounds for the problem of black-box optimization of functions having a bounded norm.
We provide a novel proof technique for deriving lower bounds on the regret, with benefits including simplicity, versatility, and an improved dependence on the error probability.
arXiv Detail & Related papers (2020-08-20T03:48:14Z)
- Conditional gradient methods for stochastically constrained convex minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
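
A minimal Frank-Wolfe-style sketch of processing only a subset of constraints per iteration (without the paper's variance reduction): each step, a random batch of smoothed constraint penalties contributes an unbiased gradient estimate, followed by a standard conditional gradient step over the simplex. All problem data and parameters are illustrative assumptions, not the paper's exact scheme.

```python
# Minimal sketch: Frank-Wolfe over the simplex, processing only a random
# subset of smoothed constraint penalties per iteration (the paper adds
# variance reduction and rigorous convergence guarantees).
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(0)
n, m = 10, 200
target = rng.uniform(0, 1, n)
A = rng.standard_normal((m, n))           # penalized constraints: A @ x <= b
b = 0.5 * np.ones(m)
rho, mu = 10.0, 0.1                       # penalty weight, smoothing parameter

def penalty_grad(x, idx):
    """Unbiased gradient estimate of the softplus-smoothed penalty sum,
    using only the sampled constraint subset idx."""
    sig = expit((A[idx] @ x - b[idx]) / mu)   # derivative of mu*softplus(./mu)
    return (m / len(idx)) * A[idx].T @ sig

x = np.ones(n) / n                        # start at the simplex barycenter
for t in range(1, 2001):
    idx = rng.choice(m, size=20, replace=False)
    g = 2.0 * (x - target) + rho * penalty_grad(x, idx)
    s = np.zeros(n); s[np.argmin(g)] = 1.0    # linear minimization oracle
    x += (2.0 / (t + 2)) * (s - x)            # standard Frank-Wolfe step

print("max constraint violation:", float((A @ x - b).max()))
```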
arXiv Detail & Related papers (2020-07-07T21:26:35Z)