Suboptimality analysis of receding horizon quadratic control with unknown linear systems and its applications in learning-based control
- URL: http://arxiv.org/abs/2301.07876v3
- Date: Mon, 16 Dec 2024 19:24:22 GMT
- Title: Suboptimality analysis of receding horizon quadratic control with unknown linear systems and its applications in learning-based control
- Authors: Shengling Shi, Anastasios Tsiamis, Bart De Schutter
- Abstract summary: We show that, in many cases, the prediction horizon that improves the control performance is either one or infinity.
We show that an adaptive prediction horizon that increases as a logarithmic function of time is beneficial for regret minimization.
- Score: 14.279848166377668
- License:
- Abstract: This work analyzes how the trade-off between the modeling error, the terminal value function error, and the prediction horizon affects the performance of a nominal receding-horizon linear quadratic (LQ) controller. By developing a novel perturbation result for the Riccati difference equation, we obtain a performance upper bound which suggests that, in many cases, the prediction horizon that improves the control performance is either one or infinity, depending on the relative difference between the modeling error and the terminal value function error. The result also shows that, when an infinite horizon is desired, a finite prediction horizon larger than the controllability index can be sufficient to achieve near-optimal performance, revealing a close relation between the prediction horizon and controllability. The obtained suboptimality bound is then applied to derive novel sample complexity and regret guarantees for nominal receding-horizon LQ controllers in a learning-based setting. We show that an adaptive prediction horizon that increases as a logarithmic function of time is beneficial for regret minimization.
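As a rough illustration of the setting described above (not the paper's analysis), the sketch below builds a nominal receding-horizon LQ controller by iterating the Riccati difference equation backward for N steps on an estimated model (A_hat, B_hat) with an approximate terminal value matrix P_term, and then runs the resulting first-step gain on the true system (A, B). All matrices, the noise level, and the horizon N are illustrative placeholders.
```python
import numpy as np

def rh_lq_gain(A_hat, B_hat, Q, R, P_term, N):
    """First-step feedback gain of the horizon-N LQ problem on the nominal
    model (A_hat, B_hat), obtained by iterating the Riccati difference
    equation backward from the terminal value matrix P_term."""
    P = P_term
    K = np.zeros((B_hat.shape[1], A_hat.shape[0]))
    for _ in range(N):
        K = np.linalg.solve(R + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)
        P = Q + A_hat.T @ P @ (A_hat - B_hat @ K)
    return K  # receding-horizon control law: u = -K x

# Illustrative placeholders: true system, estimated model, weights, horizon.
rng = np.random.default_rng(0)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
A_hat = A + 0.01 * rng.standard_normal((2, 2))   # modeling error
B_hat = B + 0.01 * rng.standard_normal((2, 1))
Q, R = np.eye(2), np.eye(1)
P_term = np.eye(2)    # approximate terminal value function
N = 5                 # prediction horizon

# For an LTI nominal model the receding-horizon gain is time-invariant,
# so it is computed once and then applied to the true system.
K = rh_lq_gain(A_hat, B_hat, Q, R, P_term, N)
x, cost = np.array([[1.0], [0.0]]), 0.0
for _ in range(100):
    u = -K @ x
    cost += (x.T @ Q @ x + u.T @ R @ u).item()
    x = A @ x + B @ u
print("closed-loop cost:", cost)
```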
Related papers
- Error-quantified Conformal Inference for Time Series [40.438171912992864]
Uncertainty quantification in time series prediction is challenging due to the temporal dependence and distribution shift on sequential data.
We propose Error-quantified Conformal Inference (ECI) by smoothing the quantile loss function.
ECI can achieve valid miscoverage control and output tighter prediction sets than other baselines.
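A minimal sketch of an online conformal interval with a coverage-feedback update (a generic scheme in the spirit of adaptive conformal inference, not the ECI method itself); the series, predictor, and step size gamma are illustrative.
```python
import numpy as np

def online_conformal_intervals(y, y_pred, alpha=0.1, gamma=0.05):
    """Generic online conformal intervals for a 1-D time series. The working
    miscoverage level alpha_t is nudged after each step depending on whether
    the previous interval covered the observation."""
    alpha_t, scores, lower, upper = alpha, [], [], []
    for t in range(len(y)):
        if scores:
            q = np.quantile(scores, min(1.0, max(0.0, 1 - alpha_t)))
        else:
            q = np.inf  # no calibration data yet
        lower.append(y_pred[t] - q)
        upper.append(y_pred[t] + q)
        err = float(not (lower[-1] <= y[t] <= upper[-1]))  # 1 if miscovered
        alpha_t += gamma * (alpha - err)                    # coverage feedback
        scores.append(abs(y[t] - y_pred[t]))                # absolute residual score
    return np.array(lower), np.array(upper)

# Toy usage with a noisy sine wave and a deliberately crude predictor.
rng = np.random.default_rng(0)
t = np.arange(300)
y = np.sin(0.1 * t) + 0.2 * rng.standard_normal(300)
y_pred = np.sin(0.1 * t)
lo, hi = online_conformal_intervals(y, y_pred)
print("empirical coverage:", np.mean((y >= lo) & (y <= hi)))
```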
arXiv Detail & Related papers (2025-02-02T15:02:36Z) - Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
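A minimal sketch of a plain two-point zeroth-order gradient estimator inside a gradient-free descent loop (not the accelerated algorithm of the paper); the objective and step size are illustrative.
```python
import numpy as np

rng = np.random.default_rng(0)

def two_point_grad(f, x, h=1e-3):
    """Two-point zeroth-order gradient estimate along a random unit direction;
    only function evaluations are used, mimicking the gradient-free setting."""
    u = rng.standard_normal(x.size)
    u /= np.linalg.norm(u)
    return x.size * (f(x + h * u) - f(x - h * u)) / (2 * h) * u

def f(x):
    """Toy smooth convex objective (illustrative only)."""
    return 0.5 * np.sum(x ** 2) + np.sum(np.log1p(np.exp(-x)))

x = np.ones(10)
for _ in range(2000):
    x -= 0.05 * two_point_grad(f, x)   # plain (non-accelerated) zeroth-order step
print("final objective:", f(x))
```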
arXiv Detail & Related papers (2024-11-21T10:26:17Z) - Loss Shaping Constraints for Long-Term Time Series Forecasting [79.3533114027664]
We present a Constrained Learning approach for long-term time series forecasting that respects a user-defined upper bound on the loss at each time-step.
We propose a practical primal-dual algorithm to tackle it and demonstrate that it achieves competitive average performance on time series benchmarks while shaping the errors across the predicted window.
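A minimal primal-dual sketch of the idea, assuming a linear multi-horizon forecaster and a per-time-step squared-error bound eps; the data, learning rates, and bound are illustrative, and the paper's actual algorithm may differ.
```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: predict the next H values of a noisy sine from the last W values.
W, H, n = 24, 8, 500
series = np.sin(0.2 * np.arange(n + W + H)) + 0.1 * rng.standard_normal(n + W + H)
X = np.stack([series[i:i + W] for i in range(n)])          # inputs, shape (n, W)
Y = np.stack([series[i + W:i + W + H] for i in range(n)])  # targets, shape (n, H)

Theta = np.zeros((W, H))              # linear multi-horizon forecaster
lam = np.zeros(H)                     # one dual variable per prediction step
eps, lr, lr_dual = 0.05, 1e-2, 1e-2   # per-step loss bound and step sizes

for it in range(3000):
    E = X @ Theta - Y                        # residuals, shape (n, H)
    step_loss = np.mean(E ** 2, axis=0)      # per-time-step MSE, shape (H,)
    # Primal step on the Lagrangian: average loss plus constraint terms.
    grad = 2 * X.T @ (E * (1.0 / H + lam)) / n
    Theta -= lr * grad
    # Dual ascent: lam_h grows while the step-h loss exceeds the bound eps.
    lam = np.maximum(0.0, lam + lr_dual * (step_loss - eps))

print("per-step MSE:", np.round(step_loss, 3))
print("dual variables:", np.round(lam, 3))
```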
arXiv Detail & Related papers (2024-02-14T18:20:44Z) - Dimensionality Collapse: Optimal Measurement Selection for Low-Error
Infinite-Horizon Forecasting [3.5788754401889022]
We solve the problem of sequential linear measurement design as an infinite-horizon problem with the time-averaged trace of the Cramér-Rao lower bound (CRLB) for forecasting as the cost.
By introducing theoretical results regarding measurements under additive noise from natural exponential families, we construct an equivalent problem from which a local dimensionality reduction can be derived.
This alternative formulation is based on the future collapse of dimensionality inherent in the limiting behavior of many differential equations and can be directly observed in the low-rank structure of the CRLB for forecasting.
arXiv Detail & Related papers (2023-03-27T17:25:04Z) - Calibrating Segmentation Networks with Margin-based Label Smoothing [19.669173092632]
We provide a unifying constrained-optimization perspective of current state-of-the-art calibration losses.
These losses could be viewed as approximations of a linear penalty imposing equality constraints on logit distances.
We propose a simple and flexible generalization based on inequality constraints, which imposes a controllable margin on logit distances.
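A minimal sketch of a margin penalty on logit distances added to cross-entropy, reflecting the inequality-constraint idea described above; the margin and penalty weight are illustrative, not the paper's values.
```python
import numpy as np

def margin_penalty_loss(logits, labels, margin=10.0, weight=0.1):
    """Cross-entropy plus a margin penalty on logit distances: the distances
    d_j = max_k z_k - z_j are only penalised when they exceed `margin`
    (an inequality constraint), instead of being pushed toward zero."""
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(labels)), labels].mean()
    dist = logits.max(axis=1, keepdims=True) - logits        # logit distances
    penalty = np.maximum(0.0, dist - margin).sum(axis=1).mean()
    return ce + weight * penalty

# Toy usage with random logits for a 10-class problem.
rng = np.random.default_rng(0)
logits = 5.0 * rng.standard_normal((32, 10))
labels = rng.integers(0, 10, size=32)
print("loss:", margin_penalty_loss(logits, labels))
```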
arXiv Detail & Related papers (2022-09-09T20:21:03Z) - Error-based Knockoffs Inference for Controlled Feature Selection [49.99321384855201]
We propose an error-based knockoff inference method by integrating the knockoff features, the error-based feature importance statistics, and the stepdown procedure together.
The proposed inference procedure does not require specifying a regression model and can handle feature selection with theoretical guarantees.
arXiv Detail & Related papers (2022-03-09T01:55:59Z) - Domain-Adjusted Regression or: ERM May Already Learn Features Sufficient
for Out-of-Distribution Generalization [52.7137956951533]
We argue that devising simpler methods for learning predictors on existing features is a promising direction for future research.
We introduce Domain-Adjusted Regression (DARE), a convex objective for learning a linear predictor that is provably robust under a new model of distribution shift.
Under a natural model, we prove that the DARE solution is the minimax-optimal predictor for a constrained set of test distributions.
arXiv Detail & Related papers (2022-02-14T16:42:16Z) - The Devil is in the Margin: Margin-based Label Smoothing for Network
Calibration [21.63888208442176]
Despite the dominant performance of deep neural networks, recent works have shown that they are poorly calibrated.
We provide a unifying constrained-optimization perspective of current state-of-the-art calibration losses.
We propose a simple and flexible generalization based on inequality constraints, which imposes a controllable margin on logit distances.
arXiv Detail & Related papers (2021-11-30T14:21:47Z) - Reinforcement Learning of the Prediction Horizon in Model Predictive
Control [1.536989504296526]
We propose to learn the optimal prediction horizon as a function of the state using reinforcement learning (RL).
We show how the RL learning problem can be formulated and test our method on two control tasks, showing clear improvements over the fixed horizon MPC scheme.
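A minimal sketch of the idea, assuming a double-integrator plant, a small set of candidate horizons as RL actions, and tabular Q-learning over a coarsely discretised state; the computation penalty in the reward and all numerical values are illustrative, not the paper's formulation.
```python
import numpy as np

rng = np.random.default_rng(0)

# Double integrator used both as the plant and as the MPC prediction model.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), 0.1 * np.eye(1)
horizons = [1, 2, 4, 8]          # candidate prediction horizons = RL actions

def horizon_gain(N):
    """Finite-horizon LQ gain via backward Riccati recursion (zero terminal cost)."""
    P, K = np.zeros((2, 2)), np.zeros((1, 2))
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

gains = [horizon_gain(N) for N in horizons]

def state_bin(x):
    """Very coarse state discretisation for the tabular Q-function."""
    return (int(np.clip(x[0, 0] // 1.0, -2, 2)) + 2,
            int(np.clip(x[1, 0] // 1.0, -2, 2)) + 2)

Qtab = np.zeros((5, 5, len(horizons)))
eps_greedy, lr, gamma, comp_penalty = 0.2, 0.1, 0.95, 0.01

for episode in range(200):
    x = np.array([[rng.uniform(-2, 2)], [rng.uniform(-2, 2)]])
    for t in range(50):
        s = state_bin(x)
        a = rng.integers(len(horizons)) if rng.random() < eps_greedy else int(np.argmax(Qtab[s]))
        u = -gains[a] @ x
        # Reward: negative LQ stage cost minus a penalty for longer horizons.
        reward = -(x.T @ Q @ x + u.T @ R @ u).item() - comp_penalty * horizons[a]
        x = A @ x + B @ u + 0.01 * rng.standard_normal((2, 1))
        s2 = state_bin(x)
        Qtab[s + (a,)] += lr * (reward + gamma * Qtab[s2].max() - Qtab[s + (a,)])

print("greedy horizon per state bin:\n", np.array(horizons)[np.argmax(Qtab, axis=-1)])
```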
arXiv Detail & Related papers (2021-02-22T15:52:32Z) - Regret-Optimal Filtering [57.51328978669528]
We consider the problem of filtering in linear state-space models through the lens of regret optimization.
We formulate a novel criterion for filter design based on the regret between the estimation error energy of a causal filter and that of a clairvoyant estimator with access to future observations.
We show that the regret-optimal estimator can be easily implemented by solving three Riccati equations and a single Lyapunov equation.
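SciPy's discrete Riccati and Lyapunov solvers are the generic building blocks for such constructions; the sketch below uses them only for a standard steady-state Kalman gain and a stationary state covariance on a toy model, not for the regret-optimal filter itself, and all numbers are illustrative.
```python
import numpy as np
from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

# Toy state-space model (illustrative values only).
A = np.array([[0.9, 0.1], [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
W = 0.1 * np.eye(2)     # process noise covariance
V = 0.2 * np.eye(1)     # measurement noise covariance

# A filtering Riccati equation: steady-state predicted error covariance.
# (solve_discrete_are solves the control-form DARE, so A and C are transposed.)
P = solve_discrete_are(A.T, C.T, W, V)
K = P @ C.T @ np.linalg.inv(C @ P @ C.T + V)   # steady-state Kalman gain

# A discrete Lyapunov equation: stationary state covariance Sigma = A Sigma A^T + W.
Sigma = solve_discrete_lyapunov(A, W)

print("Kalman gain:\n", K)
print("stationary state covariance:\n", Sigma)
```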
arXiv Detail & Related papers (2021-01-25T19:06:52Z) - Adaptive Control and Regret Minimization in Linear Quadratic Gaussian
(LQG) Setting [91.43582419264763]
We propose LqgOpt, a novel reinforcement learning algorithm based on the principle of optimism in the face of uncertainty.
LqgOpt efficiently explores the system dynamics, estimates the model parameters up to their confidence interval, and deploys the controller of the most optimistic model.
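A minimal sketch of the broader pipeline, assuming full state observation: random-input exploration, least-squares estimation of (A, B), and a certainty-equivalent LQR on the estimate; the optimistic model selection and confidence sets that define LqgOpt are omitted, and all numbers are illustrative.
```python
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)

# Unknown true system (full state measured here for simplicity; the LQG
# setting of the paper is partially observed).
A_true = np.array([[1.0, 0.1], [0.0, 0.9]])
B_true = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)

# 1) Explore with random inputs and record (x_t, u_t, x_{t+1}) triples.
x = np.zeros((2, 1))
X, U, Xn = [], [], []
for t in range(500):
    u = rng.standard_normal((1, 1))
    xn = A_true @ x + B_true @ u + 0.01 * rng.standard_normal((2, 1))
    X.append(x.ravel())
    U.append(u.ravel())
    Xn.append(xn.ravel())
    x = xn

# 2) Least-squares estimate of [A B] from x_{t+1} ~ A x_t + B u_t.
Z = np.hstack([np.array(X), np.array(U)])                   # (T, 3)
Theta, *_ = np.linalg.lstsq(Z, np.array(Xn), rcond=None)    # (3, 2)
A_hat, B_hat = Theta.T[:, :2], Theta.T[:, 2:]

# 3) Certainty-equivalent LQR on the estimate (LqgOpt would instead deploy the
#    controller of the most optimistic model in a confidence set, omitted here).
P = solve_discrete_are(A_hat, B_hat, Q, R)
K = np.linalg.solve(R + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)
print("estimated A:\n", np.round(A_hat, 3))
print("LQR gain:", np.round(K, 3))
```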
arXiv Detail & Related papers (2020-03-12T19:56:38Z)