A Heuristic for Dynamic Output Predictive Control Design for Uncertain
Nonlinear Systems
- URL: http://arxiv.org/abs/2102.02268v1
- Date: Wed, 3 Feb 2021 20:01:25 GMT
- Title: A Heuristic for Dynamic Output Predictive Control Design for Uncertain
Nonlinear Systems
- Authors: Mazen Alamir
- Abstract summary: An efficient construction of the learning data set is proposed in which each off-line solution provides many samples.
The proposed solution recovers up to 78% of the expected advantage of having perfect knowledge of the parameters compared to a nominal design.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, a simple heuristic is proposed for the design of
uncertainty-aware predictive controllers for nonlinear models involving
uncertain parameters. The method relies on a Machine Learning-based
approximation of ideal deterministic MPC solutions computed with perfectly
known parameters. An efficient construction of the learning data set from
these off-line solutions is proposed, in which each solution provides many
samples of the learning data. This enables a drastic reduction of the number
of Nonlinear Programming problems to be solved off-line while explicitly
exploiting the statistics of the parameter dispersion. The learning data is
then used to design a fast on-line output dynamic feedback law that
explicitly incorporates the statistics of the parameter dispersion. An
example is provided to illustrate the efficiency and the relevance of the
proposed framework. In particular, it is shown that the proposed solution
recovers up to 78% of the expected advantage of having perfect knowledge of
the parameters compared to a nominal design.
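To make the construction concrete, the following is a minimal sketch, on an assumed toy scalar system, of the two phases described above: an off-line phase in which each ideal closed-loop MPC run (computed with a perfectly known sampled parameter) contributes many samples to the learning set, and an on-line phase in which a cheap regressor fitted on those samples replaces the NLP solver. The dynamics, cost weights, sampling ranges and the scipy/scikit-learn choices are illustrative assumptions, not the paper's actual setting; in particular, the paper's feedback also ingests statistics of the parameter dispersion and a window of past measurements, which are omitted here for brevity.

```python
# Minimal sketch of the off-line learning-data construction (illustrative only,
# not the paper's exact formulation): each ideal closed-loop MPC run, computed
# with a perfectly known sampled parameter, contributes many samples, and a
# cheap regressor fitted on them plays the role of the fast on-line feedback.
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

dt, horizon, n_steps = 0.1, 10, 20       # assumed toy settings
rng = np.random.default_rng(0)

def step(x, u, p):                       # toy uncertain scalar dynamics
    return x + dt * (-p * x**3 + u)

def ideal_mpc(x0, p):                    # deterministic MPC with known parameter p
    def cost(u_seq):
        x, c = x0, 0.0
        for u in u_seq:
            x = step(x, u, p)
            c += x**2 + 0.01 * u**2
        return c
    u_opt = minimize(cost, np.zeros(horizon), method="SLSQP").x
    return u_opt[0]                      # receding horizon: apply the first move

# Off-line phase: few NLP-based closed-loop runs, many samples per run.
features, controls = [], []
for _ in range(10):                      # sampled parameter scenarios
    p = rng.uniform(0.5, 1.5)            # assumed parameter dispersion
    x = rng.uniform(-2.0, 2.0)
    for _ in range(n_steps):
        u = ideal_mpc(x, p)
        features.append([x])             # measured output (here y = x)
        controls.append(u)
        x = step(x, u, p)

# On-line surrogate: a cheap regression replacing the NLP at run time.
policy = make_pipeline(PolynomialFeatures(degree=3), Ridge(alpha=1e-3))
policy.fit(np.array(features), np.array(controls))
print("surrogate feedback at y = 1.0:", policy.predict([[1.0]])[0])
```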
Related papers
- Nonparametric Bellman Mappings for Reinforcement Learning: Application to Robust Adaptive Filtering [3.730504020733928]
This paper designs novel nonparametric Bellman mappings in reproducing kernel Hilbert spaces (RKHSs) for reinforcement learning (RL).
The proposed mappings benefit from the rich approximating properties of RKHSs, adopt no assumptions on the statistics of the data owing to their nonparametric nature, and may operate without any training data.
As an application, the proposed mappings are employed to offer a novel solution to the problem of countering outliers in adaptive filtering.
arXiv Detail & Related papers (2024-03-29T07:15:30Z)
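Loosely related background for the entry above: value-function learning with RKHS regressors can be illustrated with a generic kernel-based fitted Q-iteration on an assumed 1-D toy task. This is textbook machinery, not the paper's nonparametric Bellman mappings or their adaptive-filtering application; the dynamics, reward, and KernelRidge hyper-parameters are made up for illustration.

```python
# Generic kernel-based fitted Q-iteration on a toy 1-D task (illustration of
# RKHS regression inside a Bellman backup; NOT the paper's specific mappings).
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(1)
gamma, n_samples, n_iters = 0.9, 400, 30
actions = np.array([-1.0, 1.0])                 # two discrete actions

# Offline transitions (s, a, r, s') from a toy dynamics s' = s + 0.1*a + noise,
# with a reward that encourages the state to stay near the origin.
s = rng.uniform(-2.0, 2.0, n_samples)
a = rng.choice(actions, n_samples)
s_next = s + 0.1 * a + 0.05 * rng.standard_normal(n_samples)
r = -s_next**2

# One RKHS regressor per action approximates Q(., a).
models = {act: KernelRidge(kernel="rbf", alpha=1e-2, gamma=1.0) for act in actions}
q_next = np.zeros(n_samples)                    # Q estimate at s' (starts at 0)

for _ in range(n_iters):
    targets = r + gamma * q_next                # empirical Bellman backup
    for act in actions:
        mask = a == act
        models[act].fit(s[mask].reshape(-1, 1), targets[mask])
    # Re-evaluate max_a Q(s', a) with the refreshed regressors.
    q_next = np.max(
        np.column_stack([models[act].predict(s_next[:, None]) for act in actions]),
        axis=1,
    )

print("greedy action at s = 1.5:",
      actions[np.argmax([models[act].predict([[1.5]])[0] for act in actions])])
```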
- End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
arXiv Detail & Related papers (2024-02-12T16:33:35Z)
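For context on the entry above, the OWA aggregation itself is a rank-based weighted sum: objective values are sorted and weighted with non-increasing weights, which emphasizes the worst-off objectives and hence fairness. A minimal sketch with made-up weights and values:

```python
# Ordered Weighted Averaging (OWA): weight objective values by rank rather
# than by identity; non-increasing weights emphasize the worst outcomes.
import numpy as np

def owa(values, weights):
    """OWA aggregation: weights are applied to values sorted in decreasing order."""
    values = np.sort(np.asarray(values, dtype=float))[::-1]
    weights = np.asarray(weights, dtype=float)
    assert weights.shape == values.shape and np.all(np.diff(weights) <= 0)
    return float(weights @ values)

# Example: three stakeholders' costs under two candidate decisions.
w = np.array([0.6, 0.3, 0.1])            # non-increasing -> fairness-oriented
print(owa([4.0, 1.0, 1.0], w))           # 2.8: penalized for the large cost
print(owa([2.0, 2.0, 2.0], w))           # 2.0: balanced outcome preferred
```

The paper's contribution lies in making such nondifferentiable, rank-based objectives trainable end-to-end with a parametric predictor; the snippet only shows the aggregation itself.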
- Predict-Then-Optimize by Proxy: Learning Joint Models of Prediction and Optimization [59.386153202037086]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This approach can be inefficient and requires handcrafted, problem-specific rules for backpropagation through the optimization step.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by predictive models.
arXiv Detail & Related papers (2023-11-22T01:32:06Z)
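A hypothetical toy rendering of "learning solutions directly from features" for the entry above: optimization instances are solved exactly off-line, and a regressor is trained to map features straight to optimal solutions, so no backpropagation through the solver is needed. The tiny allocation LP, the linear ground-truth parameter map, and the Ridge proxy are all assumptions for illustration, not the paper's architecture.

```python
# Toy "predict-then-optimize by proxy" sketch (illustrative assumptions only):
# optimal solutions are computed off-line with an exact solver and a regressor
# then maps features straight to solutions, bypassing the solver at test time.
import numpy as np
from scipy.optimize import linprog
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n_items, n_train = 3, 200

def solve_allocation(c):
    """Maximize c @ x subject to sum(x) <= 1, 0 <= x_i <= 0.6 (a tiny LP)."""
    res = linprog(-c, A_ub=np.ones((1, n_items)), b_ub=[1.0],
                  bounds=[(0.0, 0.6)] * n_items, method="highs")
    return res.x

# Synthetic ground truth: unknown item values are a linear function of features.
W_true = rng.normal(size=(n_items, 5))
feats = rng.normal(size=(n_train, 5))
sols = np.array([solve_allocation(W_true @ f) for f in feats])

proxy = Ridge(alpha=1e-2).fit(feats, sols)      # features -> solution, directly

f_new = rng.normal(size=5)
print("proxy solution:", np.round(proxy.predict(f_new[None, :])[0], 3))
print("exact solution:", np.round(solve_allocation(W_true @ f_new), 3))
```

A real method would also have to ensure the predicted solutions are feasible (e.g., by projection or repair), which this sketch ignores.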
- Boosting Inference Efficiency: Unleashing the Power of Parameter-Shared Pre-trained Language Models [109.06052781040916]
We introduce a technique to enhance the inference efficiency of parameter-shared language models.
We also propose a simple pre-training technique that leads to fully or partially shared models.
Results demonstrate the effectiveness of our methods on both autoregressive and autoencoding PLMs.
arXiv Detail & Related papers (2023-10-19T15:13:58Z)
- Large-Scale OD Matrix Estimation with A Deep Learning Method [70.78575952309023]
The proposed method integrates deep learning and numerical optimization algorithms to infer matrix structure and guide numerical optimization.
We conducted tests to demonstrate the good generalization performance of our method on a large-scale synthetic dataset.
arXiv Detail & Related papers (2023-10-09T14:30:06Z)
- Robust identification of non-autonomous dynamical systems using stochastic dynamics models [0.0]
This paper considers the problem of system identification (ID) of linear and nonlinear non-autonomous systems from noisy and sparse data.
We propose and analyze an objective function derived from a Bayesian formulation for learning a hidden Markov model.
We show that our proposed approach has improved smoothness and inherent regularization that make it well-suited for system ID.
arXiv Detail & Related papers (2022-12-20T16:36:23Z)
- Reduced order modeling of parametrized systems through autoencoders and SINDy approach: continuation of periodic solutions [0.0]
This work presents a data-driven, non-intrusive framework which combines ROM construction with reduced dynamics identification.
The proposed approach leverages autoencoder neural networks with parametric sparse identification of nonlinear dynamics (SINDy) to construct a low-dimensional dynamical model.
The framework aims at tracking the evolution of periodic steady-state responses as functions of the system parameters, avoiding the computation of the transient phase and allowing instabilities and bifurcations to be detected.
arXiv Detail & Related papers (2022-11-13T01:57:18Z)
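As background for the SINDy ingredient mentioned above, here is the generic sequential-thresholded-least-squares identification step on an assumed toy system: evaluate a library of candidate nonlinear terms on the data and solve a sparsity-promoting regression for the time derivatives. It is the textbook SINDy step only, not the paper's autoencoder-based reduced-order pipeline or its continuation of periodic solutions.

```python
# Generic SINDy-style sparse identification via sequential thresholded least
# squares (textbook version; not the paper's autoencoder-based ROM pipeline).
import numpy as np

rng = np.random.default_rng(3)

def true_rhs(z):                         # hidden toy dynamics to be rediscovered
    x, y = z
    return np.array([-0.1 * x**3 + 2.0 * y**3, -2.0 * x**3 - 0.1 * y**3])

# Sample states and their derivatives (a real run would use measured trajectories).
Z = rng.uniform(-1.0, 1.0, size=(500, 2))
dZ = np.array([true_rhs(z) for z in Z])

def library(Z):                          # candidate terms: 1, x, y, x^3, y^3, xy
    x, y = Z[:, 0], Z[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x**3, y**3, x * y])

names = ["1", "x", "y", "x^3", "y^3", "xy"]
Theta = library(Z)

# Sequential thresholded least squares: iteratively zero out small coefficients.
Xi = np.linalg.lstsq(Theta, dZ, rcond=None)[0]
for _ in range(10):
    small = np.abs(Xi) < 0.05
    Xi[small] = 0.0
    for j in range(dZ.shape[1]):         # refit each equation on surviving terms
        big = ~small[:, j]
        if big.any():
            Xi[big, j] = np.linalg.lstsq(Theta[:, big], dZ[:, j], rcond=None)[0]

for j, var in enumerate(["dx/dt", "dy/dt"]):
    terms = [f"{Xi[i, j]:+.2f}*{names[i]}" for i in range(len(names)) if Xi[i, j] != 0]
    print(var, "=", " ".join(terms))
```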
- Pessimistic Q-Learning for Offline Reinforcement Learning: Towards Optimal Sample Complexity [51.476337785345436]
We study a pessimistic variant of Q-learning in the context of finite-horizon Markov decision processes.
A variance-reduced pessimistic Q-learning algorithm is proposed to achieve near-optimal sample complexity.
arXiv Detail & Related papers (2022-02-28T15:39:36Z)
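The pessimism principle behind the entry above can be illustrated in tabular form: subtract a count-based lower-confidence-bound penalty from every Q-target so that state-action pairs that the offline batch rarely covers are not over-valued. The toy MDP, behaviour policy, and penalty schedule below are assumptions for illustration; this is the generic LCB mechanism, not the paper's variance-reduced algorithm or its sample-complexity analysis.

```python
# Tabular pessimistic Q-learning on a toy 2-state, 2-action offline dataset:
# a count-based penalty (LCB) discourages actions the batch rarely covers.
import numpy as np

rng = np.random.default_rng(4)
n_states, n_actions, gamma, lr, beta = 2, 2, 0.9, 0.1, 1.0

def env_step(s, a):                       # hidden MDP used only to generate data
    r = 1.0 if (s == 0 and a == 1) else 0.1
    s_next = 1 - s if a == 1 else s
    return r, s_next

# Offline batch from a behaviour policy that rarely tries action 1 in state 1.
batch = []
for _ in range(2000):
    s = rng.integers(n_states)
    a = rng.choice(n_actions, p=[0.9, 0.1]) if s == 1 else rng.integers(n_actions)
    r, s_next = env_step(s, a)
    batch.append((s, a, r, s_next))

counts = np.zeros((n_states, n_actions))
for s, a, _, _ in batch:
    counts[s, a] += 1

Q = np.zeros((n_states, n_actions))
for _ in range(50):                       # sweeps over the fixed batch
    for s, a, r, s_next in batch:
        penalty = beta / np.sqrt(max(counts[s, a], 1.0))   # pessimism term
        target = r - penalty + gamma * Q[s_next].max()
        Q[s, a] += lr * (target - Q[s, a])

print("pessimistic Q:\n", np.round(Q, 2))
print("greedy policy:", Q.argmax(axis=1))
```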
- Extension of Dynamic Mode Decomposition for dynamic systems with incomplete information based on t-model of optimal prediction [69.81996031777717]
The Dynamic Mode Decomposition has proved to be a very efficient technique to study dynamic data.
The application of this approach becomes problematic if the available data is incomplete because some smaller-scale dimensions are either missing or unmeasured.
We consider a first-order approximation of the Mori-Zwanzig decomposition, state the corresponding optimization problem and solve it with the gradient-based optimization method.
arXiv Detail & Related papers (2022-02-23T11:23:59Z)
- Variational Nonlinear System Identification [0.8793721044482611]
This paper considers parameter estimation for nonlinear state-space models, which is an important but challenging problem.
We employ a variational inference (VI) approach, which is a principled method that has deep connections to maximum likelihood estimation.
This VI approach ultimately provides estimates of the model as solutions to an optimisation problem, which is deterministic, tractable and can be solved using standard optimisation tools.
arXiv Detail & Related papers (2020-12-08T05:43:50Z)
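To illustrate the VI mechanics referred to above, here is a toy sketch in which a mean-field Gaussian q(theta) is fitted to a nonlinear regression model by maximizing a reparameterized Monte Carlo estimate of the ELBO with a standard optimizer. The static model, prior, and fixed base samples are assumptions chosen to keep the sketch deterministic and short; the paper's actual setting involves latent state trajectories of nonlinear state-space models, which requires a richer variational family.

```python
# Toy variational inference for parameter estimation (a static nonlinear model
# stands in for the state-space setting of the paper): a Gaussian q(theta) is
# fitted by maximizing a reparameterized Monte Carlo estimate of the ELBO.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
x = np.linspace(0.0, 3.0, 40)
theta_true = np.array([2.0, 1.5])
sigma = 0.1                                    # known measurement noise std
y = theta_true[0] * (1.0 - np.exp(-theta_true[1] * x)) + sigma * rng.standard_normal(x.size)

def model(theta):
    return theta[0] * (1.0 - np.exp(-theta[1] * x))

def log_joint(theta):                          # log-likelihood + N(0, 10^2) prior
    resid = y - model(theta)
    return -0.5 * np.sum(resid**2) / sigma**2 - 0.5 * np.sum(theta**2) / 100.0

eps = rng.standard_normal((64, 2))             # fixed base samples (common random numbers)

def neg_elbo(phi):                             # phi = [m1, m2, log_s1, log_s2]
    m, log_s = phi[:2], phi[2:]
    thetas = m + np.exp(log_s) * eps           # reparameterization trick
    expected_log_joint = np.mean([log_joint(t) for t in thetas])
    entropy = np.sum(log_s)                    # Gaussian entropy up to a constant
    return -(expected_log_joint + entropy)

phi0 = np.array([0.5, 0.5, -1.0, -1.0])
res = minimize(neg_elbo, phi0, method="BFGS")
print("posterior mean:", np.round(res.x[:2], 3), " true:", theta_true)
print("posterior std :", np.round(np.exp(res.x[2:]), 3))
```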
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.