Continuous Lyapunov Controller and Chaotic Non-linear System
Optimization using Deep Machine Learning
- URL: http://arxiv.org/abs/2010.14746v4
- Date: Sun, 31 Oct 2021 17:07:31 GMT
- Title: Continuous Lyapunov Controller and Chaotic Non-linear System
Optimization using Deep Machine Learning
- Authors: Amr Mahmoud, Youmna Ismaeil and Mohamed Zohdy
- Abstract summary: We present a novel approach for detecting early failure indicators of a non-linear, highly chaotic system.
The proposed approach continuously monitors the system and controller signals.
The deep neural model predicts the parameter values that would best counteract the expected system instability.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The introduction of unexpected system disturbances and new system dynamics
does not allow guaranteed continuous system stability. In this research we present a
novel approach for detecting early failure indicators of non-linear, highly chaotic
systems and accordingly predicting the best parameter calibrations to offset such
instability, using a deep machine learning regression model. The proposed approach
continuously monitors the system and controller signals. Re-calibration of the system
and controller parameters is triggered according to a set of conditions designed to
maintain system stability without compromising the system's speed, intended outcome,
or required processing power. The deep neural model predicts the parameter values that
would best counteract the expected system instability. To demonstrate the effectiveness
of the proposed approach, it is applied to the non-linear complex combination of Duffing
and Van der Pol oscillators. The approach is also tested under different scenarios in
which the system and controller parameters are initially chosen incorrectly, the system
parameters are changed while running, or new system dynamics are introduced while
running, in order to measure effectiveness and reaction time.
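
To make the workflow described in the abstract more concrete, the sketch below simulates one common form of the combined Duffing-Van der Pol oscillator, monitors a crude instability indicator over a sliding window, and queries a small regression network for replacement parameter values when the trigger fires. This is a minimal illustration, not the authors' implementation: the oscillator form, the monitored features, the trigger threshold, the choice of re-calibrated parameters (mu and gamma), and the network architecture are all assumptions, and the network would need to be trained offline before use.

```python
# Hedged sketch (not the authors' code): simulate a combined Duffing-Van der Pol
# oscillator, watch for an instability indicator, and ask a small regression
# network for new parameter values when the indicator trips.
import numpy as np
import torch
import torch.nn as nn
from scipy.integrate import solve_ivp

def duffing_vdp(t, state, alpha, beta, mu, gamma, omega):
    """One common combined Duffing-Van der Pol form (an assumption):
    x'' - mu*(1 - x**2)*x' + alpha*x + beta*x**3 = gamma*cos(omega*t)."""
    x, v = state
    dvdt = mu * (1.0 - x ** 2) * v - alpha * x - beta * x ** 3 + gamma * np.cos(omega * t)
    return [v, dvdt]

class ParamRegressor(nn.Module):
    """Small MLP mapping monitored signal features to suggested parameter values.
    Architecture and feature set are illustrative assumptions."""
    def __init__(self, n_features=4, n_params=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_params),
        )
    def forward(self, x):
        return self.net(x)

def instability_indicator(window):
    """Crude early-failure indicator: growth of the response envelope across a
    sliding window (a stand-in for the paper's trigger conditions)."""
    half = len(window) // 2
    return np.max(np.abs(window[half:])) / (np.max(np.abs(window[:half])) + 1e-9)

model = ParamRegressor()   # in practice, trained offline on simulated instability episodes
params = dict(alpha=-1.0, beta=1.0, mu=0.5, gamma=0.3, omega=1.2)
state, t0, dt, history = [0.1, 0.0], 0.0, 0.5, []

for step in range(200):
    sol = solve_ivp(duffing_vdp, (t0, t0 + dt), state,
                    args=tuple(params.values()), max_step=0.01)
    state, t0 = sol.y[:, -1].tolist(), t0 + dt
    history.extend(sol.y[0])

    window = np.asarray(history[-400:])
    if len(window) == 400 and instability_indicator(window) > 1.5:
        # Query the regressor for replacement values of the re-calibrated
        # parameters (here mu and gamma, chosen purely for illustration).
        feats = torch.tensor([[window.mean(), window.std(),
                               np.abs(window).max(), state[1]]], dtype=torch.float32)
        with torch.no_grad():
            mu_new, gamma_new = model(feats).squeeze(0).tolist()
        params["mu"], params["gamma"] = float(mu_new), float(gamma_new)
```

In a real deployment the trigger conditions would also need to account for the system's speed, intended outcome, and processing budget, as the abstract notes.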
Related papers
- Iterative Learning Control of Fast, Nonlinear, Oscillatory Dynamics (Preprint) [0.0]
The dynamics of interest are nonlinear, chaotic, and often too fast for active control schemes.
We develop an alternative active controls system using an iterative, trajectory-optimization and parameter-tuning approach.
We demonstrate that the controller is robust to missing information and uncontrollable parameters as long as certain requirements are met.
arXiv Detail & Related papers (2024-05-30T13:27:17Z)
- Parameter-Adaptive Approximate MPC: Tuning Neural-Network Controllers without Retraining [50.00291020618743]
This work introduces a novel, parameter-adaptive AMPC architecture capable of online tuning without recomputing large datasets and retraining.
We showcase the effectiveness of parameter-adaptive AMPC by controlling the swing-ups of two different real cartpole systems with a severely resource-constrained microcontroller (MCU).
Taken together, these contributions represent a marked step toward the practical application of AMPC in real-world systems.
arXiv Detail & Related papers (2024-04-08T20:02:19Z)
- Runtime Monitoring and Fault Detection for Neural Network-Controlled Systems [4.749824105387292]
This paper considers enhancing the runtime safety of nonlinear systems controlled by neural networks in the presence of disturbance and measurement noise.
A robustly stable interval observer is designed to generate sound and precise lower and upper bounds for the neural network, nonlinear function, and system state (a small bound-propagation sketch is given after this list).
arXiv Detail & Related papers (2024-03-24T13:03:27Z)
- Optimal Exploration for Model-Based RL in Nonlinear Systems [14.540210895533937]
Learning to control unknown nonlinear dynamical systems is a fundamental problem in reinforcement learning and control theory.
We develop an algorithm able to efficiently explore the system to reduce uncertainty in a task-dependent metric.
Our algorithm relies on a general reduction from policy optimization to optimal experiment design in arbitrary systems, and may be of independent interest.
arXiv Detail & Related papers (2023-06-15T15:47:50Z)
- Stability Bounds for Learning-Based Adaptive Control of Discrete-Time Multi-Dimensional Stochastic Linear Systems with Input Constraints [3.8004168340068336]
We consider the problem of adaptive stabilization for discrete-time, multi-dimensional systems with bounded control input constraints and unbounded disturbances.
We propose a certainty-equivalent control scheme which combines online parameter estimation with saturated linear control.
arXiv Detail & Related papers (2023-04-02T16:38:13Z)
- A Priori Denoising Strategies for Sparse Identification of Nonlinear Dynamical Systems: A Comparative Study [68.8204255655161]
We investigate and compare the performance of several local and global smoothing techniques to a priori denoise the state measurements.
We show that, in general, global methods, which use the entire measurement data set, outperform local methods, which employ a neighboring data subset around a local point.
arXiv Detail & Related papers (2022-01-29T23:31:25Z)
- Supervised DKRC with Images for Offline System Identification [77.34726150561087]
Modern dynamical systems are becoming increasingly non-linear and complex.
There is a need for a framework to model these systems in a compact and comprehensive representation for prediction and control.
Our approach learns these basis functions via supervised learning.
arXiv Detail & Related papers (2021-09-06T04:39:06Z)
- Concurrent Learning Based Tracking Control of Nonlinear Systems using Gaussian Process [2.7930955543692817]
This paper demonstrates the applicability of the combination of concurrent learning as a tool for parameter estimation and non-parametric Gaussian Process for online disturbance learning.
A control law is developed by using both techniques sequentially in the context of feedback linearization.
The closed-loop system stability for the nth-order system is proven using the Lyapunov stability theorem.
arXiv Detail & Related papers (2021-06-02T02:59:48Z)
- Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
arXiv Detail & Related papers (2021-05-17T08:36:18Z)
- Stability and Identification of Random Asynchronous Linear Time-Invariant Systems [81.02274958043883]
We show the additional benefits of randomization and asynchrony on the stability of linear dynamical systems.
For unknown randomized LTI systems, we propose a systematic identification method to recover the underlying dynamics.
arXiv Detail & Related papers (2020-12-08T02:00:04Z)
- Adaptive Control and Regret Minimization in Linear Quadratic Gaussian (LQG) Setting [91.43582419264763]
We propose LqgOpt, a novel reinforcement learning algorithm based on the principle of optimism in the face of uncertainty.
LqgOpt efficiently explores the system dynamics, estimates the model parameters up to their confidence interval, and deploys the controller of the most optimistic model.
arXiv Detail & Related papers (2020-03-12T19:56:38Z)
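
The "Runtime Monitoring and Fault Detection for Neural Network-Controlled Systems" entry above mentions sound lower and upper bounds for a neural network's output. A standard way to obtain such elementwise bounds is interval bound propagation; the sketch below shows the idea for a small ReLU network under the assumption of a boxed state estimate. It is illustrative only and does not reproduce that paper's interval observer design.

```python
# Illustrative interval bound propagation (IBP) through a small ReLU network.
# This is NOT the cited paper's interval observer; it only shows how sound
# elementwise output bounds can follow from a boxed input estimate.
import numpy as np

def linear_bounds(W, b, lo, hi):
    """Bounds of W @ x + b when each x[i] lies in [lo[i], hi[i]]."""
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    y_center = W @ center + b
    y_radius = np.abs(W) @ radius          # worst-case spread per output
    return y_center - y_radius, y_center + y_radius

def relu_bounds(lo, hi):
    """ReLU is monotone, so it may be applied to the bounds directly."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def network_bounds(layers, lo, hi):
    """Propagate an input box through alternating linear/ReLU layers."""
    for i, (W, b) in enumerate(layers):
        lo, hi = linear_bounds(W, b, lo, hi)
        if i < len(layers) - 1:            # no activation after the last layer
            lo, hi = relu_bounds(lo, hi)
    return lo, hi

# Example: a random 2-layer "controller" network, state known to within +/- 0.05.
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((16, 4)), np.zeros(16)),
          (rng.standard_normal((1, 16)), np.zeros(1))]
x_lo, x_hi = np.full(4, -0.05), np.full(4, 0.05)
u_lo, u_hi = network_bounds(layers, x_lo, x_hi)
print("control output guaranteed within", u_lo, u_hi)
```

A runtime monitor can then raise a fault whenever the observed control signal or state leaves the propagated interval.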
This list is automatically generated from the titles and abstracts of the papers on this site.