Nonlinear Control Allocation: A Learning Based Approach
- URL: http://arxiv.org/abs/2201.06180v2
- Date: Wed, 27 Mar 2024 16:45:26 GMT
- Title: Nonlinear Control Allocation: A Learning Based Approach
- Authors: Hafiz Zeeshan Iqbal Khan, Surrayya Mobeen, Jahanzeb Rajput, Jamshed Riaz
- Abstract summary: Modern aircraft are designed with redundant control effectors to cater for fault tolerance and maneuverability requirements.
This leads to aircraft being over-actuated and requires control allocation schemes to distribute the control commands among control effectors.
Traditionally, optimization-based control allocation schemes are used; however, for nonlinear allocation problems, these methods require large computational resources.
In this work, an artificial neural network (ANN) based nonlinear control allocation scheme is proposed.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Modern aircraft are designed with redundant control effectors to cater for fault tolerance and maneuverability requirements. This leads to aircraft being over-actuated and requires control allocation schemes to distribute the control commands among control effectors. Traditionally, optimization-based control allocation schemes are used; however, for nonlinear allocation problems, these methods require large computational resources. In this work, an artificial neural network (ANN) based nonlinear control allocation scheme is proposed. The proposed scheme is composed of learning the inverse of the control effectiveness map through ANN, and then implementing it as an allocator instead of solving an online optimization problem. Stability conditions are presented for closed-loop systems incorporating the allocator, and computational challenges are explored with piece-wise linear effectiveness functions and ANN-based allocators. To demonstrate the efficacy of the proposed scheme, it is compared with a standard quadratic programming-based method for control allocation.
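The abstract's core idea, learning the inverse of the control effectiveness map and using the trained network as the allocator, can be illustrated with a toy scalar example. This is a minimal hand-rolled sketch, not the paper's implementation: the effectiveness map `g` and the one-hidden-layer numpy MLP are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(delta):
    # hypothetical scalar effectiveness map (saturating effector)
    return np.tanh(delta) + 0.1 * delta

# sample deflections, compute achieved moments, then fit the reverse
# mapping moment -> deflection with a one-hidden-layer MLP
delta = rng.uniform(-2.0, 2.0, size=(2000, 1))
moment = g(delta)

W1 = rng.normal(0.0, 0.5, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.5, (32, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(5000):
    h = np.tanh(moment @ W1 + b1)            # forward pass
    pred = h @ W2 + b2
    err = pred - delta                       # gradient of 0.5*MSE w.r.t. pred
    dW2 = h.T @ err / len(err); db2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h**2)         # backprop through tanh
    dW1 = moment.T @ dh / len(err); db1 = dh.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

def allocator(m):
    """ANN-based allocator: commanded moment -> effector deflection."""
    return np.tanh(np.atleast_2d(m) @ W1 + b1) @ W2 + b2

d = allocator(0.8).item()    # deflection proposed for a commanded moment
print(abs(g(d) - 0.8))       # residual between achieved and commanded moment
```

At run time the trained network replaces the online optimization: allocation is a single forward pass rather than a solve per control step, which is the computational advantage the abstract claims.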
Related papers
- Resource Optimization for Tail-Based Control in Wireless Networked Control Systems [31.144888314890597]
Achieving control stability is one of the key design challenges of scalable Wireless Networked Control Systems.
This paper explores the use of an alternative control concept defined as tail-based control, which extends the classical Linear Quadratic Regulator (LQR) cost function for multiple dynamic control systems over a shared wireless network.
arXiv Detail & Related papers (2024-06-20T13:27:44Z)
- Model-Free Load Frequency Control of Nonlinear Power Systems Based on Deep Reinforcement Learning [29.643278858113266]
This paper proposes a model-free LFC method for nonlinear power systems based on deep deterministic policy gradient (DDPG) framework.
The controller can generate appropriate control actions and has strong adaptability for nonlinear power systems.
arXiv Detail & Related papers (2024-03-07T10:06:46Z)
- Reinforcement Learning with Model Predictive Control for Highway Ramp Metering [14.389086937116582]
This work explores the synergy between model-based and learning-based strategies to enhance traffic flow management.
The control problem is formulated as an RL task by crafting a suitable stage cost function.
An MPC-based RL approach, which leverages the MPC optimal problem as a function approximation for the RL algorithm, is proposed to learn to efficiently control an on-ramp.
arXiv Detail & Related papers (2023-11-15T09:50:54Z)
- Safe Neural Control for Non-Affine Control Systems with Differentiable Control Barrier Functions [58.19198103790931]
This paper addresses the problem of safety-critical control for non-affine control systems.
It has been shown that optimizing quadratic costs subject to state and control constraints can be sub-optimally reduced to a sequence of quadratic programs (QPs) by using Control Barrier Functions (CBFs).
We incorporate higher-order CBFs into neural ordinary differential equation-based learning models as differentiable CBFs to guarantee safety for non-affine control systems.
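The CBF safety-filter idea behind this entry can be sketched in a simple control-affine case (not the paper's non-affine, neural-ODE construction): given a nominal input, solve `min ||u - u_nom||^2` subject to `dh/dx * (f + u) + alpha * h >= 0`, which for a single scalar constraint has a closed-form projection. The system and safe set below are illustrative assumptions.

```python
def cbf_filter(x, u_nom, alpha=1.0):
    # toy system: x_dot = -x + u ; safe set h(x) = 1 - x^2 >= 0 (|x| <= 1)
    h = 1.0 - x**2
    dh = -2.0 * x                 # dh/dx
    # CBF condition: dh * (-x + u) + alpha * h >= 0, rewritten as a*u >= b
    a = dh
    b = -dh * (-x) - alpha * h
    if a * u_nom >= b:            # nominal input already satisfies the CBF
        return u_nom
    # 1-D projection onto the constraint boundary (assumes a != 0 when
    # the constraint is violated, which holds away from x = 0 here)
    return b / a

# near the boundary, a large nominal input is scaled back to keep h >= 0
print(cbf_filter(0.9, 2.0))
```

Far from the boundary the filter is inactive and passes `u_nom` through unchanged; the correction only engages as `h(x)` shrinks.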
arXiv Detail & Related papers (2023-09-06T05:35:48Z)
- Deep Learning for Wireless Networked Systems: a joint Estimation-Control-Scheduling Approach [47.29474858956844]
Wireless networked control system (WNCS) connecting sensors, controllers, and actuators via wireless communications is a key enabling technology for highly scalable and low-cost deployment of control systems in the Industry 4.0 era.
Despite the tight interaction of control and communications in WNCSs, most existing works adopt separative design approaches.
We propose a novel deep reinforcement learning (DRL)-based algorithm for controller design and optimization, utilizing both model-free and model-based data.
arXiv Detail & Related papers (2022-10-03T01:29:40Z)
- Deep Koopman Operator with Control for Nonlinear Systems [44.472875714432504]
We propose an end-to-end deep learning framework to learn the Koopman embedding function and Koopman Operator.
We first parameterize the embedding function and Koopman Operator with the neural network and train them end-to-end with the K-steps loss function.
We then design an auxiliary control network to encode the nonlinear state-dependent control term to model the nonlinearity in control input.
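The Koopman-with-control recipe above can be sketched in simplified form: instead of a learned embedding trained end-to-end, use a fixed polynomial lifting `phi(x)` and fit linear matrices `A`, `B` by least squares so that `phi(x_{k+1}) ≈ A phi(x_k) + B u_k` (EDMD with control, a stand-in for the paper's neural-network embedding; the dynamics and dictionary are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(1)

def dynamics(x, u):
    # hypothetical nonlinear scalar system used to generate data
    return 0.9 * x - 0.1 * x**3 + 0.5 * u

def lift(x):
    # fixed dictionary in place of a learned embedding: [x, x^2, x^3]
    return np.stack([x, x**2, x**3], axis=-1)

# collect one input-driven trajectory
X, U = [rng.uniform(-1.0, 1.0)], []
for _ in range(500):
    u = rng.uniform(-0.5, 0.5)
    U.append(u)
    X.append(dynamics(X[-1], u))
X, U = np.array(X), np.array(U)

Z, Zn = lift(X[:-1]), lift(X[1:])            # lifted state pairs
G = np.hstack([Z, U[:, None]])               # regressors [phi(x_k), u_k]
K, *_ = np.linalg.lstsq(G, Zn, rcond=None)   # solve Zn ≈ G @ K
A, B = K[:3].T, K[3:].T                      # linear Koopman-style matrices

# one-step prediction in lifted space; the first coordinate recovers x
z_pred = lift(X[0]) @ A.T + U[0] * B.ravel()
print(abs(z_pred[0] - X[1]))
```

Because the chosen dynamics are themselves linear in `[x, x^3, u]`, the first lifted coordinate is predicted essentially exactly here; a learned embedding, as in the paper, is what makes this work for systems outside the dictionary's span.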
arXiv Detail & Related papers (2022-02-16T11:40:36Z)
- Stable Online Control of Linear Time-Varying Systems [49.41696101740271]
COCO-LQ is an efficient online control algorithm that guarantees input-to-state stability for a large class of LTV systems.
We empirically demonstrate the performance of COCO-LQ in both synthetic experiments and a power system frequency control example.
arXiv Detail & Related papers (2021-04-29T06:18:49Z)
- Data-Driven Optimized Tracking Control Heuristic for MIMO Structures: A Balance System Case Study [8.035375408614776]
The PID-based heuristic is illustrated on a two-input two-output balance system. It integrates a self-adjusting nonlinear threshold with a neural network to trade off the desired transient and steady-state characteristics. The neural network is trained by optimizing a weighted-derivative-like objective cost function.
arXiv Detail & Related papers (2021-04-01T02:00:20Z)
- Anticipating the Long-Term Effect of Online Learning in Control [75.6527644813815]
AntLer is a design algorithm for learning-based control laws that anticipates learning.
We show that AntLer approximates an optimal solution arbitrarily accurately with probability one.
arXiv Detail & Related papers (2020-07-24T07:00:14Z)
- Adaptive Control and Regret Minimization in Linear Quadratic Gaussian (LQG) Setting [91.43582419264763]
We propose LqgOpt, a novel reinforcement learning algorithm based on the principle of optimism in the face of uncertainty.
LqgOpt efficiently explores the system dynamics, estimates the model parameters up to their confidence interval, and deploys the controller of the most optimistic model.
arXiv Detail & Related papers (2020-03-12T19:56:38Z)
- Certified Reinforcement Learning with Logic Guidance [78.2286146954051]
We propose a model-free RL algorithm that enables the use of Linear Temporal Logic (LTL) to formulate a goal for unknown continuous-state/action Markov Decision Processes (MDPs).
The algorithm is guaranteed to synthesise a control policy whose traces satisfy the specification with maximal probability.
arXiv Detail & Related papers (2019-02-02T20:09:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.