Improving Action Smoothness for a Cascaded Online Learning Flight Control System
- URL: http://arxiv.org/abs/2507.04346v1
- Date: Sun, 06 Jul 2025 11:19:34 GMT
- Title: Improving Action Smoothness for a Cascaded Online Learning Flight Control System
- Authors: Yifei Li, Erik-Jan van Kampen
- Abstract summary: We introduce an online temporal smoothness technique and a low-pass filter to reduce the amplitude and frequency of the control actions. Simulation results demonstrate the improvements achieved by the two proposed techniques.
- Score: 7.871518182413388
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: This paper aims to improve the action smoothness of a cascaded online learning flight control system. Although the cascaded structure is widely used in flight control design, its stability can be compromised by oscillatory control actions, which poses challenges for practical engineering applications. To address this issue, we introduce an online temporal smoothness technique and a low-pass filter to reduce the amplitude and frequency of the control actions. Fast Fourier Transform (FFT) is used to analyze policy performance in the frequency domain. Simulation results demonstrate the improvements achieved by the two proposed techniques.
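The two smoothing ingredients named in the abstract, a low-pass filter on the control actions and an FFT-based frequency-domain check, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the filter coefficient `alpha`, the sample time, and the synthetic oscillatory action signal are assumptions.

```python
import numpy as np

def low_pass(actions, alpha=0.2):
    """First-order low-pass filter: u_f[k] = (1 - alpha) * u_f[k-1] + alpha * u[k]."""
    filtered = np.empty_like(actions, dtype=float)
    filtered[0] = actions[0]
    for k in range(1, len(actions)):
        filtered[k] = (1.0 - alpha) * filtered[k - 1] + alpha * actions[k]
    return filtered

def dominant_frequency(signal, dt):
    """Frequency (Hz) with the largest FFT magnitude, DC component excluded."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    return freqs[np.argmax(spectrum)]

# Synthetic control action: a slow 1 Hz command plus 20 Hz chatter.
dt = 0.01
t = np.arange(0.0, 2.0, dt)
u = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 20.0 * t)
u_f = low_pass(u, alpha=0.2)
```

Comparing the FFT magnitudes of `u` and `u_f` shows the high-frequency chatter attenuated while the low-frequency command passes nearly unchanged, which is the behavior the paper evaluates in the frequency domain.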
Related papers
- TARC: Time-Adaptive Robotic Control [48.61871569444481]
Fixed-frequency control in robotics imposes a trade-off between the efficiency of low-frequency control and the robustness of high-frequency control. We address this with a reinforcement learning approach in which policies jointly select control actions and their application durations. We validate our method with zero-shot sim-to-real experiments on two distinct hardware platforms.
arXiv Detail & Related papers (2025-10-27T10:10:19Z)
- Attention on flow control: transformer-based reinforcement learning for lift regulation in highly disturbed flows [0.0]
We propose a transformer-based reinforcement learning framework to learn an effective control strategy for regulating aerodynamic lift in gust sequences via pitch control. We show that the learned strategy outperforms the best proportional control, with the performance gap widening as the number of gusts increases.
arXiv Detail & Related papers (2025-06-11T20:14:28Z)
- Growing Q-Networks: Solving Continuous Control Tasks with Adaptive Control Resolution [51.83951489847344]
In robotics applications, smooth control signals are commonly preferred to reduce system wear and improve energy efficiency.
In this work, we aim to bridge this performance gap by growing discrete action spaces from coarse to fine control resolution.
Our work indicates that adaptive control resolution, combined with value decomposition, yields simple critic-only algorithms with surprisingly strong performance on continuous control tasks.
arXiv Detail & Related papers (2024-04-05T17:58:37Z)
- Tuning Legged Locomotion Controllers via Safe Bayesian Optimization [47.87675010450171]
This paper presents a data-driven strategy to streamline the deployment of model-based controllers in legged robotic hardware platforms.
We leverage a model-free safe learning algorithm to automate the tuning of control gains, addressing the mismatch between the simplified model used in the control formulation and the real system.
arXiv Detail & Related papers (2023-06-12T13:10:14Z)
- Spectrum Breathing: Protecting Over-the-Air Federated Learning Against Interference [73.63024765499719]
Mobile networks can be compromised by interference from neighboring cells or jammers.
We propose Spectrum Breathing, which cascades gradient pruning and spread spectrum to suppress interference without bandwidth expansion.
We show a performance tradeoff between the gradient-pruning error and the interference-induced error, as regulated by the breathing depth.
arXiv Detail & Related papers (2023-05-10T07:05:43Z)
- Improving the Performance of Robust Control through Event-Triggered Learning [74.57758188038375]
We propose an event-triggered learning algorithm that decides when to learn in the face of uncertainty in the LQR problem.
We demonstrate improved performance over a robust controller baseline in a numerical example.
arXiv Detail & Related papers (2022-07-28T17:36:37Z)
- Learning Variable Impedance Control for Aerial Sliding on Uneven Heterogeneous Surfaces by Proprioceptive and Tactile Sensing [42.27572349747162]
We present a learning-based adaptive control strategy for aerial sliding tasks.
The proposed controller structure combines data-driven and model-based control methods.
Compared to fine-tuned state-of-the-art interaction control methods, we achieve reduced tracking error and improved disturbance rejection.
arXiv Detail & Related papers (2022-06-28T16:28:59Z)
- Neural-Fly Enables Rapid Learning for Agile Flight in Strong Winds [96.74836678572582]
We present a learning-based approach that allows rapid online adaptation by incorporating pretrained representations through deep learning.
Neural-Fly achieves precise flight control with substantially smaller tracking error than state-of-the-art nonlinear and adaptive controllers.
arXiv Detail & Related papers (2022-05-13T21:55:28Z)
- Neural optimal feedback control with local learning rules [67.5926699124528]
A major problem in motor control is understanding how the brain plans and executes proper movements in the face of delayed and noisy stimuli.
We introduce a novel online algorithm that combines adaptive Kalman filtering with a model-free control approach.
arXiv Detail & Related papers (2021-11-12T20:02:00Z)
- Self-optimizing adaptive optics control with Reinforcement Learning for high-contrast imaging [0.0]
We describe how model-free Reinforcement Learning can be used to optimize a Recurrent Neural Network controller for closed-loop predictive control.
We show in simulations that our algorithm can also be applied to the control of a high-order deformable mirror.
arXiv Detail & Related papers (2021-08-24T10:02:55Z)
- Online Model-Free Reinforcement Learning for the Automatic Control of a Flexible Wing Aircraft [2.3204178451683264]
The control problem of a flexible wing aircraft is challenging due to its prevailing, highly nonlinear deformations.
An online control mechanism based on a value reinforcement learning process is developed for flexible wing aerial structures.
It employs a model-free control policy framework and a guaranteed convergent adaptive learning architecture to solve the system's Bellman optimality equation.
arXiv Detail & Related papers (2021-08-05T06:10:37Z)
- Meta-Learning-Based Robust Adaptive Flight Control Under Uncertain Wind Conditions [13.00214468719929]
Real-time model learning is challenging for complex dynamical systems, such as drones flying in variable wind conditions.
We propose an online composite adaptation method that treats outputs from a deep neural network as a set of basis functions.
We validate our approach by flying a drone in an open air wind tunnel under varying wind conditions and along challenging trajectories.
arXiv Detail & Related papers (2021-03-02T18:43:59Z)
- Regularizing Action Policies for Smooth Control with Reinforcement Learning [47.312768123967025]
Conditioning for Action Policy Smoothness (CAPS) is an effective yet intuitive regularization on action policies.
CAPS offers consistent improvement in the smoothness of the learned state-to-action mappings of neural network controllers.
Tested on a real system, improvements in controller smoothness on a quadrotor drone resulted in an almost 80% reduction in power consumption.
arXiv Detail & Related papers (2020-12-11T21:35:24Z)
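The two-term structure of the CAPS regularizer, a temporal penalty tying consecutive actions together and a spatial penalty tying actions at nearby states together, can be sketched as below. Only that structure is taken from the paper; the toy linear policy, its weights, the noise scale `sigma`, and the coefficients `lam_t`/`lam_s` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(3, 2))  # toy deterministic policy: 3-dim state -> 2-dim action

def policy(states):
    return np.tanh(states @ W)

def caps_penalty(policy, states, next_states, sigma=0.05, lam_t=1.0, lam_s=1.0):
    """CAPS-style smoothness penalty added to the policy loss:
    a temporal term penalizing ||pi(s_{t+1}) - pi(s_t)|| and
    a spatial term penalizing ||pi(s + noise) - pi(s)||."""
    actions = policy(states)
    temporal = np.mean(np.linalg.norm(policy(next_states) - actions, axis=-1))
    perturbed = states + rng.normal(scale=sigma, size=states.shape)
    spatial = np.mean(np.linalg.norm(policy(perturbed) - actions, axis=-1))
    return lam_t * temporal + lam_s * spatial

states = rng.normal(size=(32, 3))
next_states = states + 0.1 * rng.normal(size=(32, 3))
penalty = caps_penalty(policy, states, next_states)
```

In training, this penalty would be added to the usual RL objective so that gradient descent trades task reward against action smoothness, which is what drives the reported reduction in controller chatter.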
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.