Reinforcement Learning-based Fault-Tolerant Control for Quadrotor with Online Transformer Adaptation
- URL: http://arxiv.org/abs/2505.08223v1
- Date: Tue, 13 May 2025 04:50:29 GMT
- Title: Reinforcement Learning-based Fault-Tolerant Control for Quadrotor with Online Transformer Adaptation
- Authors: Dohyun Kim, Jayden Dongwoo Lee, Hyochoong Bang, Jungho Bae
- Abstract summary: Multirotors play a significant role in diverse field robotics applications. Previous approaches require prior knowledge of the multirotor model or struggle to adapt to new configurations. We propose a novel hybrid RL-based FTC framework integrated with a transformer-based online adaptation module.
- Score: 0.13499500088995461
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multirotors play a significant role in diverse field robotics applications but remain highly susceptible to actuator failures, leading to rapid instability and compromised mission reliability. While various fault-tolerant control (FTC) strategies using reinforcement learning (RL) have been widely explored, most previous approaches require prior knowledge of the multirotor model or struggle to adapt to new configurations. To address these limitations, we propose a novel hybrid RL-based FTC framework integrated with a transformer-based online adaptation module. Our framework leverages a transformer architecture to infer latent representations in real time, enabling adaptation to previously unseen system models without retraining. We evaluate our method in a PyBullet simulation under loss-of-effectiveness actuator faults, achieving a 95% success rate and a positional root mean square error (RMSE) of 0.129 m, outperforming existing adaptation methods with 86% success and an RMSE of 0.153 m. Further evaluations on quadrotors with varying configurations confirm the robustness of our framework across untrained dynamics. These results demonstrate the potential of our framework to enhance the adaptability and reliability of multirotors, enabling efficient fault management in dynamic and uncertain environments. The project website is available at http://00dhkim.me/paper/rl-ftc
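The paper itself includes no code here, but the architecture described in the abstract (a transformer that infers a latent representation of the current, possibly faulted, dynamics from recent state-action history, with the RL policy conditioned on that latent) can be sketched roughly as below. All module names, dimensions, and the history length are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class OnlineAdaptationModule(nn.Module):
    """Sketch: encode a window of recent (state, action) pairs into a latent
    vector that summarizes the current (possibly faulted) dynamics."""
    def __init__(self, state_dim=17, action_dim=4, latent_dim=8, d_model=64):
        super().__init__()
        self.embed = nn.Linear(state_dim + action_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.to_latent = nn.Linear(d_model, latent_dim)

    def forward(self, hist_states, hist_actions):
        # hist_states: (B, H, state_dim), hist_actions: (B, H, action_dim)
        tokens = self.embed(torch.cat([hist_states, hist_actions], dim=-1))
        encoded = self.encoder(tokens)           # (B, H, d_model)
        return self.to_latent(encoded[:, -1])    # latent context from the last token

class FaultTolerantPolicy(nn.Module):
    """Sketch: RL policy conditioned on the current state and the inferred latent."""
    def __init__(self, state_dim=17, action_dim=4, latent_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh(),   # normalized rotor commands
        )

    def forward(self, state, latent):
        return self.net(torch.cat([state, latent], dim=-1))

# At each control step, re-infer the latent from the most recent history,
# so the policy adapts online without retraining.
adapt, policy = OnlineAdaptationModule(), FaultTolerantPolicy()
hist_s, hist_a, state = torch.randn(1, 20, 17), torch.randn(1, 20, 4), torch.randn(1, 17)
action = policy(state, adapt(hist_s, hist_a))
```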
Related papers
- Model-based controller assisted domain randomization in deep reinforcement learning: application to nonlinear powertrain control [0.0]
This study proposes a new robust control approach using the framework of deep reinforcement learning (DRL). The problem setup is modeled via the latent Markov decision process (LMDP), a set of vanilla MDPs, for a controlled system subject to uncertainties and nonlinearities. Compared to traditional DRL-based controls, the proposed controller design achieves a high level of generalization ability.
arXiv Detail & Related papers (2025-04-28T12:09:07Z)
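As a loose illustration of the latent-MDP idea in the entry above, domain randomization can be read as training a single policy across a set of MDPs that differ only in sampled dynamics parameters. The parameter names, ranges, and the `env_factory`/`agent` interfaces below are placeholders, not the paper's powertrain model.

```python
import random

def sample_mdp_params():
    """Sketch: each member of the LMDP set is a vanilla MDP with its own
    dynamics parameters; the ranges here are illustrative placeholders."""
    return {
        "inertia": random.uniform(0.8, 1.2),     # relative to nominal
        "friction": random.uniform(0.9, 1.1),
        "actuator_delay": random.randint(0, 3),  # in control steps
    }

def train_with_domain_randomization(env_factory, agent, episodes=1000):
    """Train one policy across randomly sampled dynamics so it generalizes
    over the uncertainty set rather than a single nominal model."""
    for _ in range(episodes):
        env = env_factory(sample_mdp_params())   # a new MDP each episode
        obs, done = env.reset(), False
        while not done:
            action = agent.act(obs)
            next_obs, reward, done, info = env.step(action)
            agent.observe(obs, action, reward, next_obs, done)
            obs = next_obs
        agent.update()
```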
- Reinforcement Learning for Machine Learning Model Deployment: Evaluating Multi-Armed Bandits in ML Ops Environments [0.0]
We investigate whether reinforcement learning (RL)-based model management can manage deployment decisions more effectively. Our approach enables more adaptive production environments by continuously evaluating deployed models and rolling back underperforming ones in real time. Our findings suggest that RL-based model management can improve automation, reduce reliance on manual interventions, and mitigate risks associated with post-deployment model failures.
arXiv Detail & Related papers (2025-03-28T16:42:21Z)
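The entry above frames deployment as a bandit problem; a minimal UCB1-style sketch over candidate model versions, with a rollback helper that falls back to the best-known model, is shown below. The reward signal (e.g. an accuracy measured over an evaluation window) and the class interface are assumptions, not the paper's setup.

```python
import math

class ModelDeploymentBandit:
    """Sketch: UCB1 over candidate model versions; each 'pull' deploys a model
    for one evaluation window and observes a reward in [0, 1]."""
    def __init__(self, n_models):
        self.counts = [0] * n_models
        self.values = [0.0] * n_models   # running mean reward per model
        self.total = 0

    def select(self):
        # Deploy every model at least once, then balance exploration/exploitation.
        for i, c in enumerate(self.counts):
            if c == 0:
                return i
        return max(range(len(self.counts)),
                   key=lambda i: self.values[i]
                   + math.sqrt(2 * math.log(self.total) / self.counts[i]))

    def update(self, i, reward):
        self.counts[i] += 1
        self.total += 1
        self.values[i] += (reward - self.values[i]) / self.counts[i]

    def rollback_target(self):
        # Roll back to the model with the best observed mean reward.
        return max(range(len(self.values)), key=lambda i: self.values[i])
```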
- Remembering Transformer for Continual Learning [9.879896956915598]
We propose Remembering Transformer, inspired by the brain's Complementary Learning Systems.
Remembering Transformer employs a mixture-of-adapters architecture and a generative model-based novelty detection mechanism.
We conducted extensive experiments, including ablation studies on the novelty detection mechanism and model capacity of the mixture-of-adapters.
arXiv Detail & Related papers (2024-04-11T07:22:14Z)
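One plausible reading of the mechanism above: a small generative model (here an autoencoder) per adapter scores how familiar an input is, and the sample is routed to the adapter whose autoencoder reconstructs it best, with a high minimum error flagging a novel task. Module sizes and the routing rule are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class TaskAutoencoder(nn.Module):
    """Sketch: one small autoencoder per adapter; low reconstruction error means
    the input resembles the data that adapter was trained on."""
    def __init__(self, dim=128, bottleneck=16):
        super().__init__()
        self.enc = nn.Linear(dim, bottleneck)
        self.dec = nn.Linear(bottleneck, dim)

    def error(self, x):
        return ((self.dec(torch.relu(self.enc(x))) - x) ** 2).mean(dim=-1)

def route_to_adapter(x, autoencoders, threshold=0.5):
    """Pick the adapter with the lowest reconstruction error; if every error is
    above the threshold, treat the sample as novel (allocate a new adapter)."""
    errors = torch.stack([ae.error(x) for ae in autoencoders], dim=-1)  # (B, n_adapters)
    best_err, best_idx = errors.min(dim=-1)
    return best_idx, best_err > threshold
```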
- FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outclasses the state-of-the-art for resilient fault prediction benchmarking, with an accuracy of up to 0.958.
arXiv Detail & Related papers (2024-03-26T08:51:23Z)
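The "online adversarial training" ingredient can be sketched as an FGSM-style inner step applied to each incoming batch before the usual update; the classifier, loss, and perturbation budget below are placeholders rather than FaultGuard's actual components.

```python
import torch
import torch.nn as nn

def adversarial_training_step(model, x, y, optimizer, epsilon=0.05):
    """Sketch: craft an FGSM perturbation of the measurement window, then train
    the fault classifier on the perturbed batch to harden it against attacks."""
    loss_fn = nn.CrossEntropyLoss()

    # 1) Build the adversarial example with a single gradient-sign step.
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()

    # 2) Standard supervised update on the perturbed inputs.
    optimizer.zero_grad()
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```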
- FullLoRA-AT: Efficiently Boosting the Robustness of Pretrained Vision Transformers [61.48709409150777]
The Vision Transformer (ViT) model has gradually become mainstream in various computer vision tasks.
Existing large models tend to prioritize performance during training, potentially neglecting robustness.
We develop a novel LNLoRA module, incorporating a learnable layer normalization before the conventional LoRA module.
We propose the FullLoRA-AT framework by integrating the learnable LNLoRA modules into all key components of ViT-based models.
arXiv Detail & Related papers (2024-01-03T14:08:39Z)
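From the description, LNLoRA prepends a learnable layer normalization to a standard LoRA branch. A minimal sketch of such a module wrapping a frozen linear layer follows; the rank, scaling, and wiring are assumptions based on common LoRA practice, not the paper's exact definition.

```python
import torch
import torch.nn as nn

class LNLoRALinear(nn.Module):
    """Sketch: frozen pretrained linear layer plus a low-rank update whose input
    first passes through a learnable LayerNorm (the 'LN' in LNLoRA)."""
    def __init__(self, base: nn.Linear, rank=8, alpha=16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                   # keep pretrained weights frozen
        self.ln = nn.LayerNorm(base.in_features)      # learnable normalization
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)            # start as an identity update
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_b(self.lora_a(self.ln(x)))

# Usage: wrap, e.g., a projection inside a ViT block and fine-tune only the new parts.
wrapped = LNLoRALinear(nn.Linear(768, 768))
out = wrapped(torch.randn(2, 197, 768))
```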
- Emergent Agentic Transformer from Chain of Hindsight Experience [96.56164427726203]
We show, for the first time, that a simple transformer-based model performs competitively with both temporal-difference and imitation-learning-based approaches.
arXiv Detail & Related papers (2023-05-26T00:43:02Z)
- Active Learning of Discrete-Time Dynamics for Uncertainty-Aware Model Predictive Control [46.81433026280051]
We present a self-supervised learning approach that actively models the dynamics of nonlinear robotic systems.
Our approach showcases high resilience and generalization capabilities by consistently adapting to unseen flight conditions.
arXiv Detail & Related papers (2022-10-23T00:45:05Z)
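The loop described above (a dynamics model that keeps adapting from its own flight data and feeds an uncertainty-aware MPC) can be sketched as an online one-step regression update; the model architecture and update routine are assumptions for illustration.

```python
import torch
import torch.nn as nn

class OneStepDynamics(nn.Module):
    """Sketch: predict the next state from (state, action); trained online on
    transitions observed in flight (self-supervised, no external labels)."""
    def __init__(self, state_dim=12, action_dim=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + action_dim, 128), nn.ReLU(),
                                 nn.Linear(128, state_dim))

    def forward(self, s, a):
        return s + self.net(torch.cat([s, a], dim=-1))   # residual next-state prediction

def online_update(model, optimizer, batch):
    """One gradient step on recently collected transitions (s, a, s_next),
    keeping the model consistent with the latest flight conditions."""
    s, a, s_next = batch
    loss = ((model(s, a) - s_next) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```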
- Stabilizing Voltage in Power Distribution Networks via Multi-Agent Reinforcement Learning with Transformer [128.19212716007794]
We propose a Transformer-based Multi-Agent Actor-Critic framework (T-MAAC) to stabilize voltage in power distribution networks.
In addition, we adopt a novel auxiliary-task training process tailored to the voltage control task, which improves the sample efficiency.
arXiv Detail & Related papers (2022-06-08T07:48:42Z)
- Low-level Pose Control of Tilting Multirotor for Wall Perching Tasks Using Reinforcement Learning [2.5903488573278284]
We propose a novel reinforcement learning-based method to control a tilting multirotor in real-world applications.
Our proposed method shows robust controllability by overcoming the complex dynamics of tilting multirotors.
arXiv Detail & Related papers (2021-08-11T21:39:51Z)
- Bayesian Transformer Language Models for Speech Recognition [59.235405107295655]
State-of-the-art neural language models (LMs) represented by Transformers are highly complex.
This paper proposes a full Bayesian learning framework for Transformer LM estimation.
arXiv Detail & Related papers (2021-02-09T10:55:27Z)
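A common way to make Transformer parameters Bayesian is to place a mean-field Gaussian variational posterior over the weights and sample them with the reparameterization trick; the sketch below shows that idea for a single linear layer and is only illustrative of this general family of methods, not the paper's specific estimation scheme.

```python
import torch
import torch.nn as nn

class BayesianLinear(nn.Module):
    """Sketch: mean-field Gaussian posterior over one linear layer's weights,
    sampled with the reparameterization trick at each forward pass."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.w_mu = nn.Parameter(torch.empty(out_features, in_features))
        self.w_logvar = nn.Parameter(torch.full((out_features, in_features), -6.0))
        self.bias = nn.Parameter(torch.zeros(out_features))
        nn.init.xavier_uniform_(self.w_mu)

    def forward(self, x):
        std = torch.exp(0.5 * self.w_logvar)
        weight = self.w_mu + std * torch.randn_like(std)   # reparameterized sample
        return nn.functional.linear(x, weight, self.bias)

    def kl(self):
        # KL divergence to a standard-normal prior; added to the training loss.
        return 0.5 * (self.w_mu ** 2 + self.w_logvar.exp() - self.w_logvar - 1.0).sum()
```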
- UPDeT: Universal Multi-agent Reinforcement Learning via Policy Decoupling with Transformers [108.92194081987967]
We make the first attempt to explore a universal multi-agent reinforcement learning pipeline, designing a single architecture that fits different tasks.
Unlike previous RNN-based models, we utilize a transformer-based model to generate a flexible policy.
The proposed model, named as Universal Policy Decoupling Transformer (UPDeT), further relaxes the action restriction and makes the multi-agent task's decision process more explainable.
arXiv Detail & Related papers (2021-01-20T07:24:24Z)
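The decoupling idea, as summarized above, is to tokenize an agent's observation into per-entity embeddings, run a transformer over them, and decode actions from the corresponding output tokens so the same network handles tasks with different entity counts. The sketch below is a loose interpretation with assumed dimensions, not the released UPDeT code.

```python
import torch
import torch.nn as nn

class EntityTransformerPolicy(nn.Module):
    """Sketch: per-entity observation tokens -> transformer -> per-token action
    logits, so the architecture is agnostic to the number of entities."""
    def __init__(self, entity_dim=16, d_model=64):
        super().__init__()
        self.embed = nn.Linear(entity_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.action_head = nn.Linear(d_model, 1)   # one logit per entity-related action

    def forward(self, entity_obs):
        # entity_obs: (B, n_entities, entity_dim); n_entities may vary per task.
        h = self.encoder(self.embed(entity_obs))
        return self.action_head(h).squeeze(-1)

policy = EntityTransformerPolicy()
logits_a = policy(torch.randn(1, 5, 16))   # task with 5 entities
logits_b = policy(torch.randn(1, 9, 16))   # same weights, task with 9 entities
```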
- Two-stage Deep Reinforcement Learning for Inverter-based Volt-VAR Control in Active Distribution Networks [3.260913246106564]
We propose a novel two-stage deep reinforcement learning (DRL) method to improve the voltage profile by regulating inverter-based energy resources.
In the offline stage, a highly efficient adversarial reinforcement learning algorithm is developed to train an offline agent robust to the model mismatch.
In the sequential online stage, we transfer the offline agent safely as the online agent to perform continuous learning and control online with significantly improved safety and efficiency.
arXiv Detail & Related papers (2020-05-20T08:02:13Z)
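Structurally, the two stages described above amount to: (1) offline training against an adversary that perturbs the surrogate grid model so the control agent becomes robust to model mismatch, then (2) deploying that agent to keep learning online. The skeleton below only mirrors this structure; every interface (`surrogate_env`, `agent`, `adversary`, `real_env`) is an assumed placeholder.

```python
def offline_stage(surrogate_env, agent, adversary, episodes=5000):
    """Stage 1 (sketch): the adversary perturbs the surrogate model's parameters
    each episode; the control agent learns a policy robust to the mismatch."""
    for _ in range(episodes):
        surrogate_env.set_model_params(adversary.sample_perturbation())
        rollout = agent.collect_episode(surrogate_env)
        agent.update(rollout)
        adversary.update(rollout)      # adversary trained to degrade the agent

def online_stage(real_env, agent, steps=10000):
    """Stage 2 (sketch): transfer the offline agent and continue learning online,
    starting from a policy that already tolerates model mismatch."""
    obs = real_env.reset()
    for _ in range(steps):
        action = agent.act(obs)
        obs, reward, done, info = real_env.step(action)
        agent.observe_online(action, reward, obs, done)
        if done:
            obs = real_env.reset()
```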
This list is automatically generated from the titles and abstracts of the papers in this site.