ARMOR: Robust Reinforcement Learning-based Control for UAVs under Physical Attacks
- URL: http://arxiv.org/abs/2506.22423v1
- Date: Fri, 27 Jun 2025 17:46:33 GMT
- Title: ARMOR: Robust Reinforcement Learning-based Control for UAVs under Physical Attacks
- Authors: Pritam Dash, Ethan Chan, Nathan P. Lawrence, Karthik Pattabiraman
- Abstract summary: Unmanned Aerial Vehicles (UAVs) depend on onboard sensors for perception, navigation, and control. ARMOR is an attack-resilient, model-free reinforcement learning controller. ARMOR learns a robust latent representation of the UAV's physical state via a two-stage training framework.
- Score: 6.362264393795084
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unmanned Aerial Vehicles (UAVs) depend on onboard sensors for perception, navigation, and control. However, these sensors are susceptible to physical attacks, such as GPS spoofing, that can corrupt state estimates and lead to unsafe behavior. While reinforcement learning (RL) offers adaptive control capabilities, existing safe RL methods are ineffective against such attacks. We present ARMOR (Adaptive Robust Manipulation-Optimized State Representations), an attack-resilient, model-free RL controller that enables robust UAV operation under adversarial sensor manipulation. Instead of relying on raw sensor observations, ARMOR learns a robust latent representation of the UAV's physical state via a two-stage training framework. In the first stage, a teacher encoder, trained with privileged attack information, generates attack-aware latent states for RL policy training. In the second stage, a student encoder is trained via supervised learning to approximate the teacher's latent states using only historical sensor data, enabling real-world deployment without privileged information. Our experiments show that ARMOR outperforms conventional methods, ensuring UAV safety. Additionally, ARMOR improves generalization to unseen attacks and reduces training cost by eliminating the need for iterative adversarial training.
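A minimal sketch of the two-stage teacher-student scheme the abstract describes is given below; the module names, network shapes, and the MSE distillation loss are assumptions for illustration, not the paper's actual architecture.

```python
# Minimal sketch of ARMOR's two-stage training idea (names and dimensions are assumptions).
import torch
import torch.nn as nn

class TeacherEncoder(nn.Module):
    """Stage 1: maps the state plus privileged attack information to an attack-aware latent."""
    def __init__(self, state_dim, attack_dim, latent_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + attack_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, state, attack_info):
        return self.net(torch.cat([state, attack_info], dim=-1))

class StudentEncoder(nn.Module):
    """Stage 2: approximates the teacher latent from a history of raw sensor data only."""
    def __init__(self, obs_dim, history_len, latent_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim * history_len, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, obs_history):            # obs_history: (batch, history_len, obs_dim)
        return self.net(obs_history.flatten(1))

def distillation_step(teacher, student, optimizer, state, attack_info, obs_history):
    """Supervised distillation: the student latent regresses onto the (frozen) teacher latent."""
    with torch.no_grad():
        target = teacher(state, attack_info)
    loss = nn.functional.mse_loss(student(obs_history), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At deployment only the student encoder would run, so the policy needs no privileged attack information.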
Related papers
- Robust Anti-Backdoor Instruction Tuning in LVLMs [53.766434746801366]
We introduce a lightweight, certified-agnostic defense framework for large visual language models (LVLMs). Our framework finetunes only adapter modules and text embedding layers under instruction tuning. Experiments against seven attacks on Flickr30k and MSCOCO demonstrate that our framework reduces their attack success rate to nearly zero.
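The summary states that only adapter modules and text embedding layers are finetuned; a generic PyTorch parameter-freezing sketch of that idea follows (the name patterns "adapter" and "embed_tokens" are placeholders, not the paper's actual module names).

```python
# Generic sketch: freeze everything except adapter modules and the text embedding layer
# before instruction tuning. The name patterns below are placeholders.
def select_trainable_params(model):
    trainable = []
    for name, param in model.named_parameters():
        param.requires_grad = ("adapter" in name) or ("embed_tokens" in name)
        if param.requires_grad:
            trainable.append(param)
    return trainable

# optimizer = torch.optim.AdamW(select_trainable_params(lvlm), lr=1e-4)
```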
arXiv Detail & Related papers (2025-06-04T01:23:35Z) - Sensor Deprivation Attacks for Stealthy UAV Manipulation [51.9034385791934]
Unmanned Aerial Vehicles autonomously perform tasks with the use of state-of-the-art control algorithms.
In this work, we propose multi-part Sensor Deprivation Attacks (SDAs), aiming to stealthily impact process control via sensor reconfiguration.
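For intuition, a small sketch of how a sensor-deprivation condition might be emulated in a simulated control loop when stress-testing controller robustness; the hold-last-value attack model here is an assumption.

```python
# Rough sketch: emulate a sensor-deprivation condition in a simulated control loop.
import numpy as np

def deprive(measurement, last_valid, deprived_axes):
    """Hold the last valid reading on the deprived channels, mimicking a stalled sensor."""
    m = np.asarray(measurement, dtype=float).copy()
    m[deprived_axes] = last_valid[deprived_axes]
    return m

# Example: freeze one channel (index 2 is a placeholder) during a simulated flight.
# frozen_estimate = deprive(sensor_reading, previous_reading, deprived_axes=[2])
```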
arXiv Detail & Related papers (2024-10-14T23:03:58Z) - VCAT: Vulnerability-aware and Curiosity-driven Adversarial Training for Enhancing Autonomous Vehicle Robustness [18.27802330689405]
Vulnerability-aware and Curiosity-driven Adversarial Training (VCAT) is a framework to train autonomous vehicles (AVs) against malicious attacks.
VCAT uses a surrogate network to fit the value function of the AV victim, providing dense information about the victim's inherent vulnerabilities.
In the victim defense training phase, the AV is trained in critical scenarios in which the pretrained attacker is positioned around the victim to generate attack behaviors.
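A loose sketch of the ingredients named in the summary: a surrogate network regressed onto the victim's returns, and an attacker reward that combines the negated surrogate value with a curiosity bonus. The 16-dimensional state and the reward shaping are assumptions.

```python
# Loose sketch: a surrogate fits the victim's value function; the attacker is rewarded
# for driving the victim toward low-value (vulnerable) states plus a curiosity bonus.
import torch
import torch.nn as nn

surrogate = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))

def fit_surrogate(optimizer, states, victim_returns):
    """Supervised fit of the surrogate to observed victim returns (dense vulnerability signal)."""
    loss = nn.functional.mse_loss(surrogate(states).squeeze(-1), victim_returns)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

def attacker_reward(state, curiosity_bonus):
    """Attack reward: negated victim value (exposing vulnerability) plus an exploration bonus."""
    with torch.no_grad():
        victim_value = surrogate(state).item()
    return -victim_value + curiosity_bonus
```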
arXiv Detail & Related papers (2024-09-19T14:53:02Z) - Fortify the Guardian, Not the Treasure: Resilient Adversarial Detectors [0.0]
An adaptive attack is one where the attacker is aware of the defenses and adapts their strategy accordingly.
Our proposed method leverages adversarial training to reinforce the ability to detect attacks, without compromising clean accuracy.
Experimental evaluations on the CIFAR-10 and SVHN datasets demonstrate that our proposed algorithm significantly improves a detector's ability to accurately identify adaptive adversarial attacks.
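A schematic adversarial-training step for an attack detector, in the spirit of the summary: adaptive adversarial examples are crafted against the classifier-detector pair and the detector learns to separate them from clean inputs. The attack_fn and model interfaces are assumptions.

```python
# Schematic adversarial-training step for an attack detector (clean=0, adversarial=1).
import torch
import torch.nn as nn

def detector_training_step(detector, classifier, attack_fn, optimizer, x_clean, y_clean):
    x_adv = attack_fn(classifier, detector, x_clean, y_clean)   # adaptive attack
    x = torch.cat([x_clean, x_adv], dim=0)
    labels = torch.cat([torch.zeros(len(x_clean)), torch.ones(len(x_adv))]).long()
    loss = nn.functional.cross_entropy(detector(x), labels)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```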
arXiv Detail & Related papers (2024-04-18T12:13:09Z) - Explainable and Safe Reinforcement Learning for Autonomous Air Mobility [13.038383326602764]
This article presents a novel deep reinforcement learning (DRL) controller to aid conflict resolution for autonomous free flight.
We design a fully explainable DRL framework in which we decompose the coupled Q-value learning model into a safety-awareness component and an efficiency (reach-the-target) component.
We also propose an adversarial attack strategy that can impose both safety-oriented and efficiency-oriented attacks.
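A compact sketch of a decomposed Q-value model with separate safety-awareness and efficiency heads, as the summary describes; the shared trunk and the weighted action-selection rule are assumptions.

```python
# Sketch of a decomposed Q model: one head scores safety (conflict avoidance), the other
# efficiency (reaching the target). Each head remains individually inspectable.
import torch
import torch.nn as nn

class DecomposedQ(nn.Module):
    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.q_safety = nn.Linear(128, n_actions)
        self.q_efficiency = nn.Linear(128, n_actions)

    def forward(self, obs):
        h = self.trunk(obs)
        return self.q_safety(h), self.q_efficiency(h)

    def act(self, obs, w_safety=1.0, w_eff=1.0):
        q_s, q_e = self.forward(obs)
        return torch.argmax(w_safety * q_s + w_eff * q_e, dim=-1)
```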
arXiv Detail & Related papers (2022-11-24T08:47:06Z) - Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
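Purely illustrative: one way to think about detectability is to check how far a perturbed observation deviates from a dynamics-model prediction. The check below is an assumed stand-in, not the epsilon-illusory construction itself.

```python
# Illustrative only: flag a perturbed observation if it deviates from the dynamics-model
# prediction by more than a budget epsilon.
import torch

def within_detectability_budget(dynamics_model, prev_obs, action, perturbed_obs, epsilon):
    predicted = dynamics_model(prev_obs, action)   # nominal next observation
    return torch.norm(perturbed_obs - predicted) <= epsilon
```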
arXiv Detail & Related papers (2022-07-20T19:49:09Z) - Robust Adversarial Attacks Detection based on Explainable Deep
Reinforcement Learning For UAV Guidance and Planning [4.640835690336653]
Adversarial attacks on Uncrewed Aerial Vehicle (UAV) agents operating in public are increasing.
Deep Learning (DL) approaches to controlling and guiding these UAVs can be beneficial in terms of performance, but they raise concerns about the safety of those techniques and their vulnerability to adversarial attacks.
This paper proposes an innovative approach based on the explainability of DL methods to build an efficient detector that will protect these DL schemes and the UAVs adopting them from attacks.
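A generic sketch of an explainability-driven detector: a gradient-based saliency map of the control network is fed to a lightweight anomaly scorer. The pipeline and the scorer interface are assumptions, not the paper's method.

```python
# Generic sketch: gradient saliency of the control network feeds an anomaly scorer.
import torch

def saliency(policy_net, obs):
    obs = obs.clone().requires_grad_(True)
    policy_net(obs).sum().backward()       # assumes the network outputs a tensor
    return obs.grad.abs().detach()

def is_attack(policy_net, obs, scorer, threshold):
    score = scorer(saliency(policy_net, obs))   # scorer: small model or simple statistic
    return score > threshold
```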
arXiv Detail & Related papers (2022-06-06T15:16:10Z) - Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent can be used to generate realistic attack sensor data for training and evaluating intrusion detection systems.
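For the defensive use mentioned in the summary, a sketch of logging a trained agent's transitions in a sandboxed environment as labeled attack telemetry for IDS training; the gym-style interfaces (env.reset/env.step, agent.act) are assumptions.

```python
# Defensive-use sketch: collect a trained agent's rollouts as labeled attack telemetry.
def collect_attack_telemetry(env, agent, episodes):
    dataset = []
    for _ in range(episodes):
        obs, done = env.reset(), False
        while not done:
            action = agent.act(obs)
            obs_next, reward, done, info = env.step(action)
            dataset.append({"obs": obs, "action": action, "label": "attack"})
            obs = obs_next
    return dataset
```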
arXiv Detail & Related papers (2021-10-04T12:20:46Z) - Adversarial defense for automatic speaker verification by cascaded
self-supervised learning models [101.42920161993455]
More and more malicious attackers attempt to launch adversarial attacks against automatic speaker verification (ASV) systems.
We propose a standard and attack-agnostic method based on cascaded self-supervised learning models to purify the adversarial perturbations.
Experimental results demonstrate that the proposed method achieves effective defense performance and can successfully counter adversarial attacks.
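A minimal sketch of the cascading idea: the (possibly perturbed) audio is passed through a chain of self-supervised reconstruction models before verification. The composition and interfaces are assumptions.

```python
# Minimal sketch of the cascade: each stage denoises / reconstructs its input before the
# audio reaches the speaker-verification backend.
def purify(audio, purifiers):
    for model in purifiers:
        audio = model(audio)
    return audio

# score = asv_backend.verify(purify(test_audio, purifiers), enrollment_embedding)
```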
arXiv Detail & Related papers (2021-02-14T01:56:43Z) - Unsupervised Intrusion Detection System for Unmanned Aerial Vehicle with
Less Labeling Effort [8.8519643723088]
Previous methods required a large labeling effort on the dataset, and the resulting models could not identify attacks they had not been trained on.
We propose an IDS based on unsupervised learning, which spares the practitioner from labeling every type of attack in the flight data.
We trained an autoencoder on benign flight data only and verified that the model yields different reconstruction losses for benign flights and flights under attack.
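A minimal sketch of the benign-only autoencoder idea described above: train on benign flight data and flag flights whose reconstruction error exceeds a threshold (the architecture and threshold choice are assumptions).

```python
# Minimal sketch: an autoencoder trained on benign flight data only; data unlike the
# benign training distribution reconstructs poorly and is flagged.
import torch
import torch.nn as nn

class FlightAutoencoder(nn.Module):
    def __init__(self, dim, hidden=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.dec(self.enc(x))

def reconstruction_error(model, x):
    return nn.functional.mse_loss(model(x), x, reduction="none").mean(dim=-1)

def flag_attack(model, x, threshold):
    return reconstruction_error(model, x) > threshold
```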
arXiv Detail & Related papers (2020-11-01T15:52:22Z) - Robust Deep Reinforcement Learning through Adversarial Loss [74.20501663956604]
Recent studies have shown that deep reinforcement learning agents are vulnerable to small adversarial perturbations on the agent's inputs.
We propose RADIAL-RL, a principled framework to train reinforcement learning agents with improved robustness against adversarial attacks.
arXiv Detail & Related papers (2020-08-05T07:49:42Z)