Robust Adversarial Attacks Detection based on Explainable Deep
Reinforcement Learning For UAV Guidance and Planning
- URL: http://arxiv.org/abs/2206.02670v4
- Date: Tue, 20 Jun 2023 16:07:31 GMT
- Authors: Thomas Hickling, Nabil Aouf and Phillippa Spencer
- Abstract summary: Adversarial attacks on Uncrewed Aerial Vehicle (UAV) agents operating in public are increasing.
Deep Learning (DL) approaches to control and guide these UAVs can be beneficial in terms of performance but raise concerns about the safety of those techniques and their vulnerability to adversarial attacks.
This paper proposes an innovative approach based on the explainability of DL methods to build an efficient detector that will protect these DL schemes and the UAVs adopting them from attacks.
- Score: 4.640835690336653
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The dangers of adversarial attacks on Uncrewed Aerial Vehicle (UAV) agents
operating in public are increasing. Adopting AI-based techniques and, more
specifically, Deep Learning (DL) approaches to control and guide these UAVs can
be beneficial in terms of performance but can raise concerns regarding the
safety of those techniques and their vulnerability to adversarial attacks.
Confusion in the agent's decision-making process caused by these attacks can
seriously affect the safety of the UAV. This paper proposes an innovative
approach based on the explainability of DL methods to build an efficient
detector that will protect these DL schemes and the UAVs adopting them from
attacks. The agent adopts a Deep Reinforcement Learning (DRL) scheme for
guidance and planning. The agent is trained with a Deep Deterministic Policy
Gradient (DDPG) scheme with Prioritised Experience Replay (PER), which
utilises an Artificial Potential Field (APF) to improve training times and
obstacle avoidance performance. A simulated environment for UAV explainable
DRL-based planning and guidance, including obstacles and adversarial attacks,
is built. The adversarial attacks are generated by the Basic Iterative Method
(BIM) algorithm and reduced obstacle course completion rates from 97% to 35%.
Two adversarial attack detectors are proposed to counter this reduction. The
first is a Convolutional Neural Network Adversarial Detector (CNN-AD), which
achieves a detection accuracy of 80%. The second detector utilises a Long
Short-Term Memory (LSTM) network and achieves an accuracy of 91% with faster
computing times than the CNN-AD, allowing for real-time adversarial
detection.
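The Basic Iterative Method named in the abstract can be sketched as a simple projected gradient loop. The following is a minimal NumPy illustration, not the paper's implementation: the quadratic loss, its gradient, `target`, and all numeric values are hypothetical stand-ins for the gradient of the DRL agent's loss with respect to its observation.

```python
import numpy as np

def bim_attack(x, grad_fn, eps=0.1, alpha=0.02, steps=10):
    """Basic Iterative Method (BIM): repeatedly step the input in the
    direction of the loss-gradient sign, then clip back into the
    eps-ball around the original observation."""
    x_adv = x.astype(float).copy()
    for _ in range(steps):
        g = grad_fn(x_adv)
        x_adv = x_adv + alpha * np.sign(g)        # FGSM-style sign step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # enforce perturbation budget
    return x_adv

# Hypothetical stand-in for the gradient of the agent's loss w.r.t. its
# observation: loss(x) = ||x - target||^2, so grad(x) = 2 * (x - target).
target = np.array([1.0, -1.0, 0.5])
grad_of_loss = lambda x: 2.0 * (x - target)

x0 = np.zeros(3)
x_adv = bim_attack(x0, grad_of_loss, eps=0.1, alpha=0.02, steps=10)
# Every component of the perturbation stays within the eps budget.
```

With `alpha=0.02` and `eps=0.1`, the loop saturates the budget after five steps, so the attack is a bounded perturbation rather than an arbitrary corruption, which is exactly what makes such attacks hard to spot without a dedicated detector.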
Related papers
- Charging Ahead: A Hierarchical Adversarial Framework for Counteracting Advanced Cyber Threats in EV Charging Stations [1.873794246359724]
Adversarial electric vehicles (EVs) can provide false information to gain higher charging priority, potentially causing grid instability.
This paper introduces a hierarchical adversarial framework using DRL (HADRL), which effectively detects stealthy cyberattacks on EV charging stations.
arXiv Detail & Related papers (2024-07-04T08:23:03Z)
- Efficient Adversarial Training in LLMs with Continuous Attacks [99.5882845458567]
Large language models (LLMs) are vulnerable to adversarial attacks that can bypass their safety guardrails.
We propose a fast adversarial training algorithm (C-AdvUL) composed of two losses.
C-AdvIPO is an adversarial variant of IPO that does not require utility data for adversarially robust alignment.
arXiv Detail & Related papers (2024-05-24T14:20:09Z)
- Fortify the Guardian, Not the Treasure: Resilient Adversarial Detectors [0.0]
An adaptive attack is one where the attacker is aware of the defenses and adapts their strategy accordingly.
Our proposed method leverages adversarial training to reinforce the ability to detect attacks, without compromising clean accuracy.
Experimental evaluations on the CIFAR-10 and SVHN datasets demonstrate that our proposed algorithm significantly improves a detector's ability to accurately identify adaptive adversarial attacks.
arXiv Detail & Related papers (2024-04-18T12:13:09Z)
- MADRL-based UAVs Trajectory Design with Anti-Collision Mechanism in Vehicular Networks [1.9662978733004604]
In upcoming 6G networks, unmanned aerial vehicles (UAVs) are expected to play a fundamental role by acting as mobile base stations.
One of the most challenging problems is the design of trajectories for multiple UAVs, cooperatively serving the same area.
We propose a rank-based binary masking approach to address these issues.
arXiv Detail & Related papers (2024-01-21T20:08:32Z)
- PAD: Towards Principled Adversarial Malware Detection Against Evasion Attacks [17.783849474913726]
We propose a new adversarial training framework, termed Principled Adversarial Malware Detection (PAD)
PAD relies on a learnable convex measurement that quantifies distribution-wise discrete perturbations to protect malware detectors from adversaries.
PAD can harden ML-based malware detection against 27 evasion attacks with detection accuracies greater than 83.45%.
It matches or outperforms many anti-malware scanners in VirusTotal against realistic adversarial malware.
arXiv Detail & Related papers (2023-02-22T12:24:49Z)
- Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA)
Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step size.
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
arXiv Detail & Related papers (2022-12-30T18:45:23Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial Attacks and Training [62.77129284830945]
This paper considers a regression problem in a wireless setting and shows that adversarial attacks can break the DL-based approach.
We also analyze the effectiveness of adversarial training as a defensive technique in adversarial settings and show that the robustness of DL-based wireless systems against attacks improves significantly.
arXiv Detail & Related papers (2022-06-14T04:55:11Z)
- Robust Deep Reinforcement Learning through Adversarial Loss [74.20501663956604]
Recent studies have shown that deep reinforcement learning agents are vulnerable to small adversarial perturbations on the agent's inputs.
We propose RADIAL-RL, a principled framework to train reinforcement learning agents with improved robustness against adversarial attacks.
arXiv Detail & Related papers (2020-08-05T07:49:42Z)
- Stealthy and Efficient Adversarial Attacks against Deep Reinforcement Learning [30.46580767540506]
We introduce two novel adversarial attack techniques to stealthily and efficiently attack Deep Reinforcement Learning agents.
The first technique is the critical point attack: the adversary builds a model to predict future environmental states and the agent's actions, assesses the damage of each possible attack strategy, and selects the optimal one.
The second technique is the antagonist attack: the adversary automatically learns a domain-agnostic model to discover the critical moments to attack the agent in an episode.
arXiv Detail & Related papers (2020-05-14T16:06:38Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to User and Entity Behaviour Analytics (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate this attack by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.