A Novel and Practical Universal Adversarial Perturbations against Deep Reinforcement Learning based Intrusion Detection Systems
- URL: http://arxiv.org/abs/2511.18223v1
- Date: Sat, 22 Nov 2025 23:52:01 GMT
- Title: A Novel and Practical Universal Adversarial Perturbations against Deep Reinforcement Learning based Intrusion Detection Systems
- Authors: H. Zhang, L. Zhang, G. Epiphaniou, C. Maple
- Abstract summary: Intrusion Detection Systems (IDS) play a vital role in defending modern cyber physical systems against increasingly sophisticated cyber threats. Recent studies reveal their vulnerability to adversarial attacks, including Universal Adversarial Perturbations (UAPs). We propose a novel UAP attack against Deep Reinforcement Learning (DRL)-based IDS under domain-specific constraints derived from network data rules and feature relationships.
- Score: 1.2268699896617543
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Intrusion Detection Systems (IDS) play a vital role in defending modern cyber physical systems against increasingly sophisticated cyber threats. Deep Reinforcement Learning (DRL)-based IDS have shown promise due to their adaptive and generalization capabilities. However, recent studies reveal their vulnerability to adversarial attacks, including Universal Adversarial Perturbations (UAPs), which can deceive models with a single, input-agnostic perturbation. In this work, we propose a novel UAP attack against DRL-based IDS under domain-specific constraints derived from network data rules and feature relationships. To the best of our knowledge, no existing study has explored UAP generation for DRL-based IDS. This is also the first work to develop a UAP against a DRL-based IDS under realistic domain constraints that capture not only basic domain rules but also mathematical relations between features. Furthermore, we enhance the evasion performance of the proposed UAP by introducing a customized loss function based on the Pearson Correlation Coefficient (PCC), and we denote the result as the Customized UAP; to the best of our knowledge, this is also the first use of the PCC in UAP generation, even in the broader context. Four additional established UAP baselines are implemented for a comprehensive comparison. Experimental results demonstrate that our proposed Customized UAP outperforms two input-dependent attacks, the Fast Gradient Sign Method (FGSM) and the Basic Iterative Method (BIM), as well as the four UAP baselines, highlighting its effectiveness for real-world adversarial scenarios.
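A minimal PyTorch sketch of how such a constrained, PCC-regularized universal perturbation might be optimized against the classifier head of a DRL agent. The model interface, the per-feature bounds used as a crude stand-in for the paper's domain constraints, the sign and weight of the PCC term, and all hyperparameters are illustrative assumptions, not the authors' formulation:

```python
# Hedged sketch of a "Customized UAP": one input-agnostic perturbation is
# optimized over a batch of flows so the IDS misclassifies them, with a
# Pearson Correlation Coefficient (PCC) term in the loss and clamping to
# per-feature ranges standing in for the paper's domain constraints.
import torch
import torch.nn as nn

def pearson_corr(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """PCC between two same-shaped tensors, computed over all elements."""
    a = a.flatten() - a.flatten().mean()
    b = b.flatten() - b.flatten().mean()
    return (a * b).sum() / (a.norm() * b.norm() + 1e-12)

def customized_uap(model, x, y, eps=0.1, lam=0.5, steps=200, lr=1e-2):
    """Optimize a single perturbation `delta` shared by every sample in x."""
    lo, hi = x.min(dim=0).values, x.max(dim=0).values  # assumed valid ranges
    delta = torch.zeros(x.shape[1], requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    ce = nn.CrossEntropyLoss()
    for _ in range(steps):
        x_adv = torch.clamp(x + delta, lo, hi)   # keep features domain-valid
        # Maximize classification loss while keeping the perturbed flows
        # statistically correlated with the clean ones (assumed reading of
        # the PCC-based customized loss; the paper's exact form may differ).
        loss = -ce(model(x_adv), y) - lam * pearson_corr(x_adv, x)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)              # L-inf perturbation budget
    return delta.detach()
```

Because `delta` is shared across the whole batch, whatever survives this optimization is input-agnostic by construction: at test time the attacker adds the same `delta` to every flow, which is what makes the perturbation universal.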
Related papers
- Spoofing-aware Prompt Learning for Unified Physical-Digital Facial Attack Detection [28.74960061024677]
Real-world face recognition systems are vulnerable to both physical presentation attacks (PAs) and digital forgery attacks (DFs).
We propose a Spoofing-aware Prompt Learning for Unified Attack Detection (SPL-UAD) framework, which decouples optimization branches for physical and digital attacks in the prompt space.
Experiments on the large-scale UniAttackDataPlus dataset demonstrate that the proposed method achieves significant performance improvements in unified attack detection tasks.
arXiv Detail & Related papers (2025-12-06T09:34:39Z)
- SPEAR++: Scaling Gradient Inversion via Sparsely-Used Dictionary Learning [48.41770886055744]
Federated Learning has seen increased deployment in real-world scenarios recently.
The introduction of so-called gradient inversion attacks has challenged its privacy-preserving properties.
We introduce SPEAR, which is based on a theoretical analysis of the gradients of linear layers with ReLU activations; a minimal single-sample illustration of this leakage appears after this entry.
Our new attack, SPEAR++, retains all desirable properties of SPEAR, such as robustness to DP noise and FedAvg aggregation.
arXiv Detail & Related papers (2025-10-28T09:06:19Z)
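A hedged aside on why such attacks work at all: for a linear layer and a single sample, the weight gradient is the rank-one outer product of the output gradient and the input, so the input is recoverable up to scale from any nonzero row. The sketch below demonstrates only this textbook single-sample case; SPEAR's dictionary-learning machinery for batched gradients is not shown.

```python
# Single-sample gradient leakage from a linear layer: dL/dW = (dL/dy) x^T is
# rank one, so the private input x can be read off (up to scale) from any
# nonzero row of the shared gradient.
import torch

torch.manual_seed(0)
x = torch.randn(16)                          # private input
W = torch.randn(4, 16, requires_grad=True)   # linear layer weights
loss = (W @ x).sum()                         # any scalar loss works for the demo
loss.backward()

g = W.grad                                   # rank-1: every row is a multiple of x
row = g[g.abs().sum(dim=1).argmax()]         # pick the strongest row
x_hat = row / row.norm() * x.norm()          # reconstruct up to sign and scale

cos = torch.dot(x_hat, x) / (x_hat.norm() * x.norm() + 1e-12)
print(f"|cosine(x_hat, x)| = {cos.abs():.4f}")  # ~1.0000: input recovered
```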
- Differential Privacy: Gradient Leakage Attacks in Federated Learning Environments [0.6850683267295249]
Federated Learning (FL) allows for the training of Machine Learning models in a collaborative manner without the need to share sensitive data.
It remains vulnerable to Gradient Leakage Attacks (GLAs), which can reveal private information from the shared model updates.
We investigate the effectiveness of Differential Privacy mechanisms as defenses against GLAs.
arXiv Detail & Related papers (2025-10-27T23:33:21Z)
- Unsupervised Fingerphoto Presentation Attack Detection With Diffusion Models [8.979820109339286]
Smartphone-based contactless fingerphoto authentication has become a reliable alternative to traditional contact-based fingerprint biometric systems.
Despite its convenience, fingerprint authentication through fingerphotos is more vulnerable to presentation attacks.
We propose a novel unsupervised approach based on a state-of-the-art deep-learning-based diffusion model, the Denoising Diffusion Probabilistic Model (DDPM).
The proposed approach detects Presentation Attacks (PA) by calculating the reconstruction similarity between the input and output pairs of the DDPM (see the sketch after this entry).
arXiv Detail & Related papers (2024-09-27T11:07:48Z)
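A minimal sketch of the reconstruction-similarity scoring this summary describes. The `reconstruct` method (partially noise the fingerphoto, then denoise it back), the MSE similarity metric, and the threshold are assumptions, not the paper's implementation:

```python
# Reconstruction-based PAD scoring: bona fide fingerphotos, which match the
# DDPM's training distribution, should reconstruct faithfully; presentation
# attacks should not. `ddpm.reconstruct` is a hypothetical hook for a
# forward-noising + reverse-denoising pass of a trained diffusion model.
import torch
import torch.nn.functional as F

def pad_score(ddpm, x: torch.Tensor) -> torch.Tensor:
    """Per-image score; higher means a worse reconstruction (more attack-like)."""
    x_rec = ddpm.reconstruct(x)                       # hypothetical API
    return F.mse_loss(x_rec, x, reduction="none").flatten(1).mean(dim=1)

def is_attack(ddpm, x: torch.Tensor, threshold: float = 0.05) -> torch.Tensor:
    """The threshold would be tuned on bona fide validation data."""
    return pad_score(ddpm, x) > threshold
```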
- The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks [90.52808174102157]
In safety-critical applications such as medical imaging and autonomous driving, it is imperative to maintain high adversarial robustness to protect against potential adversarial attacks.
A notable knowledge gap remains concerning the uncertainty inherent in adversarially trained models.
This study investigates the uncertainty of deep learning models by examining the performance of conformal prediction (CP) in the context of standard adversarial attacks.
arXiv Detail & Related papers (2024-05-14T18:05:19Z)
- Learn from the Past: A Proxy Guided Adversarial Defense Framework with Self Distillation Regularization [53.04697800214848]
Adversarial Training (AT) is pivotal in fortifying the robustness of deep learning models.
AT methods, relying on direct iterative updates for the target model's defense, frequently encounter obstacles such as unstable training and catastrophic overfitting.
We present a general proxy-guided defense framework, LAST (Learn from the Past).
arXiv Detail & Related papers (2023-10-19T13:13:41Z)
- DAD++: Improved Data-free Test Time Adversarial Defense [12.606555446261668]
We propose a test time Data-free Adversarial Defense (DAD) containing detection and correction frameworks.
We conduct a wide range of experiments and ablations on several datasets and network architectures to show the efficacy of our proposed approach.
Our DAD++ gives an impressive performance against various adversarial attacks with a minimal drop in clean accuracy.
arXiv Detail & Related papers (2023-09-10T20:39:53Z)
- Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial Attacks and Training [62.77129284830945]
This paper considers a regression problem in a wireless setting and shows that adversarial attacks can break the DL-based approach.
We also analyze the effectiveness of adversarial training as a defensive technique in adversarial settings and show that the robustness of DL-based wireless systems against attacks improves significantly.
arXiv Detail & Related papers (2022-06-14T04:55:11Z)
- Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation [100.69458267888962]
Face presentation attack detection (fPAD) plays a critical role in the modern face recognition pipeline.
Due to legal and privacy issues, training data (real face images and spoof images) are not allowed to be directly shared between different data sources.
We propose a Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation framework.
arXiv Detail & Related papers (2021-10-25T02:51:05Z)
- Taming Self-Supervised Learning for Presentation Attack Detection: De-Folding and De-Mixing [42.733666815035534]
Biometric systems are vulnerable to Presentation Attacks performed using various Presentation Attack Instruments (PAIs).
We propose a self-supervised learning-based method, denoted as DF-DM.
DF-DM is based on a global-local view coupled with De-Folding and De-Mixing to derive the task-specific representation for PAD.
arXiv Detail & Related papers (2021-09-09T08:38:17Z)
- Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates that guarantee that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input.
arXiv Detail & Related papers (2021-06-21T21:42:08Z)
- Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations [88.94162416324505]
A deep reinforcement learning (DRL) agent observes its states through observations, which may contain natural measurement errors or adversarial noises.
Since the observations deviate from the true states, they can mislead the agent into making suboptimal actions.
We show that naively applying existing techniques on improving robustness for classification tasks, like adversarial training, is ineffective for many RL tasks.
arXiv Detail & Related papers (2020-03-19T17:59:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.