Constrained Network Adversarial Attacks: Validity, Robustness, and Transferability
- URL: http://arxiv.org/abs/2505.01328v1
- Date: Fri, 02 May 2025 15:01:42 GMT
- Title: Constrained Network Adversarial Attacks: Validity, Robustness, and Transferability
- Authors: Anass Grini, Oumaima Taheri, Btissam El Khamlichi, Amal El Fallah-Seghrouchni
- Abstract summary: Research reveals a critical flaw in existing adversarial attack methodologies. We show that the frequent violation of domain-specific constraints, inherent to IoT and network traffic, leads to up to 80.3% of adversarial examples being invalid. This work underscores the importance of considering both domain constraints and model architecture when evaluating and designing robust ML/DL models for security-critical IoT and network applications.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While machine learning has significantly advanced Network Intrusion Detection Systems (NIDS), particularly within IoT environments where devices generate large volumes of data and are increasingly susceptible to cyber threats, these models remain vulnerable to adversarial attacks. Our research reveals a critical flaw in existing adversarial attack methodologies: the frequent violation of domain-specific constraints, such as numerical and categorical limits, inherent to IoT and network traffic. This leads to up to 80.3% of adversarial examples being invalid, significantly overstating real-world vulnerabilities. These invalid examples, though effective in fooling models, do not represent feasible attacks within practical IoT deployments. Consequently, relying on these results can mislead resource allocation for defense, inflating the perceived susceptibility of IoT-enabled NIDS models to adversarial manipulation. Furthermore, we demonstrate that simpler surrogate models like Multi-Layer Perceptron (MLP) generate more valid adversarial examples compared to complex architectures such as CNNs and LSTMs. Using the MLP as a surrogate, we analyze the transferability of adversarial severity to other ML/DL models commonly used in IoT contexts. This work underscores the importance of considering both domain constraints and model architecture when evaluating and designing robust ML/DL models for security-critical IoT and network applications.
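The central claim is that perturbations which ignore the feature schema of network traffic produce records that could never occur on the wire. A minimal sketch of a validity filter for such records is shown below; it is not the authors' implementation, and the feature indices, bounds, integer columns, and one-hot protocol group are hypothetical placeholders for whatever schema a given NIDS dataset actually uses.

```python
import numpy as np

# Hypothetical feature schema for a flow-based NIDS dataset.
# Indices, ranges, and integrality are illustrative assumptions,
# not the schema used in the paper.
NUMERIC_BOUNDS = {
    0: (0.0, 65535.0),   # src_port
    1: (0.0, 65535.0),   # dst_port
    2: (0.0, np.inf),    # flow_duration must be non-negative
    3: (0.0, np.inf),    # total_bytes must be non-negative
}
INTEGER_COLS = [0, 1, 3]          # ports and byte counts must stay integral
ONE_HOT_GROUPS = [[4, 5, 6]]      # e.g. protocol one-hot: TCP/UDP/ICMP


def is_valid(x, atol=1e-6):
    """Return True if a (possibly perturbed) feature vector respects the
    domain constraints of the schema above."""
    for col, (lo, hi) in NUMERIC_BOUNDS.items():
        if not (lo - atol <= x[col] <= hi + atol):
            return False
    for col in INTEGER_COLS:
        if abs(x[col] - round(x[col])) > atol:
            return False
    for group in ONE_HOT_GROUPS:
        vals = x[group]
        # every entry must be 0 or 1, and exactly one entry active
        if np.any((np.abs(vals) > atol) & (np.abs(vals - 1.0) > atol)):
            return False
        if abs(vals.sum() - 1.0) > atol:
            return False
    return True


def invalid_rate(adv_examples):
    """Fraction of adversarial examples violating at least one constraint."""
    return float(np.mean([not is_valid(x) for x in adv_examples]))
```

Filtering adversarial sets through a check like this before reporting attack success rates avoids counting infeasible records (fractional ports, negative durations, multi-hot protocol flags) as real-world vulnerabilities.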
Related papers
- REAL-IoT: Characterizing GNN Intrusion Detection Robustness under Practical Adversarial Attack [5.881825061973424]
Graph Neural Network (GNN)-based network intrusion detection systems (NIDS) are often evaluated on single datasets. We propose REAL-IoT, a comprehensive framework for robustness evaluation of GNN-based NIDS in IoT environments.
arXiv Detail & Related papers (2025-07-14T22:10:08Z) - Exploiting Edge Features for Transferable Adversarial Attacks in Distributed Machine Learning [54.26807397329468]
This work explores a previously overlooked vulnerability in distributed deep learning systems. An adversary who intercepts the intermediate features transmitted between distributed components can still pose a serious threat. We propose an exploitation strategy specifically designed for distributed settings.
arXiv Detail & Related papers (2025-07-09T20:09:00Z) - R-TPT: Improving Adversarial Robustness of Vision-Language Models through Test-Time Prompt Tuning [97.49610356913874]
We propose a robust test-time prompt tuning (R-TPT) method for vision-language models (VLMs). R-TPT mitigates the impact of adversarial attacks during the inference stage. We introduce a plug-and-play reliability-based weighted ensembling strategy to strengthen the defense.
arXiv Detail & Related papers (2025-04-15T13:49:31Z) - Transferable Adversarial Attacks on SAM and Its Downstream Models [87.23908485521439]
This paper explores the feasibility of adversarial attacks on various downstream models fine-tuned from the Segment Anything Model (SAM). To enhance the effectiveness of the adversarial attack towards models fine-tuned on unknown datasets, we propose a universal meta-initialization (UMI) algorithm.
arXiv Detail & Related papers (2024-10-26T15:04:04Z) - Development of an Edge Resilient ML Ensemble to Tolerate ICS Adversarial Attacks [0.9437165725355702]
We build a resilient edge machine learning (reML) architecture that is designed to withstand adversarial attacks.
The reML architecture is based on the Resilient DDDAS paradigm, Moving Target Defense (MTD) theory, and TinyML.
The proposed approach is power-efficient and privacy-preserving and, therefore, can be deployed on power-constrained devices to enhance ICS security.
arXiv Detail & Related papers (2024-09-26T19:37:37Z) - Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system with respect to such perturbations.
arXiv Detail & Related papers (2024-02-07T21:58:40Z) - Effective Intrusion Detection in Heterogeneous Internet-of-Things Networks via Ensemble Knowledge Distillation-based Federated Learning [52.6706505729803]
We introduce Federated Learning (FL) to collaboratively train a decentralized shared model for Intrusion Detection Systems (IDS).
The proposed FLEKD (federated learning via ensemble knowledge distillation) enables a more flexible aggregation method than conventional model fusion techniques.
Experiment results show that the proposed approach outperforms local training and traditional FL in terms of both speed and performance.
arXiv Detail & Related papers (2024-01-22T14:16:37Z) - SoK: Realistic Adversarial Attacks and Defenses for Intelligent Network Intrusion Detection [0.0]
This paper consolidates and summarizes the state-of-the-art adversarial learning approaches that can generate realistic examples.
It defines the fundamental properties that are required for an adversarial example to be realistic.
It provides guidelines for researchers to ensure that their future experiments are adequate for a real communication network.
arXiv Detail & Related papers (2023-08-13T17:23:36Z) - Common Knowledge Learning for Generating Transferable Adversarial Examples [60.1287733223249]
This paper focuses on an important type of black-box attacks, where the adversary generates adversarial examples by a substitute (source) model.
Existing methods tend to give unsatisfactory adversarial transferability when the source and target models are from different types of DNN architectures.
We propose a common knowledge learning (CKL) framework to learn better network weights to generate adversarial examples.
arXiv Detail & Related papers (2023-07-01T09:07:12Z) - Adversarial Evasion Attacks Practicality in Networks: Testing the Impact of Dynamic Learning [1.6574413179773757]
Adversarial attacks aim to trick ML models into producing faulty predictions. Such attacks can compromise ML-based NIDSs. Our experiments indicate that continuous re-training, even without adversarial training, can reduce the effectiveness of adversarial attacks.
arXiv Detail & Related papers (2023-06-08T18:32:08Z) - Adversarial Examples in Deep Learning for Multivariate Time Series Regression [0.0]
This work explores the vulnerability of deep learning (DL) regression models to adversarial time series examples.
We craft adversarial time series examples for CNN, Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU) regression models.
The obtained results show that all the evaluated DL regression models are vulnerable to adversarial attacks, that the attacks transfer across models, and that they can thus lead to catastrophic consequences.
arXiv Detail & Related papers (2020-09-24T19:09:37Z)
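The last entry above crafts adversarial examples against DL regression models and measures their transferability. As a hedged illustration of the general technique (not that paper's exact attack), a one-step FGSM-style perturbation of a regression input in PyTorch looks roughly like the sketch below; the model, loss, and epsilon budget are assumptions.

```python
import torch

def fgsm_regression(model, x, y, epsilon=0.05):
    """One-step FGSM perturbation against a regression model.
    Illustrative only: the MSE loss and epsilon value are assumptions."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.mse_loss(model(x_adv), y)
    loss.backward()
    # Move each input feature in the direction that increases the error.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.detach()
```

Feeding the resulting x_adv to a different (target) model is the usual way to probe transferability of the attack across architectures.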