Towards Adversarial Realism and Robust Learning for IoT Intrusion
Detection and Classification
- URL: http://arxiv.org/abs/2301.13122v1
- Date: Mon, 30 Jan 2023 18:00:28 GMT
- Title: Towards Adversarial Realism and Robust Learning for IoT Intrusion
Detection and Classification
- Authors: João Vitorino, Isabel Praça, Eva Maia
- Abstract summary: The Internet of Things (IoT) faces tremendous security challenges.
The increasing threat posed by adversarial attacks restates the need for reliable defense strategies.
This work describes the types of constraints required for an adversarial cyber-attack example to be realistic.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Internet of Things (IoT) faces tremendous security challenges. Machine
learning models can be used to tackle the growing number of cyber-attack
variations targeting IoT systems, but the increasing threat posed by
adversarial attacks restates the need for reliable defense strategies. This
work describes the types of constraints required for an adversarial
cyber-attack example to be realistic and proposes a methodology for a
trustworthy adversarial robustness analysis with a realistic adversarial
evasion attack vector. The proposed methodology was used to evaluate three
supervised algorithms, Random Forest (RF), Extreme Gradient Boosting (XGB), and
Light Gradient Boosting Machine (LGBM), and one unsupervised algorithm,
Isolation Forest (IFOR). Constrained adversarial examples were generated with
the Adaptative Perturbation Pattern Method (A2PM), and evasion attacks were
performed against models created with regular and adversarial training. Even
though RF was the least affected in binary classification, XGB consistently
achieved the highest accuracy in multi-class classification. The obtained
results evidence the inherent susceptibility of tree-based algorithms and
ensembles to adversarial evasion attacks and demonstrate the benefits of
adversarial training and a security-by-design approach for more robust IoT
network intrusion detection.
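As a rough illustration of this evaluation setup, the sketch below trains the four algorithms, applies a toy constrained perturbation (only a hypothetical mutable feature subset may change), and compares accuracy with and without adversarial training; it uses synthetic data and is not the authors' pipeline or the A2PM implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, IsolationForest
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def perturb(X, eps=0.2, mutable=slice(0, 10)):
    """Toy constrained perturbation: only the 'mutable' features may change."""
    rng = np.random.default_rng(0)
    X_adv = X.copy()
    X_adv[:, mutable] += rng.uniform(-eps, eps, X_adv[:, mutable].shape)
    return X_adv

models = {"RF": RandomForestClassifier(random_state=0),
          "XGB": XGBClassifier(random_state=0),
          "LGBM": LGBMClassifier(random_state=0)}
X_adv_tr, X_adv_te = perturb(X_tr), perturb(X_te)
for name, m in models.items():
    m.fit(X_tr, y_tr)                                            # regular training
    clean, evaded = m.score(X_te, y_te), m.score(X_adv_te, y_te)
    m.fit(np.vstack([X_tr, X_adv_tr]), np.hstack([y_tr, y_tr]))  # adversarial training
    print(f"{name}: clean={clean:.3f} evaded={evaded:.3f} "
          f"adv-trained={m.score(X_adv_te, y_te):.3f}")

# IFOR is unsupervised: fit on benign traffic only, then flag anomalies (-1).
ifor = IsolationForest(random_state=0).fit(X_tr[y_tr == 0])
print("IFOR on perturbed samples:", ifor.predict(X_adv_te[:5]))
```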
Related papers
- Robust Image Classification: Defensive Strategies against FGSM and PGD Adversarial Attacks [0.0]
Adversarial attacks pose significant threats to the robustness of deep learning models in image classification.
This paper explores and refines defense mechanisms against these attacks to enhance the resilience of neural networks.
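For reference, FGSM perturbs an input one step along the sign of the loss gradient, and PGD iterates that step with a projection back into the eps-ball; a minimal PyTorch sketch (any differentiable classifier works here):

```python
import torch

def fgsm(model, x, y, eps=0.03, loss_fn=torch.nn.functional.cross_entropy):
    """One-step FGSM: shift the input by eps along the loss-gradient sign."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    # PGD would repeat this step with a smaller step size plus a projection.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
```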
arXiv Detail & Related papers (2024-08-20T02:00:02Z) - Efficient Adversarial Training in LLMs with Continuous Attacks [99.5882845458567]
Large language models (LLMs) are vulnerable to adversarial attacks that can bypass their safety guardrails.
We propose a fast adversarial training algorithm (C-AdvUL) composed of two losses.
C-AdvIPO is an adversarial variant of IPO that does not require utility data for adversarially robust alignment.
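The core idea can be sketched as PGD applied directly to token embeddings rather than to discrete tokens. Below, `forward_from_embeddings` is a hypothetical callable mapping embeddings to logits; this is a generic sketch, not the paper's C-AdvUL or C-AdvIPO implementation.

```python
import torch

def continuous_attack(forward_from_embeddings, emb, y, loss_fn,
                      eps=0.05, steps=10, lr=0.01):
    """PGD-style perturbation in the continuous embedding space."""
    delta = torch.zeros_like(emb, requires_grad=True)
    for _ in range(steps):
        loss_fn(forward_from_embeddings(emb + delta), y).backward()
        with torch.no_grad():
            delta += lr * delta.grad.sign()
            delta.clamp_(-eps, eps)      # keep the perturbation bounded
            delta.grad.zero_()
    return (emb + delta).detach()
```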
arXiv Detail & Related papers (2024-05-24T14:20:09Z) - Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, while adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z) - Interpretability is a Kind of Safety: An Interpreter-based Ensemble for
Adversary Defense [28.398901783858005]
We propose an interpreter-based ensemble framework called X-Ensemble for robust adversary defense.
X-Ensemble employs a Random Forest (RF) model to combine sub-detectors into an ensemble detector that defends against hybrid adversarial attacks.
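The combination step can be pictured as stacking: each sub-detector emits a score, and an RF meta-model learns to combine them. A self-contained toy with random stand-in scores and dummy labels, not the X-Ensemble code:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
scores = rng.random((500, 3))                     # stand-in sub-detector scores
is_adv = (scores.mean(axis=1) > 0.5).astype(int)  # dummy labels for the sketch
combiner = RandomForestClassifier(random_state=0).fit(scores, is_adv)
print(combiner.predict(rng.random((5, 3))))       # 1 = flagged as adversarial
```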
arXiv Detail & Related papers (2023-04-14T04:32:06Z) - Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A
Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z) - Resisting Adversarial Attacks in Deep Neural Networks using Diverse
Decision Boundaries [12.312877365123267]
Deep learning systems are vulnerable to crafted adversarial examples, which may be imperceptible to the human eye, but can lead the model to misclassify.
We develop a new ensemble-based solution that constructs defender models with diverse decision boundaries with respect to the original model.
We present extensive experiments on standard image classification datasets, namely MNIST, CIFAR-10 and CIFAR-100, against state-of-the-art adversarial attacks.
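One simple way to obtain diverse boundaries is to train defender models on differently transformed views of the data and majority-vote at inference; a toy illustration of that idea, not the paper's construction:

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)
transforms = [lambda Z: Z, lambda Z: Z + 0.1, lambda Z: Z * 1.5]  # diverse views
defenders = [MLPClassifier(random_state=i, max_iter=1000).fit(t(X), y)
             for i, t in enumerate(transforms)]

def vote(Z):
    """Majority vote over defenders, each seeing its own input view."""
    preds = np.stack([d.predict(t(Z)) for d, t in zip(defenders, transforms)])
    return (preds.mean(axis=0) > 0.5).astype(int)

print(vote(X[:5]), y[:5])
```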
arXiv Detail & Related papers (2022-08-18T08:19:26Z) - Adaptative Perturbation Patterns: Realistic Adversarial Learning for
Robust NIDS [0.3867363075280543]
Adversarial attacks pose a major threat to machine learning and to the systems that rely on it.
This work introduces the Adaptative Perturbation Pattern Method (A2PM) to fulfill the constraints of realistic adversarial examples in a gray-box setting.
A2PM relies on pattern sequences that are independently adapted to the characteristics of each class to create valid and coherent data perturbations.
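In spirit, A2PM keeps one perturbation pattern per class and only alters features marked as mutable; the toy stand-in below uses a fixed random direction per class in place of A2PM's adapted patterns (not the published implementation):

```python
import numpy as np

def classwise_perturb(X, y, mutable_mask, scale=0.05, seed=0):
    """Perturb each sample with a per-class pattern over mutable features only."""
    rng = np.random.default_rng(seed)
    X_adv = X.copy()
    for c in np.unique(y):
        pattern = rng.uniform(-1, 1, X.shape[1]) * mutable_mask
        X_adv[y == c] += scale * pattern
    return X_adv

X = np.random.default_rng(1).random((100, 8))
y = np.random.default_rng(2).integers(0, 3, 100)
X_adv = classwise_perturb(X, y, mutable_mask=np.array([1, 1, 1, 1, 0, 0, 0, 0]))
```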
arXiv Detail & Related papers (2022-03-08T17:52:09Z) - A Comparative Analysis of Machine Learning Techniques for IoT Intrusion
Detection [0.0]
This paper presents a comparative analysis of supervised, unsupervised and reinforcement learning techniques on nine malware captures of the IoT-23 dataset.
The developed models consisted of Support Vector Machine (SVM), Extreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LightGBM), Isolation Forest (iForest), Local Outlier Factor (LOF), and a Deep Reinforcement Learning (DRL) model based on a Double Deep Q-Network (DDQN).
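The models named above map directly onto common Python libraries; a sketch of how such a comparison could be instantiated (the DRL/DDQN model is omitted, since it needs a custom training loop):

```python
from sklearn.svm import SVC
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

supervised = {"SVM": SVC(), "XGBoost": XGBClassifier(), "LightGBM": LGBMClassifier()}
unsupervised = {"iForest": IsolationForest(),
                "LOF": LocalOutlierFactor(novelty=True)}  # novelty=True enables predict()
```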
arXiv Detail & Related papers (2021-11-25T16:14:54Z) - Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial
Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z) - Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose the adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align these features during adversarial training.
arXiv Detail & Related papers (2021-05-31T17:01:05Z) - Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve
Adversarial Robustness [79.47619798416194]
Learn2Perturb is an end-to-end feature perturbation learning approach for improving the adversarial robustness of deep neural networks.
Inspired by the Expectation-Maximization algorithm, an alternating back-propagation training algorithm is introduced to train the network and noise parameters consecutively.
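The key ingredient can be pictured as a noise-injection layer with a learnable per-feature scale, updated in alternation with the network weights; a simplified PyTorch sketch, not the authors' code:

```python
import torch
import torch.nn as nn

class NoiseInjection(nn.Module):
    """Adds zero-mean Gaussian noise with a learnable per-feature scale."""
    def __init__(self, num_features):
        super().__init__()
        self.log_sigma = nn.Parameter(torch.full((num_features,), -2.0))

    def forward(self, x):
        if self.training:
            # Learn2Perturb trains these noise parameters alternately with
            # the network weights (EM-style alternating back-propagation).
            return x + torch.exp(self.log_sigma) * torch.randn_like(x)
        return x  # no perturbation at inference
```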
arXiv Detail & Related papers (2020-03-02T18:27:35Z)