AdIoTack: Quantifying and Refining Resilience of Decision Tree Ensemble
Inference Models against Adversarial Volumetric Attacks on IoT Networks
- URL: http://arxiv.org/abs/2203.09792v1
- Date: Fri, 18 Mar 2022 08:18:03 GMT
- Title: AdIoTack: Quantifying and Refining Resilience of Decision Tree Ensemble
Inference Models against Adversarial Volumetric Attacks on IoT Networks
- Authors: Arman Pashamokhtari and Gustavo Batista and Hassan Habibi Gharakheili
- Abstract summary: We present AdIoTack, a system that highlights vulnerabilities of decision trees against adversarial attacks.
To assess the model for the worst-case scenario, AdIoTack performs white-box adversarial learning to launch successful volumetric attacks.
We demonstrate how the model detects all non-adversarial volumetric attacks on IoT devices while missing many adversarial ones.
- Score: 1.1172382217477126
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine Learning-based techniques have shown success in cyber intelligence.
However, they are increasingly becoming targets of sophisticated data-driven
adversarial attacks resulting in misprediction, eroding their ability to detect
threats on network devices. In this paper, we present AdIoTack, a system that
highlights vulnerabilities of decision trees against adversarial attacks,
helping cybersecurity teams quantify and refine the resilience of their trained
models for monitoring IoT networks. To assess the model for the worst-case
scenario, AdIoTack performs white-box adversarial learning to launch successful
volumetric attacks that decision tree ensemble models cannot flag. Our first
contribution is to develop a white-box algorithm that takes a trained decision
tree ensemble model and the profile of an intended network-based attack on a
victim class as inputs. It then automatically generates recipes that specify
certain packets on top of the intended attack packets (less than 15% overhead)
that together can bypass the inference model unnoticed. We ensure that the
generated attack instances are feasible for launching on IP networks and
effective in their volumetric impact. Our second contribution develops a method
to monitor the network behavior of connected devices actively, inject
adversarial traffic (when feasible) on behalf of a victim IoT device, and
successfully launch the intended attack. Our third contribution prototypes
AdIoTack and validates its efficacy on a testbed consisting of a handful of
real IoT devices monitored by a trained inference model. We demonstrate how the
model detects all non-adversarial volumetric attacks on IoT devices while
missing many adversarial ones. The fourth contribution develops systematic
methods for applying patches to trained decision tree ensemble models,
improving their resilience against adversarial volumetric attacks.
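
The first contribution (generating low-overhead adversarial "recipes" against a white-box decision tree ensemble) lends itself to a small illustration. The sketch below is a hypothetical, simplified take on the idea using scikit-learn: it assumes toy volumetric features, an increase-only constraint (an attacker can inject extra packets but cannot remove the intended attack traffic), and a crude greedy search over the ensemble's split thresholds. It is not the authors' published algorithm or evaluation setup.

```python
# Hypothetical sketch of the "recipe" idea: given a white-box tree ensemble and an
# intended volumetric attack, greedily add traffic (features can only increase)
# until the ensemble predicts benign, under a crude <15% total-volume overhead cap.
# Feature names, data, and the greedy search are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy volumetric features per time window: [pkts_out, bytes_out, pkts_in, bytes_in].
benign = rng.normal([50, 4e4, 40, 3e4], [10, 5e3, 8, 4e3], size=(500, 4))
attack = rng.normal([900, 9e5, 20, 1e4], [80, 5e4, 5, 2e3], size=(500, 4))
X = np.vstack([benign, attack]).clip(min=0)
y = np.r_[np.zeros(500), np.ones(500)]              # 0 = benign, 1 = attack

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def candidate_thresholds(clf, feat):
    """Collect every split threshold the ensemble applies to one feature."""
    ts = []
    for est in clf.estimators_:
        t = est.tree_
        ts.extend(t.threshold[t.feature == feat])
    return np.unique(ts)

def craft_recipe(clf, x, budget=0.15, mutable=(0, 1, 2, 3)):
    """Greedy increase-only search over the ensemble's split thresholds."""
    x_adv = x.astype(float)                          # working copy of the attack sample
    cap = budget * x.sum()                           # crude stand-in for "<15% overhead"
    while clf.predict(x_adv.reshape(1, -1))[0] == 1:
        best = None
        for f in mutable:
            for t in candidate_thresholds(clf, f):
                new_val = t + 1e-6                   # step just past the split
                added = (x_adv - x).sum() + (new_val - x_adv[f])
                if new_val <= x_adv[f] or added > cap:
                    continue                         # increases only, within budget
                trial = x_adv.copy()
                trial[f] = new_val
                p_attack = clf.predict_proba(trial.reshape(1, -1))[0, 1]
                if best is None or p_attack < best[0]:
                    best = (p_attack, f, new_val)
        if best is None:
            return None                              # no feasible recipe under budget
        _, f, v = best
        x_adv[f] = v
    return x_adv - x                                 # extra volume to inject per feature

recipe = craft_recipe(clf, X[500])                   # first intended-attack sample
print("no recipe within budget" if recipe is None
      else f"inject on top of attack: {np.round(recipe, 1)}")
```

The greedy threshold search only conveys the white-box evasion idea; the paper additionally checks that generated instances are launchable on real IP networks and retain their volumetric impact.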
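The fourth contribution (patching the trained ensemble) could be approximated, purely for illustration, by folding the crafted evasive variants back into training. The few lines below continue from the previous sketch (same `clf`, `X`, `y`, `craft_recipe`); plain adversarial retraining is a stand-in here, not the paper's systematic patching method.

```python
# Continuing the sketch above: a hypothetical stand-in for "patching" the ensemble,
# here plain adversarial retraining on crafted recipes (assumption, not the paper's method).
adv_rows, adv_labels = [], []
for x0 in X[500:550]:                                # a batch of intended attacks
    r = craft_recipe(clf, x0)
    if r is not None:
        adv_rows.append(x0 + r)                      # evasive variant, still an attack
        adv_labels.append(1.0)

if adv_rows:
    X_patch = np.vstack([X, np.array(adv_rows)])
    y_patch = np.r_[y, adv_labels]
    clf_patched = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_patch, y_patch)
    print("fraction of evasive variants flagged after patching:",
          clf_patched.predict(np.array(adv_rows)).mean())
```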
Related papers
- Discretization-based ensemble model for robust learning in IoT [8.33619265970446]
We propose a discretization-based ensemble stacking technique to improve the security of machine learning models.
We evaluate the performance of different ML-based IoT device identification models against white box and black box attacks.
arXiv Detail & Related papers (2023-07-18T03:48:27Z)
- Detecting Anomalous Microflows in IoT Volumetric Attacks via Dynamic Monitoring of MUD Activity [1.294952045574009]
Anomaly-based detection methods are promising in finding new attacks.
However, they face practical challenges such as false-positive alarms, limited explainability, and difficulty scaling cost-effectively.
In this paper, we use SDN to enforce and monitor the expected behaviors of each IoT device.
arXiv Detail & Related papers (2023-04-11T05:17:51Z)
- Can Adversarial Examples Be Parsed to Reveal Victim Model Information? [62.814751479749695]
In this work, we ask whether it is possible to infer data-agnostic victim model (VM) information from data-specific adversarial instances.
We collect a dataset of adversarial attacks across 7 attack types generated from 135 victim models.
We show that a simple, supervised model parsing network (MPN) is able to infer VM attributes from unseen adversarial attacks.
arXiv Detail & Related papers (2023-03-13T21:21:49Z)
- Towards Adversarial Realism and Robust Learning for IoT Intrusion Detection and Classification [0.0]
The Internet of Things (IoT) faces tremendous security challenges.
The increasing threat posed by adversarial attacks restates the need for reliable defense strategies.
This work describes the types of constraints required for an adversarial cyber-attack example to be realistic.
arXiv Detail & Related papers (2023-01-30T18:00:28Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model into failing to detect any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- Modelling DDoS Attacks in IoT Networks using Machine Learning [21.812642970826563]
TCP-specific attacks are one of the most plausible tools that attackers can use on Cyber-Physical Systems.
This study compares the effectiveness of supervised, unsupervised, and semi-supervised machine learning algorithms for detecting DDoS attacks in CPS-IoT.
arXiv Detail & Related papers (2021-12-10T12:09:26Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- Learning and Certification under Instance-targeted Poisoning [49.55596073963654]
We study PAC learnability and certification under instance-targeted poisoning attacks.
We show that when the budget of the adversary scales sublinearly with the sample complexity, PAC learnability and certification are achievable.
We empirically study the robustness of K nearest neighbour, logistic regression, multi-layer perceptron, and convolutional neural network on real data sets.
arXiv Detail & Related papers (2021-05-18T17:48:15Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
- Defense for Black-box Attacks on Anti-spoofing Models by Self-Supervised Learning [71.17774313301753]
We explore the robustness of self-supervised learned high-level representations by using them in the defense against adversarial attacks.
Experimental results on the ASVspoof 2019 dataset demonstrate that high-level representations extracted by Mockingjay can prevent the transferability of adversarial examples.
arXiv Detail & Related papers (2020-06-05T03:03:06Z) - Adversarial Attacks on Machine Learning Cybersecurity Defences in
Industrial Control Systems [2.86989372262348]
This paper explores how adversarial learning can be used to target supervised models by generating adversarial samples.
It also explores how such samples can support the robustness of supervised models using adversarial training.
Overall, the classification performance of two widely used classifiers, Random Forest and J48, decreased by 16 and 20 percentage points when adversarial samples were present.
arXiv Detail & Related papers (2020-04-10T12:05:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.