RL and Fingerprinting to Select Moving Target Defense Mechanisms for
Zero-day Attacks in IoT
- URL: http://arxiv.org/abs/2212.14647v1
- Date: Fri, 30 Dec 2022 12:15:59 GMT
- Title: RL and Fingerprinting to Select Moving Target Defense Mechanisms for
Zero-day Attacks in IoT
- Authors: Alberto Huertas Celdrán, Pedro Miguel Sánchez Sánchez, Jan von
der Assen, Timo Schenk, Gérôme Bovet, Gregorio Martínez Pérez,
Burkhard Stiller
- Abstract summary: Cybercriminals are moving towards zero-day attacks affecting resource-constrained devices.
Moving Target Defense is a promising approach to mitigate attacks by dynamically altering target attack surfaces.
This paper proposes an online RL-based framework to learn the correct MTD mechanisms mitigating heterogeneous zero-day attacks in SBCs.
- Score: 0.5172201569251684
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cybercriminals are moving towards zero-day attacks affecting
resource-constrained devices such as single-board computers (SBC). Assuming
that perfect security is unrealistic, Moving Target Defense (MTD) is a
promising approach to mitigate attacks by dynamically altering target attack
surfaces. Still, selecting suitable MTD techniques for zero-day attacks is an
open challenge. Reinforcement Learning (RL) could be an effective approach to
optimize the MTD selection through trial and error, but the literature falls short
in i) evaluating the performance of RL and MTD solutions in real-world
scenarios, ii) studying whether behavioral fingerprinting is suitable for
representing SBCs' states, and iii) calculating the resource consumption of such
solutions on SBCs. To address these limitations, the work at hand proposes an online RL-based
framework to learn the correct MTD mechanisms mitigating heterogeneous zero-day
attacks in SBCs. The framework considers behavioral fingerprinting to represent
SBCs' states and RL to learn MTD techniques that mitigate each malicious state.
It has been deployed in a real IoT crowdsensing scenario with a Raspberry Pi
acting as a spectrum sensor. More specifically, the Raspberry Pi has been infected
with different samples of command and control malware, rootkits, and ransomware
to later select between four existing MTD techniques. A set of experiments
demonstrated the suitability of the framework to learn proper MTD techniques
mitigating all attacks (except one rootkit) while consuming <1 MB of
storage and utilizing <55% CPU and <80% RAM.
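As a rough illustration of the selection loop described above, the following Python sketch (not the authors' implementation) maps a behavioral fingerprint to a discrete state and lets a tabular Q-learning agent choose among four MTD actions, rewarding it when the device returns to a benign state. The fingerprint features, MTD action names, helper functions, and reward values are illustrative assumptions.

```python
# Minimal sketch: online Q-learning selecting an MTD technique from a
# device behavioral fingerprint. All names and values are illustrative.
import random
from collections import defaultdict

MTD_ACTIONS = ["shuffle_ports", "rotate_ip", "restart_service", "randomize_filesystem"]  # hypothetical

def fingerprint_to_state(fingerprint: dict) -> tuple:
    """Discretize a behavioral fingerprint (e.g., syscall counts, CPU, I/O)
    into a coarse, hashable state so a tabular agent can index it."""
    return tuple(round(v, 1) for v in fingerprint.values())

class MTDSelector:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.q = defaultdict(lambda: [0.0] * len(MTD_ACTIONS))
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def select(self, state):
        # Epsilon-greedy exploration over the four MTD techniques.
        if random.random() < self.epsilon:
            return random.randrange(len(MTD_ACTIONS))
        return max(range(len(MTD_ACTIONS)), key=lambda a: self.q[state][a])

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update.
        td_target = reward + self.gamma * max(self.q[next_state])
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])

# Online loop (pseudo-environment): deploy the chosen MTD, re-fingerprint
# the device, and reward the agent if the malicious state has disappeared.
# agent = MTDSelector()
# state = fingerprint_to_state(collect_fingerprint())       # hypothetical helper
# action = agent.select(state)
# deploy_mtd(MTD_ACTIONS[action])                            # hypothetical helper
# next_state = fingerprint_to_state(collect_fingerprint())
# reward = 1.0 if is_benign(next_state) else -1.0            # hypothetical helper
# agent.update(state, action, reward, next_state)
```

In the paper, the states come from the SBC's behavioral fingerprints and the four actions are existing MTD techniques; everything else in the sketch (discretization granularity, hyperparameters, reward shape) is a placeholder.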
Related papers
- Leveraging MTD to Mitigate Poisoning Attacks in Decentralized FL with Non-IID Data [9.715501137911552]
This paper proposes a framework that employs the Moving Target Defense (MTD) approach to bolster the robustness of DFL models.
By continuously modifying the attack surface of the DFL system, this framework aims to mitigate poisoning attacks effectively.
arXiv Detail & Related papers (2024-09-28T10:09:37Z)
- MTDSense: AI-Based Fingerprinting of Moving Target Defense Techniques in Software-Defined Networking [10.55674383602625]
Moving target defenses (MTD) are proactive security techniques that enhance network security by confusing the attacker and limiting their attack window.
We propose a novel approach named MTDSense, which can determine when the MTD has been triggered using the footprints the MTD operation leaves in the network traffic.
An attacker can use this information to maximize their attack window and tailor their attacks, which has been shown to significantly reduce the effectiveness of MTD.
arXiv Detail & Related papers (2024-08-07T13:26:00Z)
- Defense against Joint Poison and Evasion Attacks: A Case Study of DERMS [2.632261166782093]
We propose the first framework of IDS that is robust against joint poisoning and evasion attacks.
We verify the robustness of our method on the IEEE-13 bus feeder model against a diverse set of poisoning and evasion attack scenarios.
arXiv Detail & Related papers (2024-05-05T16:24:30Z)
- Action-Quantized Offline Reinforcement Learning for Robotic Skill Learning [68.16998247593209]
The offline reinforcement learning (RL) paradigm provides a recipe to convert static behavior datasets into policies that can perform better than the policy that collected the data.
In this paper, we propose an adaptive scheme for action quantization.
We show that several state-of-the-art offline RL methods such as IQL, CQL, and BRAC improve in performance on benchmarks when combined with our proposed discretization scheme.
arXiv Detail & Related papers (2023-10-18T06:07:10Z)
- CyberForce: A Federated Reinforcement Learning Framework for Malware Mitigation [6.22761577977019]
CyberForce is a framework that combines Federated and Reinforcement Learning (FRL) to learn suitable MTD techniques for mitigating zero-day attacks.
Experiments show that CyberForce learns the MTD technique mitigating each attack faster than existing RL-based centralized approaches.
Different aggregation algorithms used during the agent learning process provide CyberForce with notable robustness to malicious attacks; a minimal federated-aggregation sketch is given after this list.
arXiv Detail & Related papers (2023-08-11T07:25:12Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- Adversarial attacks and defenses on ML- and hardware-based IoT device fingerprinting and identification [0.0]
This work proposes an LSTM-CNN architecture based on hardware performance behavior for individual device identification.
Previous techniques have been compared with the proposed architecture using a hardware performance dataset collected from 45 Raspberry Pi devices.
Adversarial training and model distillation defense techniques are selected to improve the model's resilience to evasion attacks.
arXiv Detail & Related papers (2022-12-30T13:11:35Z)
- Versatile Weight Attack via Flipping Limited Bits [68.45224286690932]
We study a novel attack paradigm, which modifies model parameters in the deployment stage.
Considering the effectiveness and stealthiness goals, we provide a general formulation to perform the bit-flip based weight attack.
We present two cases of the general formulation with different malicious purposes, i.e., single sample attack (SSA) and triggered samples attack (TSA).
arXiv Detail & Related papers (2022-07-25T03:24:58Z)
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
- Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits [55.740716446995805]
We study a novel attack paradigm, which modifies model parameters in the deployment stage for malicious purposes.
Our goal is to misclassify a specific sample into a target class without any sample modification.
By utilizing the latest technique in integer programming, we equivalently reformulate this binary integer programming (BIP) problem as a continuous optimization problem.
arXiv Detail & Related papers (2021-02-21T03:13:27Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that does not only encompass and generalize previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
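For the CyberForce entry above, the following hypothetical sketch shows one way the federated piece could look: each device learns a local Q-table for MTD selection and a server averages them, FedAvg-style. The aggregation rule, table layout, and usage names are assumptions; the paper's actual aggregation algorithms and model representation may differ, and robust alternatives (e.g., coordinate-wise median) could replace the plain mean.

```python
# Hypothetical FedAvg-style aggregation of tabular Q-values for MTD selection.
# Each client holds a dict mapping state -> list of Q-values (one per MTD action).
from collections import defaultdict
from statistics import mean

def aggregate_q_tables(client_q_tables, n_actions=4):
    """Average Q-values across clients, state by state (plain mean)."""
    merged = defaultdict(list)
    for table in client_q_tables:
        for state, q_values in table.items():
            merged[state].append(q_values)
    # For states seen by only some clients, average over the clients that have them.
    return {
        state: [mean(qs[a] for qs in q_lists) for a in range(n_actions)]
        for state, q_lists in merged.items()
    }

# Usage: after a round of local learning, each device uploads its Q-table and
# downloads the aggregated one as the starting point for the next round.
# global_q = aggregate_q_tables([device_a.q, device_b.q, device_c.q])
```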