Adversarial attacks and defenses on ML- and hardware-based IoT device
fingerprinting and identification
- URL: http://arxiv.org/abs/2212.14677v1
- Date: Fri, 30 Dec 2022 13:11:35 GMT
- Title: Adversarial attacks and defenses on ML- and hardware-based IoT device
fingerprinting and identification
- Authors: Pedro Miguel Sánchez Sánchez, Alberto Huertas Celdrán,
Gérôme Bovet, Gregorio Martínez Pérez
- Abstract summary: This work proposes an LSTM-CNN architecture based on hardware performance behavior for individual device identification.
Previous techniques have been compared with the proposed architecture using a hardware performance dataset collected from 45 Raspberry Pi devices.
Adversarial training and model distillation defense techniques are selected to improve the model's resilience to evasion attacks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, the number of deployed IoT devices has grown
explosively, reaching the scale of billions. However, this growth has been
accompanied by new cybersecurity issues, such as the deployment of unauthorized
devices, malicious code modification, malware deployment, and vulnerability
exploitation. This has motivated the need for new device identification
mechanisms based on behavior monitoring. Moreover, these solutions have
recently leveraged Machine and Deep Learning (ML/DL) techniques, thanks to
advances in the field and increased processing capabilities. Meanwhile,
attackers have not stood still and have developed adversarial attacks focused
on context modification and ML/DL evasion targeting IoT device identification
solutions. This work
explores the performance of hardware behavior-based individual device
identification, how it is affected by possible context- and ML/DL-focused
attacks, and how its resilience can be improved using defense techniques. In
this sense, it proposes an LSTM-CNN architecture based on hardware performance
behavior for individual device identification. Previous techniques are then
compared with the proposed architecture using a hardware performance dataset
collected from 45 Raspberry Pi devices running identical software. The
LSTM-CNN improves on previous solutions, achieving a +0.96 average F1-Score and
a 0.8 minimum TPR across all devices. Afterward, context- and ML/DL-focused adversarial
attacks were applied against the previous model to test its robustness. A
temperature-based context attack was not able to disrupt the identification.
However, some state-of-the-art ML/DL evasion attacks were successful. Finally,
adversarial training and model distillation defense techniques are selected to
improve the model's resilience to evasion attacks without degrading its
performance.
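The evasion-attack/defense interplay described in the abstract can be sketched with a toy stand-in. The snippet below uses a logistic-regression "identifier" on synthetic data, attacks it with FGSM (one of the standard gradient-sign evasion attacks), and hardens it with adversarial training. The paper's actual LSTM-CNN architecture and hardware performance features are not reproduced here; all model and data choices below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_loss_x(w, b, x, y):
    # Gradient of binary cross-entropy w.r.t. the input x:
    # dL/dx = (p - y) * w, per sample.
    return (sigmoid(x @ w + b) - y)[:, None] * w[None, :]

def fgsm(w, b, x, y, eps):
    # Fast Gradient Sign Method: step each feature by eps in the
    # direction that increases the loss.
    return x + eps * np.sign(grad_loss_x(w, b, x, y))

def train(x, y, epochs=200, lr=0.5, adv_eps=None):
    # Plain gradient-descent training; if adv_eps is set, also fit on
    # FGSM-perturbed inputs (adversarial training).
    w = np.zeros(x.shape[1])
    b = 0.0
    for _ in range(epochs):
        if adv_eps is not None:
            xt = np.vstack([x, fgsm(w, b, x, y, adv_eps)])
            yt = np.concatenate([y, y])
        else:
            xt, yt = x, y
        p = sigmoid(xt @ w + b)
        w -= lr * (xt.T @ (p - yt)) / len(yt)
        b -= lr * np.mean(p - yt)
    return w, b

# Toy two-class dataset, separable along the first feature: stands in
# for two devices distinguished by their behavioral fingerprints.
n = 200
x = rng.normal(size=(n, 4))
y = (rng.random(n) < 0.5).astype(float)
x[:, 0] += 2.0 * (2 * y - 1)

w_plain, b_plain = train(x, y)              # undefended model
w_adv, b_adv = train(x, y, adv_eps=0.5)     # adversarially trained model

x_atk = fgsm(w_plain, b_plain, x, y, eps=1.0)
acc = lambda w, b, xx: np.mean((sigmoid(xx @ w + b) > 0.5) == (y == 1))
```

A comparison of `acc(w_plain, b_plain, x_atk)` against `acc(w_adv, b_adv, ...)` shows the defense's effect: adversarial training trades a small amount of clean accuracy for robustness to the perturbations seen during training, the same trade-off the paper evaluates for its LSTM-CNN.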
Related papers
- MIBench: A Comprehensive Benchmark for Model Inversion Attack and Defense [43.71365087852274]
Model Inversion (MI) attacks aim at leveraging the output information of target models to reconstruct privacy-sensitive training data.
The lack of a comprehensive, aligned, and reliable benchmark has emerged as a formidable challenge.
We introduce MIBench, the first practical benchmark for model inversion attacks and defenses, to address this critical gap.
arXiv Detail & Related papers (2024-10-07T16:13:49Z) - FedMADE: Robust Federated Learning for Intrusion Detection in IoT Networks Using a Dynamic Aggregation Method [7.842334649864372]
The proliferation of Internet of Things (IoT) devices across multiple sectors has escalated serious network security concerns.
Traditional Machine Learning (ML)-based Intrusion Detection Systems (IDSs) for cyber-attack classification require data transmission from IoT devices to a centralized server for traffic analysis, raising severe privacy concerns.
We introduce FedMADE, a novel dynamic aggregation method, which clusters devices by their traffic patterns and aggregates local models based on their contributions towards overall performance.
arXiv Detail & Related papers (2024-08-13T18:42:34Z) - FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outperforms the state of the art in resilient fault prediction benchmarks, with an accuracy of up to 0.958.
arXiv Detail & Related papers (2024-03-26T08:51:23Z) - Real-time Threat Detection Strategies for Resource-constrained Devices [1.4815508281465273]
We present an end-to-end process designed to effectively address DNS-tunneling attacks in a router.
We demonstrate that utilizing stateless features for training the ML model, along with features chosen to be independent of the network configuration, leads to highly accurate results.
The deployment of this carefully crafted model, optimized for embedded devices across diverse environments, resulted in high DNS-tunneling attack detection with minimal latency.
arXiv Detail & Related papers (2024-03-22T10:02:54Z) - Effective Intrusion Detection in Heterogeneous Internet-of-Things Networks via Ensemble Knowledge Distillation-based Federated Learning [52.6706505729803]
We introduce Federated Learning (FL) to collaboratively train a decentralized shared model for Intrusion Detection Systems (IDS).
FLEKD enables a more flexible aggregation method than conventional model fusion techniques.
Experiment results show that the proposed approach outperforms local training and traditional FL in terms of both speed and performance.
arXiv Detail & Related papers (2024-01-22T14:16:37Z) - Crafter: Facial Feature Crafting against Inversion-based Identity Theft
on Deep Models [45.398313126020284]
A typical application is to run machine learning services on facial images collected from different individuals.
To prevent identity theft, conventional methods rely on an adversarial game-based approach to shed the identity information from the feature.
We propose Crafter, a feature crafting mechanism deployed at the edge, to protect the identity information from adaptive model attacks.
arXiv Detail & Related papers (2024-01-14T05:06:42Z) - CyberForce: A Federated Reinforcement Learning Framework for Malware Mitigation [6.22761577977019]
CyberForce is a framework that combines Federated and Reinforcement Learning (FRL) to learn suitable MTD techniques for mitigating zero-day attacks.
Experiments show that CyberForce learns the MTD technique mitigating each attack faster than existing RL-based centralized approaches.
Different aggregation algorithms used during the agent learning process provide CyberForce with notable robustness to malicious attacks.
arXiv Detail & Related papers (2023-08-11T07:25:12Z) - Discretization-based ensemble model for robust learning in IoT [8.33619265970446]
We propose a discretization-based ensemble stacking technique to improve the security of machine learning models.
We evaluate the performance of different ML-based IoT device identification models against white box and black box attacks.
arXiv Detail & Related papers (2023-07-18T03:48:27Z) - RelaxLoss: Defending Membership Inference Attacks without Losing Utility [68.48117818874155]
We propose a novel training framework based on a relaxed loss with a more achievable learning target.
RelaxLoss is applicable to any classification model with added benefits of easy implementation and negligible overhead.
Our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against MIAs.
arXiv Detail & Related papers (2022-07-12T19:34:47Z) - Robust Federated Learning for execution time-based device model
identification under label-flipping attack [0.0]
Device spoofing and impersonation cyberattacks stand out due to their impact and the typically low complexity required to launch them.
Several solutions have emerged to identify device models and types based on the combination of behavioral fingerprinting and Machine/Deep Learning (ML/DL) techniques.
New approaches such as Federated Learning (FL) have not been fully explored yet, especially when malicious clients are present in the scenario setup.
arXiv Detail & Related papers (2021-11-29T10:27:14Z) - Adversarial EXEmples: A Survey and Experimental Evaluation of Practical
Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that does not only encompass and generalize previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.