An Efficient Security Model for Industrial Internet of Things (IIoT) System Based on Machine Learning Principles
- URL: http://arxiv.org/abs/2502.06502v1
- Date: Mon, 10 Feb 2025 14:20:13 GMT
- Title: An Efficient Security Model for Industrial Internet of Things (IIoT) System Based on Machine Learning Principles
- Authors: Sahar L. Qaddoori, Qutaiba I. Ali
- Abstract summary: This paper presents a security paradigm for edge devices to defend against various internal and external threats.
The proposed security paradigm is found to be effective against various internal and external threats and can be applied to a low-cost single-board computer.
- Score: 0.0
- Abstract: This paper presents a security paradigm for edge devices to defend against various internal and external threats. The first section of the manuscript proposes employing machine learning models to identify MQTT-based (Message Queuing Telemetry Transport) attacks using an Intrusion Detection and Prevention System (IDPS) for edge nodes. Because a Machine Learning (ML) model cannot be trained directly on low-performance platforms (such as edge devices), a new methodology for updating ML models is proposed to provide a tradeoff between model performance and computational complexity. The proposed methodology involves training the model on a high-performance computing platform and then installing the trained model as a detection engine on low-performance platforms (such as the edge nodes of the edge layer) to identify new attacks. Multiple security techniques are employed in the second half of the manuscript to verify that the exchanged trained model and data files are valid and undiscoverable (information authenticity and privacy) and that the source (such as a fog node or edge device) is indeed what it claims to be (source authentication and message integrity). Finally, the proposed security paradigm is shown to be effective against various internal and external threats and can be applied to a low-cost single-board computer (SBC).
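The train-on-fog, deploy-to-edge workflow described above can be made concrete with a minimal sketch. Everything below is an assumption for illustration, not the paper's implementation: a scikit-learn random forest stands in for the detection engine, HMAC-SHA256 for source authentication and message integrity, and Fernet symmetric encryption for confidentiality, with keys assumed to be distributed out of band.

```python
# Minimal sketch of the train-on-server, verify-and-load-on-edge workflow.
# Assumptions (not from the paper): RandomForest detector, HMAC-SHA256 for
# source authentication / message integrity, Fernet for confidentiality.
import hmac, hashlib, pickle
from cryptography.fernet import Fernet
from sklearn.ensemble import RandomForestClassifier

SHARED_MAC_KEY = b"pre-shared-mac-key"  # hypothetical; distributed out of band
ENC_KEY = Fernet.generate_key()         # hypothetical shared encryption key

def fog_train_and_package(X_train, y_train):
    """Fog/server side: train the model, then encrypt and authenticate it."""
    model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
    blob = Fernet(ENC_KEY).encrypt(pickle.dumps(model))            # privacy
    tag = hmac.new(SHARED_MAC_KEY, blob, hashlib.sha256).digest()  # integrity + source auth
    return blob, tag

def edge_verify_and_load(blob, tag):
    """Edge side: accept the trained model only if the MAC verifies."""
    expected = hmac.new(SHARED_MAC_KEY, blob, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("model rejected: failed authentication/integrity check")
    return pickle.loads(Fernet(ENC_KEY).decrypt(blob))

# blob, tag = fog_train_and_package(X, y)         # on the fog node
# detector = edge_verify_and_load(blob, tag)      # on the edge SBC
# alerts = detector.predict(mqtt_feature_vectors) # lightweight inference only
```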
Related papers
- Identify Backdoored Model in Federated Learning via Individual Unlearning [7.200910949076064]
Backdoor attacks present a significant threat to the robustness of Federated Learning (FL).
We propose MASA, a method that utilizes individual unlearning on local models to identify malicious models in FL.
To the best of our knowledge, this is the first work to leverage machine unlearning to identify malicious models in FL.
arXiv Detail & Related papers (2024-11-01T21:19:47Z)
- CAMH: Advancing Model Hijacking Attack in Machine Learning [44.58778557522968]
Category-Agnostic Model Hijacking (CAMH) is a novel model hijacking attack method.
It addresses the challenges of class number mismatch, data distribution divergence, and performance balance between the original and hijacking tasks.
We demonstrate its potent attack effectiveness while ensuring minimal degradation in the performance of the original task.
arXiv Detail & Related papers (2024-08-25T07:03:01Z)
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- Performance-lossless Black-box Model Watermarking [69.22653003059031]
We propose a branch backdoor-based model watermarking protocol to protect model intellectual property.
In addition, we analyze the potential threats to the protocol and provide a secure and feasible watermarking instance for language models.
arXiv Detail & Related papers (2023-12-11T16:14:04Z)
- SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models [74.58014281829946]
We analyze the effectiveness of several representative attacks/defenses, including model stealing attacks, membership inference attacks, and backdoor detection on public models.
Our evaluation empirically shows the performance of these attacks/defenses can vary significantly on public models compared to self-trained models.
arXiv Detail & Related papers (2023-10-19T11:49:22Z)
- Discretization-based ensemble model for robust learning in IoT [8.33619265970446]
We propose a discretization-based ensemble stacking technique to improve the security of machine learning models.
We evaluate the performance of different ML-based IoT device identification models against white-box and black-box attacks (a generic sketch of the stacking idea follows this entry).
arXiv Detail & Related papers (2023-07-18T03:48:27Z)
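One generic way to realize the discretization-based ensemble stacking referenced above (an illustration built from assumed scikit-learn components, not the paper's code) is to bin continuous features before a stacked ensemble, so that small adversarial perturbations tend to land in the same bin:

```python
# Generic sketch of discretization + stacked ensemble (not the paper's code).
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Discretizing inputs can blunt small adversarial perturbations, since
# nearby feature values fall into the same bin.
clf = make_pipeline(
    KBinsDiscretizer(n_bins=10, encode="ordinal", strategy="uniform"),
    StackingClassifier(
        estimators=[("rf", RandomForestClassifier(n_estimators=50)),
                    ("dt", DecisionTreeClassifier(max_depth=8))],
        final_estimator=LogisticRegression(),
    ),
)
# clf.fit(X_train, y_train); clf.predict(X_device_traffic)
```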
- Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability [70.72426887518517]
Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications.
We propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data.
Our method utilizes a mask to figure out the memorized atypical samples, and then fine-tunes the model or prunes it with the introduced mask to forget them (a generic OOD-scoring baseline is sketched after this entry).
arXiv Detail & Related papers (2023-06-06T14:23:34Z)
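For the OOD-scoring baseline referenced in the entry above: a common generic approach (a baseline, not the Unleashing Mask method itself) flags inputs whose maximum softmax probability is low:

```python
# Generic maximum-softmax-probability (MSP) OOD score; a common baseline,
# not the Unleashing Mask method itself.
import numpy as np

def msp_ood_scores(logits):
    """Return per-sample OOD scores: higher means more likely OOD."""
    z = logits - logits.max(axis=1, keepdims=True)            # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)  # softmax
    return 1.0 - probs.max(axis=1)                            # low confidence => OOD

# flagged = msp_ood_scores(model_logits) > threshold  # threshold tuned on ID data
```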
- A Generative Framework for Low-Cost Result Validation of Machine Learning-as-a-Service Inference [4.478182379059458]
Fides is a novel framework for real-time integrity validation of ML-as-a-Service (MLaaS) inference.
Fides features a client-side attack detection model that uses statistical analysis and divergence measurements to identify, with high likelihood, whether the service model is under attack (a generic divergence check is sketched after this entry).
We devised a generative adversarial network framework for training the attack detection and re-classification models.
arXiv Detail & Related papers (2023-03-31T19:17:30Z)
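The divergence check referenced in the Fides entry can be illustrated generically (an assumption-laden sketch, not the Fides implementation): compare a trusted reference distribution of model outputs against live responses with KL divergence and alarm past a tuned threshold:

```python
# Generic KL-divergence alarm between a trusted reference distribution and
# live service outputs; an illustration of the idea, not Fides itself.
import numpy as np
from scipy.stats import entropy

def divergence_alarm(expected, observed, threshold=0.1):
    """Alarm when KL(observed || expected) exceeds a tuned threshold."""
    eps = 1e-12                                             # avoid log(0)
    return entropy(observed + eps, expected + eps) > threshold

# expected = np.array([0.7, 0.2, 0.1])  # label distribution from trusted runs
# observed = np.array([0.3, 0.4, 0.3])  # label distribution from live responses
# under_attack = divergence_alarm(expected, observed)
```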
- Self-Destructing Models: Increasing the Costs of Harmful Dual Uses of Foundation Models [103.71308117592963]
We present an algorithm for training self-destructing models leveraging techniques from meta-learning and adversarial learning.
In a small-scale experiment, we show MLAC can largely prevent a BERT-style model from being re-purposed to perform gender identification.
arXiv Detail & Related papers (2022-11-27T21:43:45Z)
- Learning to Detect: A Data-driven Approach for Network Intrusion Detection [17.288512506016612]
We perform a comprehensive study on NSL-KDD, a network traffic dataset, by visualizing patterns and employing different learning-based models to detect cyber attacks.
Unlike previous shallow learning and deep learning models that use a single learning model for intrusion detection, we adopt a hierarchical strategy (sketched generically after this entry).
We demonstrate the advantage of the unsupervised representation learning model in binary intrusion detection tasks.
arXiv Detail & Related papers (2021-08-18T21:19:26Z)
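The hierarchical strategy referenced in the entry above can be sketched generically (illustrative only; the paper's actual pipeline may differ): a binary detector first separates normal from attack traffic, then a second-stage classifier names the attack type:

```python
# Generic two-stage (hierarchical) intrusion detection sketch; illustrative
# only, not the paper's exact pipeline. X and y are assumed numpy arrays.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class TwoStageIDS:
    def __init__(self):
        self.binary = RandomForestClassifier(n_estimators=100)      # normal vs. attack
        self.multiclass = RandomForestClassifier(n_estimators=100)  # attack family

    def fit(self, X, y):
        # y holds "normal" or an attack-type label (e.g. "dos", "probe").
        is_attack = (y != "normal")
        self.binary.fit(X, is_attack)
        self.multiclass.fit(X[is_attack], y[is_attack])
        return self

    def predict(self, X):
        flagged = self.binary.predict(X).astype(bool)
        labels = np.array(["normal"] * len(X), dtype=object)
        if flagged.any():
            labels[flagged] = self.multiclass.predict(X[flagged])
        return labels
```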
- Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth-order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses (a generic zeroth-order gradient estimate is sketched after this entry).
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
arXiv Detail & Related papers (2020-07-17T01:52:34Z)
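The zeroth-order optimization referenced in the BAR entry can be illustrated with a generic two-point gradient estimator that needs only black-box loss evaluations (an illustration of the technique, not the BAR algorithm itself):

```python
# Generic two-point zeroth-order gradient estimate: the kind of query-only
# optimization BAR relies on (illustrative; not the BAR algorithm itself).
import numpy as np

def zo_gradient(loss_fn, theta, mu=1e-3, n_samples=20):
    """Estimate the gradient of loss_fn at theta from function values alone."""
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        u = np.random.randn(*theta.shape)  # random probe direction
        grad += (loss_fn(theta + mu * u) - loss_fn(theta - mu * u)) / (2 * mu) * u
    return grad / n_samples

# theta = theta - lr * zo_gradient(black_box_loss, theta)  # gradient-free step
```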
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.