Poisoning Attacks in Federated Edge Learning for Digital Twin 6G-enabled IoTs: An Anticipatory Study
- URL: http://arxiv.org/abs/2303.11745v1
- Date: Tue, 21 Mar 2023 11:12:17 GMT
- Authors: Mohamed Amine Ferrag and Burak Kantarci and Lucas C. Cordeiro and
Merouane Debbah and Kim-Kwang Raymond Choo
- Abstract summary: Federated edge learning can be essential in supporting privacy-preserving, artificial intelligence (AI)-enabled activities in digital twin 6G-enabled Internet of Things (IoT) environments.
We propose an anticipatory study for poisoning attacks in federated edge learning for digital twin 6G-enabled IoT environments.
- Score: 37.97034388920841
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated edge learning can be essential in supporting privacy-preserving,
artificial intelligence (AI)-enabled activities in digital twin 6G-enabled
Internet of Things (IoT) environments. However, we also need to consider the
potential of attacks targeting the underlying AI systems (e.g., adversaries
seek to corrupt data on the IoT devices during local updates or corrupt the
model updates); hence, in this article, we propose an anticipatory study for
poisoning attacks in federated edge learning for digital twin 6G-enabled IoT
environments. Specifically, we study the influence of adversaries on the
training and development of federated learning models in digital twin
6G-enabled IoT environments. We demonstrate that attackers can carry out
poisoning attacks in two different learning settings, namely: centralized
learning and federated learning, and successful attacks can severely reduce the
model's accuracy. We comprehensively evaluate the attacks on a new cyber
security dataset designed for IoT applications with three deep neural networks
under the non-independent and identically distributed (Non-IID) data and the
independent and identically distributed (IID) data. The poisoning attacks, on
an attack classification problem, can lead to a decrease in accuracy from
94.93% to 85.98% with IID data and from 94.18% to 30.04% with Non-IID data.
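A minimal sketch of how such a label-flipping poisoning attack plays out under federated averaging. Everything here (the toy logistic-regression clients, the 10-client IID partition, the fraction of poisoned clients) is an illustrative assumption, not the paper's actual experimental setup, and the size of the accuracy drop depends heavily on the data partition:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: logistic regression via gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))        # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)    # gradient step
    return w

def label_flip(y):
    """Poisoning: the adversary flips the labels of its local data."""
    return 1 - y

# Toy binary dataset split across 10 clients (IID partition)
X = rng.normal(size=(1000, 5))
true_w = rng.normal(size=5)
y = (X @ true_w > 0).astype(float)
clients = [(X[i::10], y[i::10]) for i in range(10)]

# Federated averaging with 3 of 10 clients poisoned
global_w = np.zeros(5)
poisoned = {0, 1, 2}
for _ in range(20):
    updates = []
    for cid, (Xc, yc) in enumerate(clients):
        yc_used = label_flip(yc) if cid in poisoned else yc
        updates.append(local_update(global_w, Xc, yc_used))
    global_w = np.mean(updates, axis=0)     # FedAvg aggregation

acc = np.mean(((X @ global_w) > 0) == y)
print(f"global model accuracy with 30% poisoned clients: {acc:.2f}")
```

With Non-IID partitions the averaged gradient is noisier, which is one intuition for why the paper observes a much larger drop (to 30.04%) in that setting.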
Related papers
- Strengthening Network Intrusion Detection in IoT Environments with Self-Supervised Learning and Few Shot Learning [1.0678175996321808]
The Internet of Things (IoT) has been introduced as a breakthrough technology that integrates intelligence into everyday objects.
As IoT networks grow, they become more susceptible to cybersecurity attacks.
This paper introduces a novel intrusion detection approach designed to address these challenges.
arXiv Detail & Related papers (2024-06-04T06:30:22Z)
- A Dual-Tier Adaptive One-Class Classification IDS for Emerging Cyberthreats [3.560574387648533]
We propose a one-class classification-driven IDS system structured on two tiers.
The first tier distinguishes between normal activities and attacks/threats, while the second tier determines if the detected attack is known or unknown.
This model not only identifies unseen attacks but also clusters them and uses the clusters for retraining.
arXiv Detail & Related papers (2024-03-17T12:26:30Z)
- Enhancing IoT Security Against DDoS Attacks through Federated Learning [0.0]
The Internet of Things (IoT) has ushered in transformative connectivity between physical devices and the digital realm.
Traditional DDoS mitigation approaches are ill-equipped to handle the intricacies of IoT ecosystems.
This paper introduces an innovative strategy to bolster the security of IoT networks against DDoS attacks by harnessing the power of Federated Learning.
arXiv Detail & Related papers (2024-03-16T16:45:28Z)
- Effective Intrusion Detection in Heterogeneous Internet-of-Things Networks via Ensemble Knowledge Distillation-based Federated Learning [52.6706505729803]
We introduce Federated Learning (FL) to collaboratively train a decentralized shared model of Intrusion Detection Systems (IDS).
FLEKD enables a more flexible aggregation method than conventional model fusion techniques.
Experiment results show that the proposed approach outperforms local training and traditional FL in terms of both speed and performance.
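The abstract does not spell out how FLEKD's aggregation works, but ensemble knowledge distillation in federated learning is commonly done by distilling the clients' averaged predictions on a shared public set into the global model (as in methods like FedDF). The sketch below is a generic illustration of that idea with hypothetical stand-in linear models, not FLEKD's actual algorithm:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
public_X = rng.normal(size=(200, 8))        # shared unlabeled public set

# Hypothetical client models: perturbations of a shared initialization,
# standing in for locally trained networks (8 features, 3 classes)
base = rng.normal(size=(8, 3))
client_models = [base + 0.3 * rng.normal(size=(8, 3)) for _ in range(5)]

# Step 1: ensemble the clients' soft predictions on the public data
teacher = np.mean([softmax(public_X @ W) for W in client_models], axis=0)

# Step 2: distil the ensemble into one global model by minimizing
# cross-entropy between the global model's output and the teacher
global_W = np.zeros((8, 3))
for _ in range(200):
    student = softmax(public_X @ global_W)
    grad = public_X.T @ (student - teacher) / len(public_X)
    global_W -= 0.5 * grad

agreement = np.mean(softmax(public_X @ global_W).argmax(1) == teacher.argmax(1))
print(f"student/teacher argmax agreement: {agreement:.2f}")
```

Because only predictions on public data are exchanged, this style of aggregation can fuse clients with heterogeneous model architectures, which is one sense in which it is "more flexible" than weight averaging.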
arXiv Detail & Related papers (2024-01-22T14:16:37Z)
- Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
- Unsupervised Ensemble Based Deep Learning Approach for Attack Detection in IoT Network [0.0]
The Internet of Things (IoT) has transformed daily life by enabling control of devices over the Internet.
However, attackers can exploit these devices to conduct a variety of network attacks and bring down the IoT network.
In this paper, we have developed an unsupervised ensemble learning model that is able to detect new or unknown attacks in an IoT network from an unlabelled dataset.
arXiv Detail & Related papers (2022-07-16T11:12:32Z)
- AdIoTack: Quantifying and Refining Resilience of Decision Tree Ensemble Inference Models against Adversarial Volumetric Attacks on IoT Networks [1.1172382217477126]
We present AdIoTack, a system that highlights vulnerabilities of decision trees against adversarial attacks.
To assess the model for the worst-case scenario, AdIoTack performs white-box adversarial learning to launch successful volumetric attacks.
We demonstrate how the model detects all non-adversarial volumetric attacks on IoT devices while missing many adversarial ones.
arXiv Detail & Related papers (2022-03-18T08:18:03Z)
- Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent is usable for generating realistic attack sensor data for training and evaluating intrusion detection systems.
arXiv Detail & Related papers (2021-10-04T12:20:46Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- Learning and Certification under Instance-targeted Poisoning [49.55596073963654]
We study PAC learnability and certification under instance-targeted poisoning attacks.
We show that when the budget of the adversary scales sublinearly with the sample complexity, PAC learnability and certification are achievable.
We empirically study the robustness of K nearest neighbour, logistic regression, multi-layer perceptron, and convolutional neural network on real data sets.
arXiv Detail & Related papers (2021-05-18T17:48:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.