Federated Learning for Malware Detection in IoT Devices
- URL: http://arxiv.org/abs/2104.09994v1
- Date: Thu, 15 Apr 2021 13:14:22 GMT
- Title: Federated Learning for Malware Detection in IoT Devices
- Authors: Valerian Rey, Pedro Miguel Sánchez Sánchez, Alberto Huertas
Celdrán, Gérôme Bovet, Martin Jaggi
- Abstract summary: A framework that uses federated learning to detect malware affecting IoT devices is presented.
N-BaIoT, a dataset modeling network traffic of several real IoT devices while affected by malware, has been used to evaluate the proposed framework.
- Score: 35.00570367521957
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work investigates the possibilities enabled by federated learning
concerning IoT malware detection and studies security issues inherent to this
new learning paradigm. In this context, a framework that uses federated
learning to detect malware affecting IoT devices is presented. N-BaIoT, a
dataset modeling network traffic of several real IoT devices while affected by
malware, has been used to evaluate the proposed framework. Both supervised and
unsupervised federated models (multi-layer perceptron and autoencoder) able to
detect malware affecting seen and unseen IoT devices of N-BaIoT have been
trained and evaluated. Furthermore, their performance has been compared to two
traditional approaches. The first one lets each participant locally train a
model using only its own data, while the second consists of making the
participants share their data with a central entity in charge of training a
global model. This comparison has shown that the use of larger and more diverse
data, as done in the federated and centralized methods, has a considerable
positive impact on model performance. Moreover, the federated models, while
preserving the participants' privacy, achieve results similar to the
centralized ones. As an additional contribution, and to measure the robustness
of the
federated approach, an adversarial setup with several malicious participants
poisoning the federated model has been considered. The baseline averaging step
used for model aggregation in most federated learning algorithms appears highly
vulnerable to different attacks, even with a single adversary. The
performance of other model aggregation functions acting as countermeasures is
thus evaluated under the same attack scenarios. These functions provide a
significant improvement against malicious participants, but more efforts are
still needed to make federated approaches robust.
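The vulnerability of plain averaging and the benefit of a robust aggregation function can be illustrated with a minimal sketch. The example below contrasts coordinate-wise mean aggregation (as in standard FedAvg) with the coordinate-wise median, one of the robust alternatives studied in this line of work; the client updates and the malicious vector are hypothetical toy values, not taken from the paper's experiments.

```python
import numpy as np

def fed_avg(updates):
    """Baseline aggregation: coordinate-wise mean of client updates (FedAvg)."""
    return np.mean(updates, axis=0)

def coordwise_median(updates):
    """Robust alternative: coordinate-wise median, less sensitive to outliers."""
    return np.median(updates, axis=0)

rng = np.random.default_rng(0)
# Nine benign clients whose updates cluster around the true direction [1, -1].
benign = rng.normal(loc=[1.0, -1.0], scale=0.1, size=(9, 2))
# A single poisoning participant submits one large malicious update.
poisoned = np.vstack([benign, [[100.0, 100.0]]])

print(fed_avg(poisoned))           # mean is dragged far from [1, -1]
print(coordwise_median(poisoned))  # median stays close to [1, -1]
```

Even with nine honest clients against one adversary, the mean is pulled far off target, while the median remains near the benign consensus, which mirrors the single-adversary finding in the abstract.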
Related papers
- FedMADE: Robust Federated Learning for Intrusion Detection in IoT Networks Using a Dynamic Aggregation Method [7.842334649864372]
The proliferation of Internet of Things (IoT) devices across multiple sectors has escalated serious network security concerns.
Traditional Machine Learning (ML)-based Intrusion Detection Systems (IDSs) for cyber-attack classification require data transmission from IoT devices to a centralized server for traffic analysis, raising severe privacy concerns.
We introduce FedMADE, a novel dynamic aggregation method, which clusters devices by their traffic patterns and aggregates local models based on their contributions towards overall performance.
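Aggregation weighted by a per-client contribution score can be sketched as follows. This is an illustrative example only, not FedMADE's actual algorithm; the update vectors and validation accuracies are hypothetical.

```python
import numpy as np

def weighted_aggregate(updates, scores):
    """Aggregate client updates, weighting each by a per-client contribution
    score (here, a hypothetical held-out validation accuracy)."""
    scores = np.asarray(scores, dtype=float)
    weights = scores / scores.sum()  # normalize scores into aggregation weights
    return np.average(updates, axis=0, weights=weights)

# Three hypothetical client updates; the third is a poor-quality outlier.
updates = np.array([[1.0, 0.0], [0.9, 0.1], [-5.0, 5.0]])
val_acc = [0.92, 0.90, 0.10]  # a low score down-weights the outlier
print(weighted_aggregate(updates, val_acc))
```

Unlike a uniform average, the low-scoring client contributes little to the aggregated update, so the result stays close to the two well-behaved clients.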
arXiv Detail & Related papers (2024-08-13T18:42:34Z)
- Towards Robust Federated Learning via Logits Calibration on Non-IID Data [49.286558007937856]
Federated learning (FL) is a privacy-preserving distributed management framework based on collaborative model training of distributed devices in edge networks.
Recent studies have shown that FL is vulnerable to adversarial examples, leading to a significant drop in its performance.
In this work, we adopt the adversarial training (AT) framework to improve the robustness of FL models against adversarial example (AE) attacks.
arXiv Detail & Related papers (2024-03-05T09:18:29Z)
- Effective Intrusion Detection in Heterogeneous Internet-of-Things Networks via Ensemble Knowledge Distillation-based Federated Learning [52.6706505729803]
We introduce Federated Learning (FL) to collaboratively train a decentralized shared model for Intrusion Detection Systems (IDSs).
FLEKD enables a more flexible aggregation method than conventional model fusion techniques.
Experiment results show that the proposed approach outperforms local training and traditional FL in terms of both speed and performance.
arXiv Detail & Related papers (2024-01-22T14:16:37Z)
- Enhancing Intrusion Detection In Internet Of Vehicles Through Federated Learning [0.0]
Federated learning allows multiple parties to collaborate and learn a shared model without sharing their raw data.
Our paper proposes a federated learning framework for intrusion detection in Internet of Vehicles (IOVs) using the CIC-IDS 2017 dataset.
arXiv Detail & Related papers (2023-11-23T04:04:20Z)
- Discretization-based ensemble model for robust learning in IoT [8.33619265970446]
We propose a discretization-based ensemble stacking technique to improve the security of machine learning models.
We evaluate the performance of different ML-based IoT device identification models against white box and black box attacks.
arXiv Detail & Related papers (2023-07-18T03:48:27Z)
- GowFed -- A novel Federated Network Intrusion Detection System [0.15469452301122172]
This work presents GowFed, a novel network threat detection system that combines the usage of Gower Dissimilarity matrices and Federated averaging.
Two versions of GowFed have been developed based on state-of-the-art knowledge: (1) a vanilla version and (2) a version instrumented with an attention mechanism.
Overall, GowFed intends to be the first stepping stone towards the combined usage of Federated Learning and Gower Dissimilarity matrices to detect network threats in industrial-level networks.
arXiv Detail & Related papers (2022-10-28T23:53:37Z)
- Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low-quality models could be uploaded to the aggregation server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z)
- ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing.
Our analysis relies on ML-Doctor, a modular, reusable software tool that enables ML model owners to assess the risks of deploying their models.
arXiv Detail & Related papers (2021-02-04T11:35:13Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.