Robust Federated Learning for execution time-based device model
identification under label-flipping attack
- URL: http://arxiv.org/abs/2111.14434v1
- Date: Mon, 29 Nov 2021 10:27:14 GMT
- Title: Robust Federated Learning for execution time-based device model
identification under label-flipping attack
- Authors: Pedro Miguel Sánchez Sánchez, Alberto Huertas Celdrán, José Rafael Buendía Rubio, Gérôme Bovet, Gregorio Martínez Pérez
- Abstract summary: Device spoofing and impersonation cyberattacks stand out due to their impact and the typically low complexity required to launch them.
Several solutions have emerged to identify device models and types based on the combination of behavioral fingerprinting and Machine/Deep Learning (ML/DL) techniques.
New approaches such as Federated Learning (FL) have not been fully explored yet, especially when malicious clients are present in the scenario setup.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The computing device deployment explosion experienced in recent years,
motivated by the advances of technologies such as Internet-of-Things (IoT) and
5G, has led to a global scenario with increasing cybersecurity risks and
threats. Among them, device spoofing and impersonation cyberattacks stand out
due to their impact and the typically low complexity required to launch them. To
solve this issue, several solutions have emerged to identify device models and
types based on the combination of behavioral fingerprinting and Machine/Deep
Learning (ML/DL) techniques. However, these solutions are not appropriate for
scenarios where data privacy and protection are a must, as they require data
centralization for processing. In this context, newer approaches such as
Federated Learning (FL) have not been fully explored yet, especially when
malicious clients are present in the scenario setup. The present work analyzes
and compares the device model identification performance of a centralized DL
model with an FL one while using execution time-based events. For experimental
purposes, a dataset containing execution-time features of 55 Raspberry Pis
belonging to four different models has been collected and published. Using this
dataset, the proposed solution achieved 0.9999 accuracy in both setups,
centralized and federated, showing no performance decrease while preserving
data privacy. Later, the impact of a label-flipping attack during the federated
model training is evaluated, using several aggregation mechanisms as
countermeasures. Zeno and coordinate-wise median aggregation show the best
performance, although their performance degrades sharply once the percentage of
fully malicious clients (all training samples poisoned) exceeds 50%.
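As a rough, hypothetical illustration of this setting (not the authors' implementation; the client counts, model dimensionality, flip rule, and update magnitudes are assumptions made only for this sketch), the following Python/NumPy snippet shows how a fully malicious client could flip its labels and how coordinate-wise median aggregation resists the resulting poisoned updates better than plain averaging.

import numpy as np

# Minimal sketch (assumed setup, not the paper's code): label flipping on a
# malicious client and robust aggregation of client updates at the server.
rng = np.random.default_rng(0)

NUM_CLASSES = 4      # e.g., four Raspberry Pi models, as in the dataset above
NUM_CLIENTS = 10     # illustrative federation size
NUM_MALICIOUS = 3    # below the 50% threshold discussed in the abstract
MODEL_DIM = 128      # flattened parameter vector, for illustration only

def flip_labels(y):
    """Fully malicious client: every training label is mapped to another class."""
    return (y + 1) % NUM_CLASSES   # assumed flip rule; real attacks may differ

def fed_avg(updates):
    """Baseline aggregation: per-coordinate mean of the client updates."""
    return np.mean(updates, axis=0)

def coordinate_wise_median(updates):
    """Robust aggregation: per-coordinate median of the client updates."""
    return np.median(updates, axis=0)

print("Flipped labels:", flip_labels(np.array([0, 1, 2, 3])))   # -> [1 2 3 0]

# Toy round: honest updates cluster near zero, while clients trained on flipped
# labels are modeled here as sending strongly biased updates.
honest = rng.normal(loc=0.0, scale=0.1,
                    size=(NUM_CLIENTS - NUM_MALICIOUS, MODEL_DIM))
malicious = rng.normal(loc=5.0, scale=0.1, size=(NUM_MALICIOUS, MODEL_DIM))
updates = np.vstack([honest, malicious])

print("FedAvg mean coordinate:     %.3f" % fed_avg(updates).mean())                 # ~1.5, pulled toward attackers
print("Coord.-median coordinate:   %.3f" % coordinate_wise_median(updates).mean())  # ~0.0, near honest clients

Once poisoned clients form a majority, the per-coordinate median itself is dominated by their updates, which is consistent with the degradation beyond 50% fully malicious clients reported above.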
Related papers
- Efficient Federated Intrusion Detection in 5G ecosystem using optimized BERT-based model [0.7100520098029439]
5G offers advanced services, supporting applications such as intelligent transportation, connected healthcare, and smart cities within the Internet of Things (IoT).
These advancements introduce significant security challenges, with increasingly sophisticated cyber-attacks.
This paper proposes a robust intrusion detection system (IDS) using federated learning and large language models (LLMs).
arXiv Detail & Related papers (2024-09-28T15:56:28Z)
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods rely on directly centralizing the training data.
The paper proposes a novel federated face forgery detection learning scheme with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection [51.20479454379662]
We propose a parameter-efficient Federated Anomaly Detection framework named PeFAD to address increasing privacy concerns.
We conduct extensive evaluations on four real datasets, where PeFAD outperforms existing state-of-the-art baselines by up to 28.74%.
arXiv Detail & Related papers (2024-06-04T13:51:08Z)
- Fed-Credit: Robust Federated Learning with Credibility Management [18.349127735378048]
Federated Learning (FL) is an emerging machine learning approach enabling model training on decentralized devices or data sources.
We propose a robust FL approach based on a credibility management scheme, called Fed-Credit.
The results exhibit superior accuracy and resilience against adversarial attacks, all while maintaining comparatively low computational complexity.
arXiv Detail & Related papers (2024-05-20T03:35:13Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
MESAS is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- Adversarial attacks and defenses on ML- and hardware-based IoT device fingerprinting and identification [0.0]
This work proposes an LSTM-CNN architecture based on hardware performance behavior for individual device identification.
Previous techniques have been compared with the proposed architecture using a hardware performance dataset collected from 45 Raspberry Pi devices.
Adversarial training and model distillation defense techniques are selected to improve the model's resilience to evasion attacks.
arXiv Detail & Related papers (2022-12-30T13:11:35Z)
- Securing Federated Learning against Overwhelming Collusive Attackers [7.587927338603662]
We propose two graph-theoretic algorithms, based on Minimum Spanning Tree and k-Densest graph, that leverage correlations between local models.
Our FL model can nullify the influence of attackers even when they constitute up to 70% of all clients.
We establish the superiority of our algorithms over the existing ones using accuracy, attack success rate, and early detection round.
arXiv Detail & Related papers (2022-09-28T13:41:04Z)
- Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
The impact of on-device storage on the performance of FL has not yet been explored.
In this work, we take the first step to consider the online data selection for FL with limited on-device storage.
arXiv Detail & Related papers (2022-09-01T03:27:33Z)
- RelaxLoss: Defending Membership Inference Attacks without Losing Utility [68.48117818874155]
We propose a novel training framework based on a relaxed loss with a more achievable learning target.
RelaxLoss is applicable to any classification model with added benefits of easy implementation and negligible overhead.
Our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against MIAs.
arXiv Detail & Related papers (2022-07-12T19:34:47Z)
- CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Vision Models [61.68061613161187]
This paper presents CARLA-GeAR, a tool for the automatic generation of synthetic datasets for evaluating the robustness of neural models against physical adversarial patches.
The tool is built on the CARLA simulator, using its Python API, and allows the generation of datasets for several vision tasks in the context of autonomous driving.
The paper presents an experimental study to evaluate the performance of some defense methods against such attacks, showing how the datasets generated with CARLA-GeAR might be used in future work as a benchmark for adversarial defense in the real world.
arXiv Detail & Related papers (2022-06-09T09:17:38Z)
- Applied Federated Learning: Architectural Design for Robust and Efficient Learning in Privacy Aware Settings [0.8454446648908585]
The classical machine learning paradigm requires the aggregation of user data in a central location.
Centralization of data poses risks, including a heightened risk of internal and external security incidents.
Federated learning with differential privacy is designed to avoid the server-side centralization pitfall.
arXiv Detail & Related papers (2022-06-02T00:30:04Z)