Reputation-Based Federated Learning Defense to Mitigate Threats in EEG
Signal Classification
- URL: http://arxiv.org/abs/2401.01896v1
- Date: Sun, 22 Oct 2023 08:08:15 GMT
- Title: Reputation-Based Federated Learning Defense to Mitigate Threats in EEG
Signal Classification
- Authors: Zhibo Zhang, Pengfei Li, Ahmed Y. Al Hammadi, Fusen Guo, Ernesto
Damiani, Chan Yeob Yeun
- Abstract summary: It is difficult to create efficient learning models for EEG analysis because of the distributed nature of EEG data and related privacy and security concerns.
This paper presents a reputation-based threat mitigation framework that defends against potential security threats in electroencephalogram (EEG) signal classification.
- Score: 10.57197051973977
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: This paper presents a reputation-based threat mitigation framework that
defends against potential security threats in electroencephalogram (EEG) signal
classification during the model aggregation phase of Federated Learning. Although EEG
signal analysis has attracted attention with the emergence of brain-computer
interface (BCI) technology, building efficient learning models for EEG analysis is
difficult because EEG data are distributed across many sources and subject to
privacy and security concerns. To address these challenges, the proposed
defense framework leverages the Federated Learning paradigm to preserve
privacy through collaborative model training on data that remains local to dispersed
sources, and introduces a reputation-based mechanism to mitigate the influence
of data poisoning attacks and identify compromised participants. To assess the
efficiency of the proposed reputation-based federated learning defense
framework, data poisoning attacks based on the risk level of training data,
derived with Explainable Artificial Intelligence (XAI) techniques, are conducted
on both publicly available EEG signal datasets and a self-established EEG
signal dataset. Experimental results on the poisoned datasets show that the
proposed defense methodology performs well in EEG signal classification while
reducing the risks associated with security threats.
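The abstract does not spell out the aggregation rule, so the sketch below illustrates one plausible reading of reputation-based aggregation: each client's reputation is updated from how well its model update agrees with a robust (median) reference, and low-reputation clients are down-weighted before a FedAvg-style combination. The cosine-similarity scoring, the smoothing factor `alpha`, and the exclusion `threshold` are illustrative assumptions, not the authors' published algorithm.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two flattened update vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def reputation_weighted_aggregate(client_updates, reputations, alpha=0.8, threshold=0.2):
    """Hypothetical reputation-based aggregation step (illustrative, not the paper's rule).

    client_updates: list of 1-D arrays (flattened model deltas), one per client.
    reputations:    array of current reputation scores in [0, 1].
    alpha:          smoothing factor for the reputation update (assumed).
    threshold:      clients below this reputation are excluded from aggregation (assumed).
    """
    updates = np.stack(client_updates)           # (n_clients, n_params)
    reference = np.median(updates, axis=0)       # robust reference update

    # Behaviour score: agreement of each client's update with the robust reference.
    scores = np.array([max(cosine(u, reference), 0.0) for u in updates])

    # Exponentially smoothed reputation update.
    reputations = alpha * reputations + (1.0 - alpha) * scores

    # Down-weight or drop low-reputation clients, then combine FedAvg-style.
    weights = np.where(reputations >= threshold, reputations, 0.0)
    if weights.sum() == 0.0:                     # fallback: plain average
        weights = np.ones_like(weights)
    weights = weights / weights.sum()
    return weights @ updates, reputations

# Toy round: four honest clients plus one client submitting an inverted (poisoned) update.
rng = np.random.default_rng(0)
true_delta = rng.normal(size=10)
clients = [true_delta + 0.1 * rng.normal(size=10) for _ in range(4)] + [-true_delta]
aggregated, reputations = reputation_weighted_aggregate(clients, np.full(5, 0.5))
print("reputations after one round:", np.round(reputations, 2))
```

In this toy round the inverted update should end with a visibly lower reputation than the honest clients, which is the behaviour a reputation mechanism relies on to dampen poisoned contributions over successive rounds.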
Related papers
- Data Poisoning and Leakage Analysis in Federated Learning [10.090442512374661]
Data poisoning and leakage risks impede the large-scale deployment of federated learning in the real world.
This chapter reveals the truths and pitfalls of understanding two dominant threats: training data privacy intrusion and training data poisoning.
arXiv Detail & Related papers (2024-09-19T16:50:29Z)
- Defending against Data Poisoning Attacks in Federated Learning via User Elimination [0.0]
This paper introduces a novel framework focused on the strategic elimination of adversarial users within a federated model.
We detect anomalies in the aggregation phase of the federated algorithm by integrating metadata gathered by the local training instances with Differential Privacy techniques.
Our experiments demonstrate the efficacy of our methods, significantly mitigating the risk of data poisoning while maintaining user privacy and model performance.
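The summary above only names the ingredients (local training metadata, Differential Privacy, anomaly detection at aggregation), so the following is a loose sketch of how those pieces might fit together; the Laplace noise scale, the median-plus-margin rule, and all names here are assumptions rather than the paper's method.

```python
import numpy as np

def eliminate_suspicious_users(reported_losses, epsilon=10.0, margin=1.0, rng=None):
    """Toy user-elimination step based on DP-noised training metadata.

    reported_losses: per-client mean training loss sent alongside each model update.
    epsilon:         privacy budget for the Laplace noise added to the metadata (assumed).
    margin:          clients whose noised loss exceeds the cohort median by more than
                     this margin are flagged for elimination (assumed rule).
    Returns a boolean mask of clients to keep.
    """
    rng = rng or np.random.default_rng()
    losses = np.asarray(reported_losses, dtype=float)

    # Clients perturb their metadata with Laplace noise before reporting it,
    # so the server never sees the exact local training loss.
    sensitivity = 1.0  # assumed bound on one record's influence on the mean loss
    noised = losses + rng.laplace(scale=sensitivity / epsilon, size=losses.shape)

    # Label-flipped or otherwise poisoned clients tend to report unusually high loss.
    keep = noised <= np.median(noised) + margin
    return keep

# Toy usage: the last client trained on flipped labels and reports an abnormal loss.
mask = eliminate_suspicious_users([0.42, 0.39, 0.45, 0.41, 2.70],
                                  rng=np.random.default_rng(1))
print("retained clients:", np.nonzero(mask)[0])   # expected to drop client 4
```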
arXiv Detail & Related papers (2024-04-19T10:36:00Z)
- Enabling Privacy-Preserving Cyber Threat Detection with Federated Learning [4.475514208635884]
This study systematically profiles the (in)feasibility of federated learning for privacy-preserving cyber threat detection in terms of effectiveness, byzantine resilience, and efficiency.
It shows that FL-trained detection models can achieve a performance that is comparable to centrally trained counterparts.
Under a realistic threat model, FL proves resistant to both data poisoning and model poisoning attacks.
arXiv Detail & Related papers (2024-04-08T01:16:56Z)
- Effective Intrusion Detection in Heterogeneous Internet-of-Things Networks via Ensemble Knowledge Distillation-based Federated Learning [52.6706505729803]
We introduce Federated Learning (FL) to collaboratively train a decentralized, shared Intrusion Detection System (IDS) model.
FLEKD enables a more flexible aggregation method than conventional model fusion techniques.
Experiment results show that the proposed approach outperforms local training and traditional FL in terms of both speed and performance.
arXiv Detail & Related papers (2024-01-22T14:16:37Z)
- Data-Agnostic Model Poisoning against Federated Learning: A Graph Autoencoder Approach [65.2993866461477]
This paper proposes a data-agnostic model poisoning attack on Federated Learning (FL).
The attack requires no knowledge of FL training data and achieves both effectiveness and undetectability.
Experiments show that the FL accuracy drops gradually under the proposed attack and existing defense mechanisms fail to detect it.
arXiv Detail & Related papers (2023-11-30T12:19:10Z)
- Fed-LSAE: Thwarting Poisoning Attacks against Federated Cyber Threat Detection System via Autoencoder-based Latent Space Inspection [0.0]
In cybersecurity, sensitive data, contextual information, and high-quality labeling play an essential role.
In this paper, we investigate a novel robust aggregation method for federated learning, namely Fed-LSAE, which takes advantage of latent space representation.
The experimental results on the CIC-ToN-IoT and N-BaIoT datasets confirm the feasibility of our defensive mechanism against cutting-edge poisoning attacks.
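Fed-LSAE's key step, inspecting client updates in a learned latent space, can be approximated with a much simpler stand-in: project the flattened updates with a truncated SVD and drop clients whose latent code sits far from the cohort. The projection, the distance cutoff, and the function name below are illustrative simplifications of the autoencoder-based inspection described in the paper.

```python
import numpy as np

def latent_space_filter(client_updates, k=2, cutoff=3.0):
    """Simplified latent-space inspection of client updates (Fed-LSAE-inspired sketch).

    A truncated SVD projection stands in for the paper's trained autoencoder;
    k, cutoff and the distance-to-median rule are illustrative choices.
    """
    X = np.stack(client_updates)                 # (n_clients, n_params)
    Xc = X - X.mean(axis=0)

    # Project the centred updates into a k-dimensional latent space.
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ vt[:k].T                            # latent codes, (n_clients, k)

    # Flag clients whose latent code lies far from the cohort.
    d = np.linalg.norm(Z - Z.mean(axis=0), axis=1)
    keep = d <= cutoff * np.median(d)            # assumed outlier rule

    # Aggregate only the updates judged benign.
    return X[keep].mean(axis=0), keep

# Toy usage: six honest updates around a common direction, one scaled-up poisoned update.
rng = np.random.default_rng(2)
base = rng.normal(size=20)
updates = [base + 0.05 * rng.normal(size=20) for _ in range(6)] + [8.0 * base]
aggregated, keep = latent_space_filter(updates)
print("kept clients:", np.nonzero(keep)[0])      # the seventh client should be dropped
```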
arXiv Detail & Related papers (2023-09-20T04:14:48Z)
- Explainable Data Poison Attacks on Human Emotion Evaluation Systems based on EEG Signals [3.8523826400372783]
This paper explains data poisoning attacks that use label flipping during the training stage of electroencephalogram (EEG) signal-based human emotion evaluation systems.
EEG signal-based human emotion evaluation systems have shown several vulnerabilities to data poisoning attacks.
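As a concrete illustration of this attack surface, the snippet below applies random label flipping to a fraction of an emotion-classification training set. The flip fraction and the uniform choice of a wrong class are illustrative assumptions; the cited work motivates the attack with explainability, and the main paper above selects samples by XAI-derived risk level rather than at random.

```python
import numpy as np

def flip_labels(labels, flip_fraction=0.2, n_classes=3, rng=None):
    """Toy label-flipping poisoning of an emotion-classification training set.

    flip_fraction, n_classes and the uniform choice of a wrong class are
    illustrative assumptions; they are not parameters taken from the paper.
    """
    rng = rng or np.random.default_rng()
    poisoned = np.array(labels).copy()
    n_flip = int(flip_fraction * len(poisoned))
    flipped_idx = rng.choice(len(poisoned), size=n_flip, replace=False)
    for i in flipped_idx:
        # Replace the true label with a different, randomly chosen class.
        wrong_classes = [c for c in range(n_classes) if c != poisoned[i]]
        poisoned[i] = rng.choice(wrong_classes)
    return poisoned, flipped_idx

# Toy usage: classes 0/1/2 could stand for negative/neutral/positive emotional states.
rng = np.random.default_rng(3)
y_clean = rng.integers(0, 3, size=10)
y_poisoned, flipped_idx = flip_labels(y_clean, flip_fraction=0.3, rng=rng)
print("clean:   ", y_clean)
print("poisoned:", y_poisoned)
print("flipped indices:", np.sort(flipped_idx))
```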
arXiv Detail & Related papers (2023-01-17T14:44:46Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses [150.64470864162556]
This work systematically categorizes and discusses a wide range of dataset vulnerabilities and exploits.
In addition to describing various poisoning and backdoor threat models and the relationships among them, we develop a unified taxonomy.
arXiv Detail & Related papers (2020-12-18T22:38:47Z)
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z)
- EEG-Based Brain-Computer Interfaces Are Vulnerable to Backdoor Attacks [68.01125081367428]
Recent studies have shown that machine learning algorithms are vulnerable to adversarial attacks.
This article proposes using narrow period pulses for poisoning attacks on EEG-based BCIs, which is practical to implement and has not been considered before.
arXiv Detail & Related papers (2020-10-30T20:49:42Z)
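To make the narrow-period-pulse idea concrete, here is a minimal sketch of how such a trigger could be injected into EEG training segments and paired with a target label; the pulse period, width, amplitude, and poisoning rate are illustrative values, not the settings used in the cited article.

```python
import numpy as np

def add_pulse_trigger(segment, period=50, width=2, amplitude=None):
    """Add a narrow periodic pulse train to one EEG segment as a backdoor trigger.

    segment: array of shape (n_channels, n_samples). period, width and amplitude
    are illustrative values, not the settings used in the cited article.
    """
    poisoned = segment.copy()
    if amplitude is None:
        amplitude = 3.0 * np.std(segment)        # assumed: pulse scaled to the signal
    for start in range(0, poisoned.shape[1], period):
        poisoned[:, start:start + width] += amplitude
    return poisoned

# Toy usage: poison a small fraction of training segments and relabel them
# with the attacker's target class, so the trigger becomes a backdoor.
rng = np.random.default_rng(4)
segments = rng.normal(size=(100, 8, 250))        # 100 segments, 8 channels, 250 samples
labels = rng.integers(0, 2, size=100)
target_class, poison_rate = 1, 0.1
poison_idx = rng.choice(100, size=int(poison_rate * 100), replace=False)
for i in poison_idx:
    segments[i] = add_pulse_trigger(segments[i])
    labels[i] = target_class
print(f"poisoned {len(poison_idx)} of {len(segments)} training segments")
```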