Time-Distributed Backdoor Attacks on Federated Spiking Learning
- URL: http://arxiv.org/abs/2402.02886v1
- Date: Mon, 5 Feb 2024 10:54:17 GMT
- Title: Time-Distributed Backdoor Attacks on Federated Spiking Learning
- Authors: Gorka Abad, Stjepan Picek, Aitor Urbieta
- Abstract summary: This paper investigates the vulnerability of spiking neural networks (SNNs) and federated learning (FL) to backdoor attacks using neuromorphic data.
We develop a novel attack strategy tailored to SNNs and FL, which distributes the backdoor trigger temporally and across malicious devices.
This study underscores the need for robust security measures in deploying SNNs and FL.
- Score: 14.314066620468637
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper investigates the vulnerability of spiking neural networks (SNNs)
and federated learning (FL) to backdoor attacks using neuromorphic data.
Despite the efficiency of SNNs and the privacy advantages of FL, particularly
in low-powered devices, we demonstrate that these systems are susceptible to
such attacks. We first assess the viability of combining FL with SNNs on
neuromorphic data, showing its practical potential. Then, we evaluate the
transferability of known FL attack methods to SNNs, finding that these lead to
suboptimal attack performance. Therefore, we explore backdoor attacks involving
single and multiple attackers to improve the attack performance. Our primary
contribution is developing a novel attack strategy tailored to SNNs and FL,
which distributes the backdoor trigger temporally and across malicious devices,
enhancing the attack's effectiveness and stealthiness. In the best case, we
achieve a 100% attack success rate, 0.13 MSE, and 98.9 SSIM. Moreover, we adapt
and evaluate an existing defense against backdoor attacks, revealing its
inadequacy in protecting SNNs. This study underscores the need for robust
security measures in deploying SNNs and FL, particularly in the context of
backdoor attacks.
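The abstract only sketches the attack at a high level, and the paper's code is not reproduced here. The NumPy snippet below is a purely illustrative sketch of the core idea as described above: splitting a backdoor trigger across the time steps of a neuromorphic sample and across several malicious clients, so that no single client (or time step) carries the full pattern. All function names, tensor shapes, and the round-robin assignment are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def split_trigger_over_time(trigger, num_time_steps, num_attackers):
    """Distribute a static trigger over time steps and malicious clients.

    trigger: (H, W) mask of the backdoor pattern (hypothetical shape).
    Returns one (T, H, W) spatio-temporal trigger per attacker; each attacker
    only activates the trigger on its own subset of time steps.
    """
    per_client = [np.zeros((num_time_steps,) + trigger.shape, dtype=trigger.dtype)
                  for _ in range(num_attackers)]
    # Round-robin assignment: no single client's poisoned samples ever contain
    # the full trigger at one time step, which is what makes it stealthier.
    for t in range(num_time_steps):
        per_client[t % num_attackers][t] = trigger
    return per_client

def poison_sample(frames, temporal_trigger, target_label):
    """Overlay a temporal trigger on a neuromorphic sample (T, H, W) and relabel it."""
    poisoned = np.clip(frames + temporal_trigger, 0.0, 1.0)
    return poisoned, target_label

if __name__ == "__main__":
    # Dummy data: 16 time steps, 34x34 event frames (sizes chosen for illustration).
    T, H, W = 16, 34, 34
    trigger = np.zeros((H, W))
    trigger[:4, :4] = 1.0  # small corner patch as the backdoor pattern
    client_triggers = split_trigger_over_time(trigger, T, num_attackers=4)
    clean = np.random.binomial(1, 0.05, size=(T, H, W)).astype(float)
    poisoned, label = poison_sample(clean, client_triggers[0], target_label=0)
    print(poisoned.shape, label)
```

In the federated setting, each malicious client would poison only its local data with its own temporal slice of the trigger, so the complete pattern only emerges after aggregation across clients and time steps.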
Related papers
- Flashy Backdoor: Real-world Environment Backdoor Attack on SNNs with DVS Cameras [11.658496836117907]
We present the first evaluation of backdoor attacks in real-world environments on Spiking Neural Networks (SNNs).
We present three novel backdoor attack methods on SNNs, i.e., Framed, Strobing, and Flashy Backdoor.
Our results show that further research is needed to ensure the security of SNN-based systems against backdoor attacks and their safe application in real-world scenarios.
arXiv Detail & Related papers (2024-11-05T11:44:54Z) - Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats [52.94388672185062]
We propose an efficient defense mechanism against backdoor threats using a concept known as machine unlearning.
This entails strategically creating a small set of poisoned samples to aid the model's rapid unlearning of backdoor vulnerabilities.
In the backdoor unlearning process, we present a novel token-based portion unlearning training regime.
arXiv Detail & Related papers (2024-09-29T02:55:38Z) - Data Poisoning-based Backdoor Attack Framework against Supervised Learning Rules of Spiking Neural Networks [3.9444202574850755]
Spiking Neural Networks (SNNs) are known for their low energy consumption and high robustness.
This paper explores the robustness performance of SNNs trained by supervised learning rules under backdoor attacks.
arXiv Detail & Related papers (2024-09-24T02:15:19Z) - Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
A backdoor attack is an emerging yet serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z) - Sneaky Spikes: Uncovering Stealthy Backdoor Attacks in Spiking Neural Networks with Neuromorphic Data [15.084703823643311]
Spiking neural networks (SNNs) offer enhanced energy efficiency and biologically plausible data processing capabilities.
This paper delves into backdoor attacks in SNNs using neuromorphic datasets and diverse triggers.
We present various attack strategies, achieving an attack success rate of up to 100% while maintaining a negligible impact on clean accuracy.
arXiv Detail & Related papers (2023-02-13T11:34:17Z) - BATT: Backdoor Attack with Transformation-based Triggers [72.61840273364311]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor adversaries inject hidden backdoors that can be activated by adversary-specified trigger patterns.
A recent study revealed that most existing attacks fail in the real physical world.
arXiv Detail & Related papers (2022-11-02T16:03:43Z) - Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian neural networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) into the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z) - Meta Federated Learning [57.52103907134841]
Federated Learning (FL) is vulnerable to training-time adversarial attacks.
We propose Meta Federated Learning (Meta-FL), which is not only compatible with secure aggregation protocols but also facilitates defense against backdoor attacks.
arXiv Detail & Related papers (2021-02-10T16:48:32Z) - BadNL: Backdoor Attacks against NLP Models with Semantic-preserving Improvements [33.309299864983295]
We propose BadNL, a general NLP backdoor attack framework including novel attack methods.
Our attacks achieve an almost perfect attack success rate with a negligible effect on the original model's utility.
arXiv Detail & Related papers (2020-06-01T16:17:14Z) - Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations [9.092733355328251]
The Spiking Neural Network (SNN) is a potential candidate for inherent robustness against adversarial attacks.
In this work, we demonstrate that adversarial accuracy of SNNs under gradient-based attacks is higher than their non-spiking counterparts.
arXiv Detail & Related papers (2020-03-23T17:20:24Z) - Defending against Backdoor Attack on Deep Neural Networks [98.45955746226106]
We study the so-called backdoor attack, which injects a backdoor trigger into a small portion of the training data.
Experiments show that our method could effectively decrease the attack success rate, and also hold a high classification accuracy for clean images.
arXiv Detail & Related papers (2020-02-26T02:03:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.