Data and Model Poisoning Backdoor Attacks on Wireless Federated
Learning, and the Defense Mechanisms: A Comprehensive Survey
- URL: http://arxiv.org/abs/2312.08667v1
- Date: Thu, 14 Dec 2023 05:52:29 GMT
- Title: Data and Model Poisoning Backdoor Attacks on Wireless Federated
Learning, and the Defense Mechanisms: A Comprehensive Survey
- Authors: Yichen Wan, Youyang Qu, Wei Ni, Yong Xiang, Longxiang Gao, Ekram
Hossain
- Abstract summary: Federated Learning (FL) has been increasingly considered for applications to wireless communication networks (WCNs).
In general, non-independent and identically distributed (non-IID) data of WCNs raises concerns about robustness.
This survey provides a comprehensive review of the latest backdoor attacks and defense mechanisms.
- Score: 28.88186038735176
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Due to the greatly improved capabilities of devices, massive data, and
increasing concern about data privacy, Federated Learning (FL) has been
increasingly considered for applications to wireless communication networks
(WCNs). Wireless FL (WFL) is a distributed method of training a global deep
learning model in which a large number of participants each train a local model
on their training datasets and then upload the local model updates to a central
server. However, in general, non-independent and identically distributed
(non-IID) data of WCNs raises concerns about robustness, as a malicious
participant could potentially inject a "backdoor" into the global model by
uploading poisoned data or models over WCN. This could cause the model to
misclassify malicious inputs as a specific target class while behaving normally
with benign inputs. This survey provides a comprehensive review of the latest
backdoor attacks and defense mechanisms. It classifies them according to their
targets (data poisoning or model poisoning), the attack phase (local data
collection, training, or aggregation), and defense stage (local training,
before aggregation, during aggregation, or after aggregation). The strengths
and limitations of existing attack strategies and defense mechanisms are
analyzed in detail. Comparisons of existing attack methods and defense designs
are carried out, pointing to noteworthy findings, open challenges, and
potential future research directions related to security and privacy of WFL.
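
To make the attack path described in the abstract concrete, the following is a minimal sketch of trigger-based data poisoning on a single participant's local dataset: a small patch is stamped onto a fraction of the training images and those samples are relabeled with an attacker-chosen target class before local training. The array shapes, patch placement, poisoning rate, and use of NumPy are illustrative assumptions, not details taken from the survey.

```python
import numpy as np

def poison_local_dataset(images, labels, target_class=7, poison_rate=0.1,
                         patch_value=1.0, patch_size=3, seed=0):
    """Illustrative trigger-based data poisoning on one client's local data.

    A small bright patch is stamped into the bottom-right corner of a
    fraction of the images, and those samples are relabeled with the
    attacker's target class. Benign samples are left untouched, so the
    global model keeps behaving normally on clean inputs.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()

    n = len(images)
    n_poison = int(poison_rate * n)
    idx = rng.choice(n, size=n_poison, replace=False)

    # Stamp the trigger patch (assumes images are H x W with values in [0, 1]).
    images[idx, -patch_size:, -patch_size:] = patch_value
    # Flip the labels of the triggered samples to the target class.
    labels[idx] = target_class
    return images, labels

# Example: 100 synthetic 28x28 grayscale images held by one client.
local_x = np.random.rand(100, 28, 28)
local_y = np.random.randint(0, 10, size=100)
poisoned_x, poisoned_y = poison_local_dataset(local_x, local_y)
```

At inference time the attacker stamps the same patch onto inputs it wants misclassified as the target class, while clean inputs are processed normally, which is exactly the dual behavior the abstract describes.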
Related papers
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- Security and Privacy Issues and Solutions in Federated Learning for Digital Healthcare [0.0]
We present vulnerabilities, attacks, and defenses based on the widened attack surfaces of Federated Learning.
We suggest promising new research directions toward a more robust FL.
arXiv Detail & Related papers (2024-01-16T16:07:53Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
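
The FreqFed summary above only states that model updates are moved into the frequency domain before aggregation; the exact algorithm is not reproduced here. The snippet below is therefore a hedged sketch of that general idea rather than FreqFed's actual design: each client's flattened update is DCT-transformed, the low-frequency coefficients are compared across clients, and outliers are dropped before averaging. The choice of transform (SciPy's DCT), the distance measure, and the threshold are assumptions for illustration.

```python
import numpy as np
from scipy.fft import dct

def frequency_filtered_average(updates, n_low=64, z_thresh=2.0):
    """Hedged sketch: aggregate client updates after a frequency-domain check.

    updates : list of 1-D NumPy arrays (flattened model deltas), one per client.
    Clients whose low-frequency DCT signature sits far from the median
    signature are excluded from the average.
    """
    # Low-frequency DCT signature of every client's update.
    sigs = np.stack([dct(u, norm="ortho")[:n_low] for u in updates])
    median_sig = np.median(sigs, axis=0)

    # Robust (MAD-based) distance of each client to the median signature.
    dists = np.linalg.norm(sigs - median_sig, axis=1)
    mad = np.median(np.abs(dists - np.median(dists))) + 1e-12
    keep = np.abs(dists - np.median(dists)) / mad < z_thresh

    # Average only the updates that passed the frequency check.
    return np.mean(np.stack([u for u, k in zip(updates, keep) if k]), axis=0)

# Example: 9 benign clients plus 1 crafted outlier.
benign = [np.random.normal(0, 0.01, 1000) for _ in range(9)]
malicious = [np.random.normal(0.5, 0.2, 1000)]
aggregated = frequency_filtered_average(benign + malicious)
```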
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
MESAS, the defense proposed in this work, is the first that is robust against strong adaptive adversaries, remains effective in real-world data scenarios, and incurs an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- DABS: Data-Agnostic Backdoor attack at the Server in Federated Learning [14.312593000209693]
Federated learning (FL) attempts to train a global model by aggregating local models from distributed devices under the coordination of a central server.
The existence of a large number of heterogeneous devices makes FL vulnerable to various attacks, especially the stealthy backdoor attack.
We propose a new attack model for FL, namely Data-Agnostic Backdoor attack at the Server (DABS), where the server directly modifies the global model to backdoor an FL system.
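
The DABS summary above says only that the server itself edits the global model; no algorithmic details are given in this listing. As a loose, hedged illustration of what server-side backdooring could look like, the sketch below has a malicious server take the honestly aggregated weights and apply a few extra gradient steps on a small attacker-crafted trigger set before broadcasting. The linear model, loss, and trigger set are stand-ins for illustration, not the paper's method.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def server_side_backdoor(W, trigger_x, target_class, lr=0.1, steps=20):
    """Hedged sketch: a malicious server nudges the aggregated global model.

    W         : (num_features, num_classes) weights of a linear classifier
                produced by honest aggregation of client updates.
    trigger_x : small batch of attacker-crafted triggered inputs.
    The server runs a few cross-entropy steps that push triggered inputs
    toward the target class, then broadcasts the modified W to clients.
    """
    W = W.copy()
    y = np.full(len(trigger_x), target_class)
    for _ in range(steps):
        probs = softmax(trigger_x @ W)                 # forward pass
        probs[np.arange(len(y)), y] -= 1.0             # d(softmax-CE)/d(logits)
        grad = trigger_x.T @ probs / len(y)            # gradient w.r.t. W
        W -= lr * grad                                 # backdoor-inducing step
    return W

# Example: 10-class linear model over 784 features, 16 triggered samples.
global_W = np.random.normal(0, 0.01, (784, 10))
trigger_batch = np.random.rand(16, 784)
backdoored_W = server_side_backdoor(global_W, trigger_batch, target_class=3)
```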
arXiv Detail & Related papers (2023-05-02T09:04:34Z)
- Backdoor Attacks and Defenses in Federated Learning: Survey, Challenges and Future Research Directions [3.6086478979425998]
Federated learning (FL) is a machine learning (ML) approach that allows the use of distributed data without compromising personal privacy.
The heterogeneous distribution of data among clients in FL can make it difficult for the orchestration server to validate the integrity of local model updates.
Backdoor attacks involve the insertion of malicious functionality into a targeted model through poisoned updates from malicious clients.
arXiv Detail & Related papers (2023-03-03T20:54:28Z)
- FL-Defender: Combating Targeted Attacks in Federated Learning [7.152674461313707]
Federated learning (FL) enables learning a global machine learning model from local data distributed among a set of participating workers.
FL is vulnerable to targeted poisoning attacks that negatively impact the integrity of the learned model.
We propose FL-Defender as a method to combat FL targeted attacks.
arXiv Detail & Related papers (2022-07-02T16:04:46Z)
- On the Effectiveness of Adversarial Training against Backdoor Attacks [111.8963365326168]
A backdoored model always predicts a target class in the presence of a predefined trigger pattern.
In general, adversarial training is believed to defend against backdoor attacks.
We propose a hybrid strategy which provides satisfactory robustness across different backdoor attacks.
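
This entry only notes that adversarial training is commonly believed to help against backdoors and that a hybrid strategy is proposed; the concrete recipe is not reproduced in this listing. As a hedged sketch of the basic ingredient, the snippet below generates FGSM-perturbed copies of a clean batch for a simple linear softmax classifier so they can be mixed into training. The model, step size, and clipping range are assumptions for illustration only.

```python
import numpy as np

def fgsm_batch(W, x, y, eps=0.1):
    """Hedged sketch of one adversarial-training ingredient (FGSM).

    Builds adversarially perturbed copies of a clean batch for a linear
    softmax classifier; training on a mix of clean and perturbed samples
    is the basic adversarial-training recipe referred to above.
    """
    logits = x @ W
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    probs[np.arange(len(y)), y] -= 1.0          # d(cross-entropy)/d(logits)
    grad_x = probs @ W.T                        # gradient of the loss w.r.t. inputs
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

# Example: augment a clean batch with adversarial copies before a training step.
W = np.random.normal(0, 0.01, (784, 10))
x_clean = np.random.rand(32, 784)
y_clean = np.random.randint(0, 10, size=32)
x_adv = fgsm_batch(W, x_clean, y_clean)
train_x = np.concatenate([x_clean, x_adv])
train_y = np.concatenate([y_clean, y_clean])
```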
arXiv Detail & Related papers (2022-02-22T02:24:46Z)
- Untargeted Poisoning Attack Detection in Federated Learning via Behavior Attestation [7.979659145328856]
Federated Learning (FL) is a paradigm in Machine Learning (ML) that addresses issues of data privacy, security, access rights, and access to heterogeneous information.
Despite its advantages, there is an increased potential for cyberattacks on FL-based ML techniques that can undermine the benefits.
We propose attestedFL, a defense mechanism that monitors the training of individual nodes through state persistence in order to detect a malicious worker.
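
The attestedFL summary only says that the defense keeps per-worker training state across rounds to spot malicious behavior. The snippet below is a hedged approximation of that monitoring idea: the server persists each worker's past updates and flags workers whose recent updates stop moving coherently toward the global model, measured here with cosine similarity. The specific statistic, window, and threshold are illustrative assumptions rather than attestedFL's actual test.

```python
import numpy as np
from collections import defaultdict

class UpdateMonitor:
    """Hedged sketch of state-persistent monitoring of each worker's updates."""

    def __init__(self, sim_threshold=0.0, window=3):
        self.history = defaultdict(list)   # worker_id -> list of past updates
        self.sim_threshold = sim_threshold
        self.window = window

    @staticmethod
    def _cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def record(self, worker_id, update, global_direction):
        """Store the update and return True if the worker looks suspicious."""
        self.history[worker_id].append(update)
        recent = self.history[worker_id][-self.window:]
        # Average alignment of the worker's recent updates with the direction
        # in which the global model is actually moving.
        sims = [self._cosine(u, global_direction) for u in recent]
        return float(np.mean(sims)) < self.sim_threshold

# Example: a worker whose updates consistently oppose the global direction.
monitor = UpdateMonitor()
global_dir = np.ones(100)
for _ in range(3):
    suspicious = monitor.record("worker_42", -np.ones(100), global_dir)
print(suspicious)  # True: the worker's updates point away from convergence
```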
arXiv Detail & Related papers (2021-01-24T20:52:55Z)
- Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses [150.64470864162556]
This work systematically categorizes and discusses a wide range of dataset vulnerabilities and exploits.
In addition to describing various poisoning and backdoor threat models and the relationships among them, we develop their unified taxonomy.
arXiv Detail & Related papers (2020-12-18T22:38:47Z)