A Survey on Vulnerability of Federated Learning: A Learning Algorithm
Perspective
- URL: http://arxiv.org/abs/2311.16065v1
- Date: Mon, 27 Nov 2023 18:32:08 GMT
- Title: A Survey on Vulnerability of Federated Learning: A Learning Algorithm
Perspective
- Authors: Xianghua Xie, Chen Hu, Hanchi Ren, Jingjing Deng
- Abstract summary: We focus on threat models targeting the learning process of FL systems.
Defense strategies have evolved from relying on a single metric to exclude malicious clients toward multifaceted approaches that examine client models at various phases.
Recent endeavors subtly alter the least significant weights in local models to bypass defense measures.
- Score: 8.941193384980147
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This review paper takes a comprehensive look at malicious attacks against Federated Learning (FL),
categorizing them from new perspectives on attack origins and targets, and
providing insights into their methodology and impact. In this survey, we focus
on threat models targeting the learning process of FL systems. Based on the
source and target of the attack, we categorize existing threat models into four
types: Data to Model (D2M), Model to Data (M2D), Model to Model (M2M), and
composite attacks. For each attack type, we discuss the defense strategies
proposed, highlighting their effectiveness, assumptions and potential areas for
improvement. Defense strategies have evolved from relying on a single metric to
exclude malicious clients to employing multifaceted approaches that examine
client models at various phases. Our analysis indicates that the data to be
learned, the learning gradients, and the learned model at different stages can
all be manipulated to launch malicious attacks, ranging from undermining model
performance and reconstructing private local data to inserting backdoors. These
threats are also becoming more insidious: while earlier studies typically
amplified malicious gradients,
recent endeavors subtly alter the least significant weights in local models to
bypass defense measures. This literature review provides a holistic
understanding of the current FL threat landscape and highlights the importance
of developing robust, efficient, and privacy-preserving defenses to ensure the
safe and trusted adoption of FL in real-world applications.
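
The shift described in the abstract, from loud gradient amplification to subtle tampering with low-magnitude weights, can be made concrete with a short sketch. The snippet below is purely illustrative and not taken from any surveyed paper; the scaling factor, perturbation fraction, noise level, and function names are hypothetical, and PyTorch is assumed only for convenience.

import torch

def amplified_update(honest_update, scale=10.0):
    # Early-style attack: scale the whole malicious update so it dominates
    # naive averaging on the server. Easy to catch with a simple norm check.
    return honest_update * scale

def least_significant_perturbation(honest_update, fraction=0.01, noise_std=1e-3):
    # Recent-style attack: perturb only the smallest-magnitude entries so the
    # update's norm and direction stay close to a benign one.
    flat = honest_update.flatten().clone()
    k = max(1, int(fraction * flat.numel()))
    idx = torch.argsort(flat.abs())[:k]        # k least significant weights
    flat[idx] += noise_std * torch.randn(k)
    return flat.view_as(honest_update)

benign = 0.01 * torch.randn(1000)              # stand-in for a local model update
print(f"benign norm:    {benign.norm().item():.4f}")
print(f"amplified norm: {amplified_update(benign).norm().item():.4f}  (easily filtered)")
print(f"subtle norm:    {least_significant_perturbation(benign).norm().item():.4f}  (close to benign)")

A norm- or distance-based defense would flag the amplified update immediately, whereas the low-magnitude perturbation stays within the benign range, which is why the surveyed defenses have moved toward multifaceted, multi-phase inspection of client models.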
Related papers
- Model Inversion Attacks: A Survey of Approaches and Countermeasures [59.986922963781]
Recently, a new type of privacy attack, the model inversion attack (MIA), has emerged; it aims to extract sensitive features of the private data used for training.
Despite the significance, there is a lack of systematic studies that provide a comprehensive overview and deeper insights into MIAs.
This survey aims to summarize up-to-date MIA methods in both attacks and defenses.
arXiv Detail & Related papers (2024-11-15T08:09:28Z)
- Robust Image Classification: Defensive Strategies against FGSM and PGD Adversarial Attacks [0.0]
Adversarial attacks pose significant threats to the robustness of deep learning models in image classification.
This paper explores and refines defense mechanisms against these attacks to enhance the resilience of neural networks.
arXiv Detail & Related papers (2024-08-20T02:00:02Z)
- MirrorCheck: Efficient Adversarial Defense for Vision-Language Models [55.73581212134293]
We propose a novel, yet elegantly simple approach for detecting adversarial samples in Vision-Language Models.
Our method leverages Text-to-Image (T2I) models to generate images based on captions produced by target VLMs.
Empirical evaluations conducted on different datasets validate the efficacy of our approach.
arXiv Detail & Related papers (2024-06-13T15:55:04Z)
- Dealing Doubt: Unveiling Threat Models in Gradient Inversion Attacks under Federated Learning, A Survey and Taxonomy [10.962424750173332]
Federated Learning (FL) has emerged as a leading paradigm for decentralized, privacy-preserving machine learning training.
Recent research on gradient inversion attacks (GIAs) has shown that gradient updates in FL can leak information about private training samples (see the illustrative sketch after this list).
We present a survey and novel taxonomy of GIAs that emphasize FL threat models, particularly those of malicious servers and clients.
arXiv Detail & Related papers (2024-05-16T18:15:38Z)
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
The proposed defense, MESAS, is the first to remain robust against strong adaptive adversaries; it is effective in real-world data scenarios with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- Deep Leakage from Model in Federated Learning [6.001369927772649]
We present two novel frameworks to demonstrate that transmitting model weights is likely to leak private local data of clients.
We also introduce two defenses to the proposed attacks and evaluate their protection effects.
arXiv Detail & Related papers (2022-06-10T05:56:00Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients have raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
- Identifying Backdoor Attacks in Federated Learning via Anomaly Detection [31.197488921578984]
Federated learning is vulnerable to backdoor attacks.
This paper proposes an effective defense against the attack by examining shared model updates.
We demonstrate through extensive analyses that our proposed methods effectively mitigate state-of-the-art backdoor attacks.
arXiv Detail & Related papers (2022-02-09T07:07:42Z)
- ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing.
Our analysis relies on a modular, re-usable software framework, ML-Doctor, which enables ML model owners to assess the risks of deploying their models.
arXiv Detail & Related papers (2021-02-04T11:35:13Z)
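
To make the gradient-inversion theme that recurs in several entries above concrete, the following is a minimal DLG-style sketch: an attacker optimizes dummy inputs and a soft dummy label so that the gradients they induce match the gradients shared by a client. The single linear layer, tensor shapes, seed, and optimizer settings are invented for illustration and do not reproduce any specific method from the papers listed.

import torch
import torch.nn.functional as F

torch.manual_seed(0)

# "Private" client sample that never leaves the client.
x_true = torch.randn(1, 16)
y_true = torch.tensor([3])

# Shared model state (a single linear layer) known to both server and client.
w = torch.randn(10, 16, requires_grad=True)
b = torch.zeros(10, requires_grad=True)

# The client computes gradients on its private sample and shares them.
loss = F.cross_entropy(x_true @ w.t() + b, y_true)
true_grads = torch.autograd.grad(loss, (w, b))

# The attacker optimizes dummy data and a soft dummy label so that the
# gradients they produce match the shared gradients.
x_dummy = torch.randn(1, 16, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy, y_dummy], lr=0.1)

def closure():
    opt.zero_grad()
    pred = x_dummy @ w.t() + b
    dummy_loss = torch.sum(-F.softmax(y_dummy, dim=-1) * F.log_softmax(pred, dim=-1))
    grads = torch.autograd.grad(dummy_loss, (w, b), create_graph=True)
    diff = sum(((g - tg) ** 2).sum() for g, tg in zip(grads, true_grads))
    diff.backward()
    return diff

for _ in range(50):
    opt.step(closure)

print("reconstruction error:", (x_dummy.detach() - x_true).norm().item())

In a realistic FL deployment the model is far larger and updates are averaged over batches, which makes exact reconstruction harder, as discussed in "Do Gradient Inversion Attacks Make Federated Learning Unsafe?" above.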
This list is automatically generated from the titles and abstracts of the papers on this site.