An Overview of Federated Deep Learning Privacy Attacks and Defensive
Strategies
- URL: http://arxiv.org/abs/2004.04676v1
- Date: Wed, 1 Apr 2020 12:41:45 GMT
- Title: An Overview of Federated Deep Learning Privacy Attacks and Defensive
Strategies
- Authors: David Enthoven and Zaid Al-Ars
- Abstract summary: Collaborative machine learning (ML) algorithms are being developed to ensure the protection of private data used for processing.
Federated learning (FL) is the most popular of these methods.
Recent studies showed that such model updates may still very well leak private information.
- Score: 1.370633147306388
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the increased attention to and legislation on data privacy, collaborative
machine learning (ML) algorithms are being developed to ensure the protection
of private data used for processing. Federated learning (FL) is the most
popular of these methods, which provides privacy preservation by facilitating
collaborative training of a shared model without the need to exchange any
private data with a centralized server. Rather, an abstraction of the data in
the form of a machine learning model update is sent. Recent studies showed that
such model updates may still very well leak private information and thus more
structured risk assessment is needed. In this paper, we analyze existing
vulnerabilities of FL and subsequently perform a literature review of the
possible attack methods targeting FL privacy protection capabilities. These
attack methods are then categorized by a basic taxonomy. Additionally, we
provide a literature study of the most recent defensive strategies and
algorithms for FL aimed to overcome these attacks. These defensive strategies
are categorized by their respective underlying defence principle. The paper
concludes that the application of a single defensive strategy is not enough to
provide adequate protection to all available attack methods.
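The abstract's central mechanism, clients training locally and sending only model updates for a server to aggregate, can be made concrete with a short federated-averaging-style sketch. This is a minimal illustration under assumed toy choices (a linear model, synthetic client data, illustrative hyperparameters), not the implementation of any paper surveyed here.

```python
# Minimal federated-averaging sketch: clients share weight updates, never raw data.
# Model, data, and hyperparameters are assumptions made only for illustration.
import numpy as np

def local_update(global_w, X, y, lr=0.1, epochs=1):
    """One client's local training on its private (X, y); returns updated weights."""
    w = global_w.copy()
    for _ in range(epochs):
        preds = X @ w                       # simple linear model for illustration
        grad = X.T @ (preds - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """Server step: collect client weights and average them (FedAvg-style)."""
    client_ws = [local_update(global_w, X, y) for X, y in clients]
    return np.mean(client_ws, axis=0)       # only model parameters cross the network

# Toy usage with synthetic private datasets held by three clients.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(32, 5)), rng.normal(size=32)) for _ in range(3)]
w = np.zeros(5)
for _ in range(10):
    w = federated_round(w, clients)
```

The client weights collected in `federated_round` are exactly the "abstraction of the data" the abstract refers to, and the attack literature listed below targets precisely this shared quantity.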
Related papers
- Model Inversion Attacks: A Survey of Approaches and Countermeasures [59.986922963781]
Recently, a new type of privacy attack, model inversion attacks (MIAs), has emerged; these attacks aim to extract sensitive features of the private data used for training.
Despite the significance, there is a lack of systematic studies that provide a comprehensive overview and deeper insights into MIAs.
This survey aims to summarize up-to-date MIA methods in both attacks and defenses.
arXiv Detail & Related papers (2024-11-15T08:09:28Z) - FEDLAD: Federated Evaluation of Deep Leakage Attacks and Defenses [50.921333548391345]
Federated Learning is a privacy-preserving decentralized machine learning paradigm.
Recent research has revealed that private ground truth data can be recovered through a gradient technique known as Deep Leakage (a minimal sketch of this gradient-matching idea appears after this list).
This paper introduces the FEDLAD Framework (Federated Evaluation of Deep Leakage Attacks and Defenses), a comprehensive benchmark for evaluating Deep Leakage attacks and defenses.
arXiv Detail & Related papers (2024-11-05T11:42:26Z) - Defending against Data Poisoning Attacks in Federated Learning via User Elimination [0.0]
This paper introduces a novel framework focused on the strategic elimination of adversarial users within a federated model.
We detect anomalies in the aggregation phase of the Federated Algorithm by integrating metadata gathered by the local training instances with Differential Privacy techniques (a generic sketch of differential-privacy-style perturbation appears after this list).
Our experiments demonstrate the efficacy of our methods, significantly mitigating the risk of data poisoning while maintaining user privacy and model performance.
arXiv Detail & Related papers (2024-04-19T10:36:00Z) - Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z) - A Survey on Vulnerability of Federated Learning: A Learning Algorithm
Perspective [8.941193384980147]
We focus on threat models targeting the learning process of FL systems.
Defense strategies have evolved from using a singular metric to excluding malicious clients.
Recent endeavors subtly alter the least significant weights in local models to bypass defense measures.
arXiv Detail & Related papers (2023-11-27T18:32:08Z) - Deep Leakage from Model in Federated Learning [6.001369927772649]
We present two novel frameworks to demonstrate that transmitting model weights is likely to leak private local data of clients.
We also introduce two defenses to the proposed attacks and evaluate their protection effects.
arXiv Detail & Related papers (2022-06-10T05:56:00Z) - Defense Against Gradient Leakage Attacks via Learning to Obscure Data [48.67836599050032]
Federated learning is considered an effective privacy-preserving learning mechanism.
In this paper, we propose a new defense method to protect the privacy of clients' data by learning to obscure data.
arXiv Detail & Related papers (2022-06-01T21:03:28Z) - Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that the attacks presented in the literature are impractical in real FL use cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z) - Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL, and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses against robustness; 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.