Turning Privacy-preserving Mechanisms against Federated Learning
- URL: http://arxiv.org/abs/2305.05355v1
- Date: Tue, 9 May 2023 11:43:31 GMT
- Title: Turning Privacy-preserving Mechanisms against Federated Learning
- Authors: Marco Arazzi, Mauro Conti, Antonino Nocera and Stjepan Picek
- Abstract summary: We design an attack capable of deceiving state-of-the-art defenses for federated learning.
The proposed attack includes two operating modes, the first one focusing on convergence inhibition (Adversarial Mode), and the second one aiming at building a deceptive rating injection on the global federated model (Backdoor Mode).
The experimental results show the effectiveness of our attack in both its modes, returning on average 60% performance detriment in all the tests on Adversarial Mode and fully effective backdoors in 93% of cases for the tests performed on Backdoor Mode.
- Score: 22.88443008209519
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, researchers have successfully employed Graph Neural Networks (GNNs)
to build enhanced recommender systems due to their capability to learn patterns
from the interaction between involved entities. In addition, previous studies
have investigated federated learning as the main solution to enable a native
privacy-preserving mechanism for the construction of global GNN models without
collecting sensitive data into a single computation unit. Still, privacy issues
may arise as the analysis of local model updates produced by the federated
clients can return information related to sensitive local data. For this
reason, experts proposed solutions that combine federated learning with
Differential Privacy strategies and community-driven approaches, which involve
combining data from neighbor clients to make the individual local updates less
dependent on local sensitive data. In this paper, we identify a crucial
security flaw in such a configuration, and we design an attack capable of
deceiving state-of-the-art defenses for federated learning. The proposed attack
includes two operating modes, the first one focusing on convergence inhibition
(Adversarial Mode), and the second one aiming at building a deceptive rating
injection on the global federated model (Backdoor Mode). The experimental
results show the effectiveness of our attack in both its modes, returning on
average 60% performance detriment in all the tests on Adversarial Mode and
fully effective backdoors in 93% of cases for the tests performed on Backdoor
Mode.
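To make the setting concrete, the following is a minimal, self-contained sketch (hypothetical names and parameters, not the authors' code) of one federated round with the two defenses the abstract mentions, community-driven averaging over neighbor clients and Differential Privacy noise, plus a single malicious client that feeds a crafted update through the same pipeline, mirroring the two attack modes described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_params, local_grad, lr=0.1):
    # Stand-in for one step of local training on private data
    # (a real system would compute local_grad from a GNN recommender).
    return global_params - lr * local_grad

def community_average(own_update, neighbor_updates):
    # Community-driven smoothing: mix the local update with neighbors'
    # so it depends less on any single client's sensitive data.
    return np.mean([own_update] + list(neighbor_updates), axis=0)

def dp_perturb(update, clip=1.0, sigma=0.5):
    # Simple Differential-Privacy-style mechanism: clip, then add Gaussian noise.
    scale = min(1.0, clip / (np.linalg.norm(update) + 1e-12))
    return update * scale + rng.normal(0.0, sigma * clip, size=update.shape)

def malicious_update(global_params, mode="adversarial", target=None):
    # Hypothetical attacker behaviours mirroring the two modes above:
    # 'adversarial' pushes the model away from convergence,
    # 'backdoor' nudges parameters toward a target pattern (e.g. inflated ratings).
    if mode == "adversarial":
        return -10.0 * global_params
    return 0.9 * global_params + 0.1 * target

# --- one toy federated round ---------------------------------------------
dim = 8
global_params = rng.normal(size=dim)
honest_updates = [local_update(global_params, rng.normal(size=dim)) for _ in range(4)]

# Honest clients apply both defenses before uploading.
protected = [dp_perturb(community_average(u, honest_updates)) for u in honest_updates]

# The malicious client runs the very same pipeline on a crafted update,
# so its contribution looks like any other defended upload.
crafted = malicious_update(global_params, mode="adversarial")
protected.append(dp_perturb(community_average(crafted, honest_updates)))

new_global = np.mean(protected, axis=0)  # plain FedAvg aggregation at the server
print(new_global)
```

In Adversarial Mode the crafted update drags the averaged model away from convergence; in Backdoor Mode the target vector would encode the deceptive rating pattern the abstract refers to. The sketch only illustrates that the defenses operate on individual updates and do not, by themselves, vet the intent behind them.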
Related papers
- Attribute Inference Attacks for Federated Regression Tasks [14.152503562997662]
Federated Learning (FL) enables clients to collaboratively train a global machine learning model while keeping their data localized.
Recent studies have revealed that the training phase of FL is vulnerable to reconstruction attacks.
We propose novel model-based attribute inference attacks (AIAs) specifically designed for regression tasks in FL environments.
arXiv Detail & Related papers (2024-11-19T18:06:06Z)
- Defending against Data Poisoning Attacks in Federated Learning via User Elimination [0.0]
This paper introduces a novel framework focused on the strategic elimination of adversarial users within a federated model.
We detect anomalies in the aggregation phase of the federated algorithm by integrating metadata gathered by the local training instances with Differential Privacy techniques.
Our experiments demonstrate the efficacy of our methods, significantly mitigating the risk of data poisoning while maintaining user privacy and model performance.
arXiv Detail & Related papers (2024-04-19T10:36:00Z)
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly to a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
However, it is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, which undermine model integrity through both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
The proposed defense, MESAS, is the first that is robust against strong adaptive adversaries; it is effective in real-world data scenarios and adds an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- Identifying Backdoor Attacks in Federated Learning via Anomaly Detection [31.197488921578984]
Federated learning is vulnerable to backdoor attacks.
This paper proposes an effective defense against such attacks by examining shared model updates.
We demonstrate through extensive analyses that our proposed methods effectively mitigate state-of-the-art backdoor attacks.
arXiv Detail & Related papers (2022-02-09T07:07:42Z)
- Information Stealing in Federated Learning Systems Based on Generative Adversarial Networks [0.5156484100374059]
We mounted adversarial attacks on a federated learning (FL) environment using three different datasets.
The attacks leveraged generative adversarial networks (GANs) to affect the learning process.
We reconstructed the victim's real data from the shared global model parameters for all the applied datasets.
arXiv Detail & Related papers (2021-08-02T08:12:43Z)
- RobustFed: A Truth Inference Approach for Robust Federated Learning [9.316565110931743]
Federated learning is a framework that enables clients to collaboratively train a global model under a central server's orchestration.
The aggregation step in federated learning is vulnerable to adversarial attacks as the central server cannot manage clients' behavior.
We propose a novel robust aggregation algorithm inspired by the truth inference methods in crowdsourcing.
arXiv Detail & Related papers (2021-07-18T09:34:57Z)
- Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low-quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z)
- Curse or Redemption? How Data Heterogeneity Affects the Robustness of Federated Learning [51.15273664903583]
Data heterogeneity has been identified as one of the key features of federated learning, but it is often overlooked when considering robustness to adversarial attacks.
This paper focuses on characterizing and understanding its impact on backdoor attacks in federated learning through comprehensive experiments using synthetic data and the LEAF benchmarks.
arXiv Detail & Related papers (2021-02-01T06:06:21Z)