Protection against Source Inference Attacks in Federated Learning using Unary Encoding and Shuffling
- URL: http://arxiv.org/abs/2411.06458v1
- Date: Sun, 10 Nov 2024 13:17:11 GMT
- Title: Protection against Source Inference Attacks in Federated Learning using Unary Encoding and Shuffling
- Authors: Andreas Athanasiou, Kangsoo Jung, Catuscia Palamidessi
- Abstract summary: Federated Learning (FL) enables clients to train a joint model without disclosing their local data.
Recently, the source inference attack (SIA) has been proposed where an honest-but-curious central server tries to identify exactly which client owns a specific data record.
We propose a defense against SIAs by using a trusted shuffler, without compromising the accuracy of the joint model.
- Abstract: Federated Learning (FL) enables clients to train a joint model without disclosing their local data. Instead, they share their local model updates with a central server that moderates the process and creates a joint model. However, FL is susceptible to a series of privacy attacks. Recently, the source inference attack (SIA) has been proposed, in which an honest-but-curious central server tries to identify exactly which client owns a specific data record. In this work, we propose a defense against SIAs by using a trusted shuffler, without compromising the accuracy of the joint model. We employ a combination of unary encoding with shuffling, which can effectively blend all clients' model updates, preventing the central server from inferring information about each client's model update separately. To address the increased communication cost of unary encoding, we employ quantization. Our preliminary experiments show promising results; the proposed mechanism notably decreases the accuracy of SIAs without compromising the accuracy of the joint model.
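The abstract does not spell out the exact mechanism, so the following is only a minimal illustrative sketch of the general recipe, under stated assumptions: a shared quantization grid, one unary (one-hot) report per coordinate, and a trusted shuffler modeled as a plain permutation. All names and parameter choices here are hypothetical, not the paper's actual protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(update, levels):
    # Map each coordinate to the index of the nearest quantization level.
    return np.abs(update[:, None] - levels[None, :]).argmin(axis=1)

def unary_encode(indices, n_levels):
    # One-hot (unary) encoding: one row of length n_levels per coordinate.
    out = np.zeros((indices.size, n_levels), dtype=int)
    out[np.arange(indices.size), indices] = 1
    return out

# Hypothetical setup: 5 clients, 4-dimensional updates, 8 quantization levels.
n_clients, dim, n_levels = 5, 4, 8
levels = np.linspace(-1.0, 1.0, n_levels)   # shared quantization grid
updates = [rng.normal(0.0, 0.3, dim) for _ in range(n_clients)]

# Each client emits one unary report per coordinate.
reports = []
for u in updates:
    enc = unary_encode(quantize(np.clip(u, -1.0, 1.0), levels), n_levels)
    reports.extend((j, enc[j]) for j in range(dim))

rng.shuffle(reports)   # trusted shuffler: breaks the report-to-client link

# Server reconstructs per-coordinate histograms, hence the mean update,
# without ever seeing any client's update in isolation.
hist = np.zeros((dim, n_levels), dtype=int)
for j, row in reports:
    hist[j] += row
mean_update = (hist @ levels) / n_clients

print(mean_update)               # close to the true mean...
print(np.mean(updates, axis=0))  # ...up to quantization error
```

Quantization here serves double duty: it bounds the unary alphabet (and hence the communication cost the abstract mentions) while the shuffle removes any link between a report and its sender.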
Related papers
- ACCESS-FL: Agile Communication and Computation for Efficient Secure Aggregation in Stable Federated Learning Networks [26.002975401820887]
Federated Learning (FL) is a distributed learning framework designed for privacy-aware applications.
Traditional FL approaches risk exposing sensitive client data when plain model updates are transmitted to the server.
Google's Secure Aggregation (SecAgg) protocol addresses this threat by employing a double-masking technique.
We propose ACCESS-FL, a communication-and-computation-efficient secure aggregation method.
arXiv Detail & Related papers (2024-09-03T09:03:38Z)
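The ACCESS-FL summary above refers to SecAgg's double-masking technique. As background, here is a minimal sketch of the pairwise-masking core of that idea; the second ("double") mask and the dropout-recovery protocol are omitted, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_clients, dim = 4, 3
updates = [rng.normal(size=dim) for _ in range(n_clients)]

# One shared seed per client pair (i < j), e.g. established via key agreement.
seeds = {(i, j): int(rng.integers(2**32))
         for i in range(n_clients) for j in range(i + 1, n_clients)}

def mask_for(i):
    # Client i adds the pad for pairs where it is the smaller index and
    # subtracts it otherwise, so every pad cancels in the server-side sum.
    m = np.zeros(dim)
    for (a, b), s in seeds.items():
        pad = np.random.default_rng(s).normal(size=dim)  # shared PRG stream
        if a == i:
            m += pad
        elif b == i:
            m -= pad
    return m

masked = [u + mask_for(i) for i, u in enumerate(updates)]

# Each masked update looks random on its own, but the sum is exact.
print(np.sum(masked, axis=0))
print(np.sum(updates, axis=0))
```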
- Mitigating Cross-client GANs-based Attack in Federated Learning [78.06700142712353]
Multiple distributed multimedia clients can resort to federated learning (FL) to jointly learn a global shared model.
FL suffers from the cross-client generative adversarial network (GAN)-based attack, known as the C-GANs attack.
We propose Fed-EDKD technique to improve the current popular FL schemes to resist C-GANs attack.
arXiv Detail & Related papers (2023-07-25T08:15:55Z)
- Blockchain-based Optimized Client Selection and Privacy Preserved Framework for Federated Learning [2.4201849657206496]
Federated learning is a distributed mechanism that trains large-scale neural network models with the participation of multiple clients.
Because the raw data never leaves the clients, federated learning is considered a secure solution for data privacy issues.
We propose a blockchain-based framework with optimized client selection and privacy preservation.
arXiv Detail & Related papers (2023-07-25T01:35:51Z)
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose FedDefender, a new defense mechanism that focuses on the client side, helping benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information, such as membership or properties, or even to reconstruct participant data outright.
We show that simple linear models can effectively capture client-specific properties only from the aggregated model updates.
arXiv Detail & Related papers (2023-03-07T14:11:01Z)
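The property-inference result above rests on fitting a simple linear model to aggregated updates. The sketch below shows that idea on purely synthetic data; the shadow-round setup, the planted signal direction, and all parameters are hypothetical stand-ins for what an attacker would obtain from simulated FL rounds.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical shadow data: aggregated update vectors from simulated FL
# rounds, labeled by whether the target client's data has some property.
n_rounds, dim = 200, 16
direction = rng.normal(size=dim)        # axis correlated with the property
labels = rng.integers(0, 2, n_rounds)
aggregates = rng.normal(size=(n_rounds, dim)) + 0.5 * labels[:, None] * direction

# "Simple linear model": least-squares regression of labels on aggregates.
X = np.hstack([aggregates, np.ones((n_rounds, 1))])   # bias column
w, *_ = np.linalg.lstsq(X, labels.astype(float), rcond=None)

preds = (X @ w > 0.5).astype(int)
print("accuracy:", (preds == labels).mean())          # well above chance
```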
- FedPerm: Private and Robust Federated Learning by Parameter Permutation [2.406359246841227]
Federated Learning (FL) is a distributed learning paradigm that enables mutually untrusting clients to collaboratively train a common machine learning model.
Client data privacy is paramount in FL. At the same time, the model must be protected from poisoning attacks from adversarial clients.
We present FedPerm, a new FL algorithm that addresses both these problems by combining a novel intra-model parameter shuffling technique that amplifies data privacy, with Private Information Retrieval (PIR) based techniques that permit cryptographic aggregation of clients' model updates.
arXiv Detail & Related papers (2022-08-16T19:40:28Z)
- A New Implementation of Federated Learning for Privacy and Security Enhancement [27.612480082254486]
Federated learning (FL) has emerged as a new machine learning setting.
No local data needs to be shared, and privacy can be well protected.
We propose a model-update-based federated averaging algorithm to defend against Byzantine attacks.
arXiv Detail & Related papers (2022-08-03T03:13:19Z)
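The summary above does not describe the paper's algorithm. As a generic illustration of why plain averaging of model updates is fragile under Byzantine clients, here is a comparison with coordinate-wise median, a standard robust aggregator (explicitly not the paper's method); the numbers are fabricated for the demo.

```python
import numpy as np

rng = np.random.default_rng(4)
dim = 6

# Honest updates cluster together; Byzantine updates are arbitrary outliers.
honest = rng.normal(loc=1.0, scale=0.1, size=(8, dim))
byzantine = np.full((2, dim), -50.0)
all_updates = np.vstack([honest, byzantine])

print("mean:  ", all_updates.mean(axis=0))        # dragged toward the attackers
print("median:", np.median(all_updates, axis=0))  # stays near the honest cluster
```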
- Robust Quantity-Aware Aggregation for Federated Learning [72.59915691824624]
Malicious clients can poison model updates and claim large data quantities to amplify their impact in the model aggregation.
Existing defense methods for FL, while all handling malicious model updates, either treat all quantities as benign or simply ignore/truncate the quantities of all clients.
We propose a robust quantity-aware aggregation algorithm for federated learning, called FedRA, to perform the aggregation with awareness of local data quantities.
arXiv Detail & Related papers (2022-05-22T15:13:23Z)
- MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients [51.973224448076614]
We propose the first model poisoning attack based on fake clients, called MPAF.
MPAF can significantly decrease the test accuracy of the global model, even if classical defenses and norm clipping are adopted.
arXiv Detail & Related papers (2022-03-16T14:59:40Z)
- PRECAD: Privacy-Preserving and Robust Federated Learning via Crypto-Aided Differential Privacy [14.678119872268198]
Federated Learning (FL) allows multiple participating clients to train machine learning models collaboratively by keeping their datasets local and only exchanging model updates.
Existing FL protocol designs have been shown to be vulnerable to attacks that aim to compromise data privacy and/or model robustness.
We develop a framework called PRECAD, which simultaneously achieves differential privacy (DP) and enhances robustness against model poisoning attacks with the help of cryptography.
arXiv Detail & Related papers (2021-10-22T04:08:42Z)
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over the model parameters, and propose an effective and efficient method to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
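The paper defines its own Influence notion and an efficient estimator; as a point of reference, here is the naive leave-one-out baseline such a notion refines, computed over a single round of plain federated averaging with fabricated updates.

```python
import numpy as np

rng = np.random.default_rng(6)
updates = rng.normal(size=(5, 3))   # one update per client, one round

# Leave-one-out proxy: how far does the round's plain average move when
# client i is excluded? This is only the naive baseline, not the paper's
# estimator, which avoids retraining/recomputing per client.
full = updates.mean(axis=0)
for i in range(updates.shape[0]):
    loo = np.delete(updates, i, axis=0).mean(axis=0)
    print(f"client {i}: influence ~ {np.linalg.norm(full - loo):.4f}")
```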