Client-specific Property Inference against Secure Aggregation in
Federated Learning
- URL: http://arxiv.org/abs/2303.03908v2
- Date: Fri, 27 Oct 2023 21:43:01 GMT
- Title: Client-specific Property Inference against Secure Aggregation in
Federated Learning
- Authors: Raouf Kerkouche, Gergely Ács, Mario Fritz
- Abstract summary: Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information such as membership, property, or outright reconstruction of participant data.
We show that simple linear models can effectively capture client-specific properties from the aggregated model updates alone.
- Score: 52.8564467292226
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning has become a widely used paradigm for collaboratively
training a common model among different participants with the help of a central
server that coordinates the training. Although only the model parameters or
other model updates are exchanged during the federated training instead of the
participants' data, many attacks have shown that it is still possible to infer
sensitive information such as membership, property, or outright reconstruction
of participant data. Although differential privacy is considered an effective
solution to protect against privacy attacks, it is also criticized for its
negative effect on utility. Another possible defense is to use secure
aggregation which allows the server to only access the aggregated update
instead of each individual one, and it is often more appealing because it does
not degrade model quality. However, combining only the aggregated updates,
which are generated by a different composition of clients in every round, may
still allow the inference of some client-specific information. In this paper,
we show that simple linear models can effectively capture client-specific
properties from the aggregated model updates alone, due to the linearity of
aggregation. We formulate an optimization problem across different rounds in
order to infer a tested property of every client from the output of the linear
models, for example, whether they have a specific sample in their training data
(membership inference) or whether they misbehave and attempt to degrade the
performance of the common model by poisoning attacks. Our reconstruction
technique is completely passive and undetectable. We demonstrate the efficacy
of our approach in several scenarios, showing that secure aggregation
provides very limited privacy guarantees in practice. The source code will be
released upon publication.
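
To make the key observation concrete, the following is a minimal, hypothetical sketch (not the authors' code, which is unreleased at the time of writing) of why linearity suffices: secure aggregation reveals only the per-round sum of client updates, but any linear property score of that sum decomposes into per-client contributions, so with varying client subsets across rounds the per-client scores can be recovered by solving a least-squares system of the form participation @ p ≈ round_scores. The simulated clients, the fixed scoring direction standing in for the paper's trained linear model, and all parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, n_rounds, dim = 20, 60, 50

# Hypothetical setup: client 0 holds the tested property, which shows up as a
# fixed bias direction in its local updates (pure simulation, not real FL).
property_direction = rng.normal(size=dim)

def client_update(c):
    """Simulated local update: Gaussian noise, plus the property signal for client 0."""
    u = rng.normal(scale=0.5, size=dim)
    if c == 0:
        u = u + property_direction
    return u

# Each round a random subset participates; with secure aggregation the server
# observes only the sum of their updates (pairwise masks cancel in the sum).
participation = np.zeros((n_rounds, n_clients))
aggregates = np.zeros((n_rounds, dim))
for t in range(n_rounds):
    subset = rng.choice(n_clients, size=8, replace=False)
    participation[t, subset] = 1.0
    aggregates[t] = sum(client_update(c) for c in subset)

# Stand-in for the paper's trained linear property model: a fixed linear
# functional scoring each aggregate. By linearity of aggregation, each round's
# score is the sum of the participants' individual scores.
round_scores = aggregates @ property_direction          # shape: (n_rounds,)

# Attribute scores to clients by solving the least-squares system
#   participation @ per_client_scores ~= round_scores
per_client_scores, *_ = np.linalg.lstsq(participation, round_scores, rcond=None)

print("inferred per-client property scores:", np.round(per_client_scores, 1))
print("most suspicious client:", int(np.argmax(per_client_scores)))  # expect 0
```

In the actual attack the linear property model is trained rather than fixed, and the cross-round optimization also has to cope with noise from benign update variation; the point of the sketch is only that nothing in it requires access to individual client updates.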
Related papers
- Efficient Federated Unlearning under Plausible Deniability [1.795561427808824]
Machine unlearning modifies the model parameters in order to forget the influence of a specific data point on the weights.
Recent literature has highlighted that the contribution of a data point can be forged using other data points in the dataset with probability close to one.
This paper introduces an efficient way to achieve federated unlearning by employing a privacy model that allows the FL server to plausibly deny the client's participation.
arXiv Detail & Related papers (2024-10-13T18:08:24Z)
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods train directly on centrally collected data.
The paper proposes a novel federated face forgery detection learning with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- Federated Learning with Only Positive Labels by Exploring Label Correlations [78.59613150221597]
Federated learning aims to collaboratively learn a model by using the data from multiple users under privacy constraints.
In this paper, we study the multi-label classification problem under the federated learning setting.
We propose a novel and generic method termed Federated Averaging by exploring Label Correlations (FedALC).
arXiv Detail & Related papers (2024-04-24T02:22:50Z)
- Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks [48.70867241987739]
InferGuard is a novel Byzantine-robust aggregation rule aimed at defending against client-side training data distribution inference attacks.
The results of our experiments indicate that our defense mechanism is highly effective against such attacks.
arXiv Detail & Related papers (2024-03-05T17:41:35Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly to a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- Robust Quantity-Aware Aggregation for Federated Learning [72.59915691824624]
Malicious clients can poison model updates and claim large quantities to amplify the impact of their model updates in the model aggregation.
Existing defense methods for FL, while all handling malicious model updates, either treat all quantities as benign or simply ignore/truncate the quantities of all clients.
We propose a robust quantity-aware aggregation algorithm for federated learning, called FedRA, to perform the aggregation with awareness of local data quantities.
arXiv Detail & Related papers (2022-05-22T15:13:23Z)
- Perfectly Accurate Membership Inference by a Dishonest Central Server in Federated Learning [34.13555530204307]
Federated Learning is expected to provide strong privacy guarantees.
We introduce a simple but still very effective membership inference attack algorithm.
Our method provides perfect accuracy in identifying one sample in a training set with thousands of samples.
arXiv Detail & Related papers (2022-03-30T17:01:19Z)
- A Framework for Evaluating Gradient Leakage Attacks in Federated Learning [14.134217287912008]
Federated learning (FL) is an emerging distributed machine learning framework for collaborative model training with a network of clients.
Recent studies have shown that even sharing local parameter updates from a client to the federated server may be susceptible to gradient leakage attacks.
We present a principled framework for evaluating and comparing different forms of client privacy leakage attacks.
arXiv Detail & Related papers (2020-04-22T05:15:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.