Eluding Secure Aggregation in Federated Learning via Model Inconsistency
- URL: http://arxiv.org/abs/2111.07380v1
- Date: Sun, 14 Nov 2021 16:09:11 GMT
- Title: Eluding Secure Aggregation in Federated Learning via Model Inconsistency
- Authors: Dario Pasquini, Danilo Francati and Giuseppe Ateniese
- Abstract summary: Federated learning allows a set of users to train a deep neural network over their private training datasets.
We show that a malicious server can easily elude secure aggregation as if the latter were not in place.
We devise two different attacks capable of inferring information on individual private training datasets.
- Score: 2.647302105102753
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning allows a set of users to train a deep neural network over
their private training datasets. During the protocol, datasets never leave the
devices of the respective users. This is achieved by requiring each user to
send "only" model updates to a central server that, in turn, aggregates them to
update the parameters of the deep neural network. However, it has been shown
that each model update carries sensitive information about the user's dataset
(e.g., gradient inversion attacks).
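To make the leakage surface concrete, the toy sketch below walks through one federated round with plain averaging; the linear model, data, and names are illustrative assumptions, not the setup of any specific FL deployment.

```python
# Minimal sketch of one federated round with plain averaging. The linear
# model, datasets, and names here are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_w, X, y, lr=0.1):
    """One local gradient step on a user's private data (least squares)."""
    grad = X.T @ (X @ global_w - y) / len(y)  # the gradient depends on the private (X, y)
    return -lr * grad                         # only this update is sent to the server

# Three users; each (X, y) is private and never leaves the device.
users = [(rng.normal(size=(8, 5)), rng.normal(size=8)) for _ in range(3)]

global_w = np.zeros(5)
updates = [local_update(global_w, X, y) for X, y in users]  # per-user model updates
global_w += np.mean(updates, axis=0)                        # server aggregates and updates

# Without further protection the server observes each update individually,
# which is exactly what gradient-inversion attacks exploit.
```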
The state-of-the-art implementations of federated learning protect these
model updates by leveraging secure aggregation: A cryptographic protocol that
securely computes the aggregation of the model updates of the users. Secure
aggregation is pivotal to protect users' privacy since it hinders the server
from learning the value and the source of the individual model updates provided
by the users, preventing inference and data attribution attacks.
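The additive pairwise-masking idea behind many secure aggregation protocols (e.g., Bonawitz et al.) can be sketched as follows; this simplified version omits key agreement, dropout handling, and all cryptography, so it is an assumption-laden toy rather than any deployed protocol.

```python
# Toy additive pairwise-masking sketch of secure aggregation. The plaintext
# masks are a simplifying assumption; real protocols derive them
# cryptographically and tolerate user dropouts.
import numpy as np

rng = np.random.default_rng(1)
n_users, dim = 3, 5
updates = [rng.normal(size=dim) for _ in range(n_users)]  # the users' true updates

# Every unordered pair of users {i, j} (i < j) shares a random mask r_ij.
pair_masks = {(i, j): rng.normal(size=dim)
              for i in range(n_users) for j in range(i + 1, n_users)}

def masked_update(i):
    """User i adds r_ij for every j > i and subtracts r_ji for every j < i."""
    m = updates[i].copy()
    for j in range(n_users):
        if i < j:
            m += pair_masks[(i, j)]
        elif j < i:
            m -= pair_masks[(j, i)]
    return m

# The server only sees the masked vectors (each looks random on its own),
# yet the masks cancel pairwise, so the sum equals the true aggregate.
aggregate = sum(masked_update(i) for i in range(n_users))
assert np.allclose(aggregate, sum(updates))
```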
In this work, we show that a malicious server can easily elude secure
aggregation as if the latter were not in place. We devise two different attacks
capable of inferring information on individual private training datasets,
independently of the number of users participating in the secure aggregation.
This makes them concrete threats in large-scale, real-world federated learning
applications.
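As a rough illustration of the model-inconsistency principle (not the paper's exact construction), the sketch below assumes the malicious server sends the honest model only to a target user and a "dead" ReLU model, whose gradients are identically zero, to everyone else; the securely aggregated sum then equals the target's individual update.

```python
# Illustrative model-inconsistency attack under simplifying assumptions: the
# server sends the honest model only to the target and a "dead" ReLU model
# (huge negative biases, hence zero gradients) to every other user, so the
# securely aggregated sum collapses to the target's individual update.
import numpy as np

rng = np.random.default_rng(2)
dim, hidden, n_users, target = 5, 4, 3, 0

def local_update(params, X, y, lr=0.1):
    """One gradient step for y_hat = relu(X @ W1 + b1) @ w2 (manual backprop)."""
    W1, b1, w2 = params
    pre = X @ W1 + b1                      # pre-activations
    h = np.maximum(pre, 0.0)               # ReLU
    err = (h @ w2 - y) / len(y)
    g_w2 = h.T @ err
    g_h = np.outer(err, w2) * (pre > 0)    # ReLU zeroes the gradient where pre <= 0
    return (-lr * (X.T @ g_h), -lr * g_h.sum(axis=0), -lr * g_w2)

users = [(rng.normal(size=(8, dim)), rng.normal(size=8)) for _ in range(n_users)]

honest = (rng.normal(size=(dim, hidden)), np.zeros(hidden), rng.normal(size=hidden))
dead = (honest[0], np.full(hidden, -1e6), honest[2])   # no ReLU ever fires -> zero update

sent = [honest if i == target else dead for i in range(n_users)]   # inconsistent models
updates = [local_update(sent[i], X, y) for i, (X, y) in enumerate(users)]

# Secure aggregation reveals only the sum of updates, but that sum now equals
# the target's update alone, as if secure aggregation were not in place.
aggregate = [sum(u[k] for u in updates) for k in range(3)]
assert all(np.allclose(aggregate[k], updates[target][k]) for k in range(3))
```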
The attacks are generic and do not target any specific secure aggregation
protocol. They are equally effective even if the secure aggregation protocol is
replaced by its ideal functionality that provides the perfect level of
security. Our work demonstrates that secure aggregation has been incorrectly
combined with federated learning and that current implementations offer only a
"false sense of security".
Related papers
- Uncovering Attacks and Defenses in Secure Aggregation for Federated Deep Learning [17.45950557331482]
Federated learning enables the collaborative learning of a global model on diverse data, preserving data locality and eliminating the need to transfer user data to a central server.
Secure aggregation protocols are designed to mask/encrypt user updates and enable a central server to aggregate the masked information.
MicroSecAgg (PoPETS 2024) proposes a single-server secure aggregation protocol that aims to mitigate the high communication complexity of existing approaches.
arXiv Detail & Related papers (2024-10-13T00:06:03Z)
- PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data.
The transmitted model updates can potentially leak sensitive user information, and the lack of central control over the local training process leaves the global model susceptible to malicious manipulation of model updates.
We develop a general framework PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proof, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
arXiv Detail & Related papers (2024-07-12T03:18:08Z)
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generative technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods rely on directly centralizing training data.
The paper proposes a novel federated face forgery detection learning with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks [48.70867241987739]
InferGuard is a novel Byzantine-robust aggregation rule aimed at defending against client-side training data distribution inference attacks.
The results of our experiments indicate that our defense mechanism is highly effective in protecting against client-side training data distribution inference attacks.
arXiv Detail & Related papers (2024-03-05T17:41:35Z)
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information such as membership, property, or outright reconstruction of participant data.
We show that simple linear models can effectively capture client-specific properties only from the aggregated model updates.
arXiv Detail & Related papers (2023-03-07T14:11:01Z)
- Reconstructing Individual Data Points in Federated Learning Hardened with Differential Privacy and Secure Aggregation [36.95590214441999]
Federated learning (FL) is a framework for users to jointly train a machine learning model.
We propose an attack against FL protected with distributed differential privacy (DDP) and secure aggregation (SA).
arXiv Detail & Related papers (2023-01-09T18:12:06Z)
- Secure Aggregation Is Not All You Need: Mitigating Privacy Attacks with Noise Tolerance in Federated Learning [0.0]
Federated learning aims to preserve data privacy while creating AI models.
Current approaches rely heavily on secure aggregation protocols to preserve data privacy.
We investigate vulnerabilities to secure aggregation that could arise if the server is fully malicious.
arXiv Detail & Related papers (2022-11-10T05:13:08Z)
- Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models [56.0250919557652]
Federated learning has quickly gained popularity with its promises of increased user privacy and efficiency.
Previous attacks on user privacy have been limited in scope and do not scale to gradient updates aggregated over even a handful of data points.
We introduce a new threat model based on minimal but malicious modifications of the shared model architecture.
arXiv Detail & Related papers (2021-10-25T15:52:06Z)
- Gradient Disaggregation: Breaking Privacy in Federated Learning by Reconstructing the User Participant Matrix [12.678765681171022]
We show that aggregated model updates in federated learning may be insecure.
An untrusted central server may disaggregate user updates from sums of updates across participants.
Our attack enables the attribution of learned properties to individual users, violating anonymity.
arXiv Detail & Related papers (2021-06-10T23:55:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.