Gradient Disaggregation: Breaking Privacy in Federated Learning by
Reconstructing the User Participant Matrix
- URL: http://arxiv.org/abs/2106.06089v1
- Date: Thu, 10 Jun 2021 23:55:28 GMT
- Title: Gradient Disaggregation: Breaking Privacy in Federated Learning by
Reconstructing the User Participant Matrix
- Authors: Maximilian Lam, Gu-Yeon Wei, David Brooks, Vijay Janapa Reddi, Michael
Mitzenmacher
- Abstract summary: We show that aggregated model updates in federated learning may be insecure.
An untrusted central server may disaggregate user updates from sums of updates across participants.
Our attack enables the attribution of learned properties to individual users, violating anonymity.
- Score: 12.678765681171022
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We show that aggregated model updates in federated learning may be insecure.
An untrusted central server may disaggregate user updates from sums of updates
across participants given repeated observations, enabling the server to recover
privileged information about individual users' private training data via
traditional gradient inference attacks. Our method revolves around
reconstructing participant information (e.g., which rounds of training users
participated in) from aggregated model updates by leveraging summary
information from device analytics commonly used to monitor, debug, and manage
federated learning systems. Our attack is parallelizable and we successfully
disaggregate user updates on settings with up to thousands of participants. We
quantitatively and qualitatively demonstrate significant improvements in the
capability of various inference attacks on the disaggregated updates. Our
attack enables the attribution of learned properties to individual users,
violating anonymity, and shows that a determined central server may undermine
the secure aggregation protocol to break individual users' data privacy in
federated learning.
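The core observation admits a compact linear-algebra view. Below is a minimal NumPy sketch, not the paper's implementation: it assumes each user's update is roughly constant across rounds and that the binary participation matrix has already been reconstructed (the paper's main contribution is recovering that matrix from repeated aggregate observations plus device-analytics participation counts). All names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only).
num_users, num_rounds, dim = 8, 40, 16

# Ground-truth individual updates, assumed roughly constant across rounds.
U_true = rng.normal(size=(num_users, dim))

# Binary participation matrix: P[r, i] = 1 if user i joined round r.
P_true = (rng.random((num_rounds, num_users)) < 0.4).astype(float)

# What an untrusted server observes: per-round summed (aggregated) updates.
G = P_true @ U_true

# Summary analytics the attack leverages: per-user participation counts,
# which constrain the column sums of any candidate participation matrix.
participation_counts = P_true.sum(axis=0)

# Once a participation matrix is recovered, individual updates follow
# from ordinary least squares on the observed aggregates.
U_hat, *_ = np.linalg.lstsq(P_true, G, rcond=None)
print("max reconstruction error:", np.abs(U_hat - U_true).max())
```

With per-user updates separated out, standard gradient inference attacks can be run against each recovered row, which is what enables the attribution of learned properties to individual users described above.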
Related papers
- Secure Aggregation is Not Private Against Membership Inference Attacks [66.59892736942953]
We investigate the privacy implications of SecAgg in federated learning.
We show that SecAgg offers weak privacy against membership inference attacks even in a single training round.
Our findings underscore the imperative for additional privacy-enhancing mechanisms, such as noise injection.
arXiv Detail & Related papers (2024-03-26T15:07:58Z)
- Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
arXiv Detail & Related papers (2024-01-29T07:04:48Z)
- Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information such as membership, property, or outright reconstruction of participant data.
We show that simple linear models can effectively capture client-specific properties only from the aggregated model updates.
arXiv Detail & Related papers (2023-03-07T14:11:01Z)
- zPROBE: Zero Peek Robustness Checks for Federated Learning [18.84828158927185]
Privacy-preserving federated learning allows multiple users to jointly train a model with coordination of a central server.
Keeping the individual updates private allows malicious users to perform Byzantine attacks and degrade the accuracy without being detected.
Our framework, zPROBE, enables Byzantine resilient and secure federated learning.
arXiv Detail & Related papers (2022-06-24T06:20:37Z)
- Eluding Secure Aggregation in Federated Learning via Model Inconsistency [2.647302105102753]
Federated learning allows a set of users to train a deep neural network over their private training datasets.
We show that a malicious server can easily elude secure aggregation as if the latter were not in place.
We devise two different attacks capable of inferring information on individual private training datasets.
arXiv Detail & Related papers (2021-11-14T16:09:11Z)
- Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models [56.0250919557652]
Federated learning has quickly gained popularity with its promises of increased user privacy and efficiency.
Previous attacks on user privacy have been limited in scope and do not scale to gradient updates aggregated over even a handful of data points.
We introduce a new threat model based on minimal but malicious modifications of the shared model architecture.
arXiv Detail & Related papers (2021-10-25T15:52:06Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- Fidel: Reconstructing Private Training Samples from Weight Updates in Federated Learning [0.0]
We evaluate a novel attack method within regular federated learning which we name the First Dense Layer Attack (Fidel).
We show how to recover on average twenty out of thirty private data samples from a client's model update employing a fully connected neural network.
arXiv Detail & Related papers (2021-01-01T04:00:23Z)
- Decentralised Learning from Independent Multi-Domain Labels for Person Re-Identification [69.29602103582782]
Deep learning has been successful for many computer vision tasks due to the availability of shared and centralised large-scale training data.
However, increasing awareness of privacy concerns poses new challenges to deep learning, especially for person re-identification (Re-ID).
We propose a novel paradigm called Federated Person Re-Identification (FedReID) to construct a generalisable global model (a central server) by simultaneously learning with multiple privacy-preserved local models (local clients).
This client-server collaborative learning process is iteratively performed under privacy control, enabling FedReID to realise decentralised learning without sharing distributed data nor collecting any
arXiv Detail & Related papers (2020-06-07T13:32:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.