Fishing for User Data in Large-Batch Federated Learning via Gradient
Magnification
- URL: http://arxiv.org/abs/2202.00580v1
- Date: Tue, 1 Feb 2022 17:26:11 GMT
- Title: Fishing for User Data in Large-Batch Federated Learning via Gradient
Magnification
- Authors: Yuxin Wen, Jonas Geiping, Liam Fowl, Micah Goldblum, Tom Goldstein
- Abstract summary: Federated learning (FL) has rapidly risen in popularity due to its promise of privacy and efficiency.
Previous works have exposed privacy vulnerabilities in the FL pipeline by recovering user data from gradient updates.
We introduce a new strategy that dramatically elevates existing attacks to operate on batches of arbitrarily large size.
- Score: 65.33308059737506
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) has rapidly risen in popularity due to its promise of
privacy and efficiency. Previous works have exposed privacy vulnerabilities in
the FL pipeline by recovering user data from gradient updates. However,
existing attacks fail to address realistic settings because they either 1)
require a `toy' setting with very small batch sizes, or 2) require unrealistic
and conspicuous architecture modifications. We introduce a new strategy that
dramatically elevates existing attacks to operate on batches of arbitrarily
large size, and without architectural modifications. Our model-agnostic
strategy only requires modifications to the model parameters sent to the user,
which is a realistic threat model in many scenarios. We demonstrate the
strategy in challenging large-scale settings, obtaining high-fidelity data
extraction in both cross-device and cross-silo federated learning.
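The abstract does not spell out the mechanism, so the sketch below only illustrates the general idea of magnifying one class's contribution through server-chosen parameters: a malicious server zeroes the classification-layer rows of all non-target classes and raises their biases, so samples from other classes send almost no gradient back through the network. The layer name model.fc, the bias constant, and the overall recipe are assumptions for illustration, not the authors' exact procedure.

    import torch
    import torch.nn as nn

    def fish_for_class(model: nn.Module, target_class: int, large_bias: float = 10.0) -> nn.Module:
        """Illustrative sketch of a class-fishing style parameter modification
        (layer name and constants are assumptions, not the paper's exact recipe)."""
        final = model.fc  # assumed name of the final classification layer
        with torch.no_grad():
            keep = torch.zeros(final.out_features, dtype=torch.bool)
            keep[target_class] = True
            # Zeroed non-target rows stop seeing features, and large non-target
            # biases keep the target-class probability near zero for every sample,
            # so only samples actually labelled with the target class propagate a
            # noticeable gradient into the rest of the network.
            final.weight[~keep] = 0.0
            if final.bias is not None:
                final.bias[~keep] = large_bias
        return model

Under these assumptions, the batch-averaged update the client returns is dominated by the few target-class samples (up to the known 1/batch-size scaling) and could then be handed to an off-the-shelf single-sample gradient inversion routine.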
Related papers
- Open-Set Deepfake Detection: A Parameter-Efficient Adaptation Method with Forgery Style Mixture [58.60915132222421]
We introduce an approach that is both general and parameter-efficient for face forgery detection.
We design a forgery-style mixture formulation that augments the diversity of forgery source domains.
We show that the designed model achieves state-of-the-art generalizability with significantly reduced trainable parameters.
arXiv Detail & Related papers (2024-08-23T01:53:36Z)
- PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data.
The transmitted model updates can potentially leak sensitive user information, and the lack of central control over the local training process leaves the global model susceptible to malicious manipulation of model updates.
We develop a general framework PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proof, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
arXiv Detail & Related papers (2024-07-12T03:18:08Z)
- UIFV: Data Reconstruction Attack in Vertical Federated Learning [5.404398887781436]
Vertical Federated Learning (VFL) facilitates collaborative machine learning without the need for participants to share raw private data.
Recent studies have revealed privacy risks where adversaries might reconstruct sensitive features through data leakage during the learning process.
Our work exposes severe privacy vulnerabilities within VFL systems that pose real threats to practical VFL applications.
arXiv Detail & Related papers (2024-06-18T13:18:52Z)
- Deep Leakage from Model in Federated Learning [6.001369927772649]
We present two novel frameworks to demonstrate that transmitting model weights is likely to leak private local data of clients.
We also introduce two defenses to the proposed attacks and evaluate their protection effects.
arXiv Detail & Related papers (2022-06-10T05:56:00Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients have raised concerns about whether FL can actually prevent the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
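For context, the gradient inversion attacks referenced in this entry typically follow a deep-leakage-from-gradients style recipe: optimize a dummy input until the gradient it induces matches the observed update. The sketch below is that generic recipe, not the specific baseline attack contributed by this paper; the known label, the optimizer, and the distance measure are assumptions.

    import torch
    import torch.nn.functional as F

    def invert_gradient(model, observed_grads, label, input_shape, steps=2000, lr=0.1):
        # Generic gradient-inversion loop (a sketch, not this paper's baseline):
        # optimize a dummy input so the gradient it induces matches the gradient
        # observed from a client. Assumes the label is already known (labels are
        # often recovered analytically in the literature).
        dummy_x = torch.randn(1, *input_shape, requires_grad=True)
        optimizer = torch.optim.Adam([dummy_x], lr=lr)
        target = torch.tensor([label])

        for _ in range(steps):
            optimizer.zero_grad()
            loss = F.cross_entropy(model(dummy_x), target)
            grads = torch.autograd.grad(loss, list(model.parameters()), create_graph=True)
            # L2 distance between induced and observed gradients; cosine similarity
            # is a common alternative in follow-up attacks.
            grad_diff = sum(((g - o) ** 2).sum() for g, o in zip(grads, observed_grads))
            grad_diff.backward()
            optimizer.step()

        return dummy_x.detach()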
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
- Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models [56.0250919557652]
Federated learning has quickly gained popularity with its promises of increased user privacy and efficiency.
Previous attacks on user privacy have been limited in scope and do not scale to gradient updates aggregated over even a handful of data points.
We introduce a new threat model based on minimal but malicious modifications of the shared model architecture.
arXiv Detail & Related papers (2021-10-25T15:52:06Z)
- PRECODE - A Generic Model Extension to Prevent Deep Gradient Leakage [0.8029049649310213]
Collaborative training of neural networks leverages distributed data by exchanging gradient information between different clients.
Gradient perturbation techniques have been proposed to enhance privacy, but they come at the cost of reduced model performance, increased convergence time, or increased data demand.
We introduce PRECODE, a PRivacy EnhanCing mODulE that can be used as a generic extension for arbitrary model architectures.
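The entry does not describe the module's internals; as a rough illustration of what a generic, architecture-agnostic extension inserted in front of the output layer can look like, here is a simplified variational-bottleneck style stand-in. Dimensions, placement, and the omission of any extra loss terms are assumptions, not PRECODE's actual design.

    import torch
    import torch.nn as nn

    class StochasticBottleneck(nn.Module):
        # Simplified stand-in for a privacy-enhancing module placed before the
        # output layer; illustrative only, not PRECODE's actual design.
        def __init__(self, dim: int, latent: int = 64):
            super().__init__()
            self.to_mu = nn.Linear(dim, latent)
            self.to_logvar = nn.Linear(dim, latent)
            self.decode = nn.Linear(latent, dim)

        def forward(self, h: torch.Tensor) -> torch.Tensor:
            mu, logvar = self.to_mu(h), self.to_logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterized sample
            return self.decode(z)  # stochastic re-encoding decorrelates gradients from raw inputs

    # Hypothetical drop-in usage for a classifier with a 512-dim penultimate feature:
    # model.fc = nn.Sequential(StochasticBottleneck(512), model.fc)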
arXiv Detail & Related papers (2021-08-10T14:43:17Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
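For intuition about the secure aggregation this entry builds on, a textbook pairwise-masking scheme is sketched below: clients add masks that cancel in the sum, so the server learns only the aggregate. This is a generic illustration, not RoFL's protocol, and it omits key agreement, dropout recovery, and the attestable robustness checks RoFL contributes.

    import numpy as np

    def masked_updates(updates: list[np.ndarray], seed: int = 0) -> list[np.ndarray]:
        # Textbook pairwise-masking sketch (not RoFL's protocol): each client pair
        # (i, j) with i < j shares a pseudorandom mask that client i adds and
        # client j subtracts, so individual updates are hidden but the sum is exact.
        n = len(updates)
        masked = [u.astype(np.float64).copy() for u in updates]
        for i in range(n):
            for j in range(i + 1, n):
                rng = np.random.default_rng(hash((seed, i, j)) % (2**32))
                mask = rng.standard_normal(updates[0].shape)
                masked[i] += mask
                masked[j] -= mask
        return masked

    # The server only sees the masked vectors; their sum equals the true sum:
    # np.allclose(sum(masked_updates(us)), sum(us)) -> True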
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.