Recovering Labels from Local Updates in Federated Learning
- URL: http://arxiv.org/abs/2405.00955v1
- Date: Thu, 2 May 2024 02:33:15 GMT
- Title: Recovering Labels from Local Updates in Federated Learning
- Authors: Huancheng Chen, Haris Vikalo
- Abstract summary: Gradient inversion (GI) attacks present a threat to the privacy of clients in federated learning (FL).
We present a novel label recovery scheme, Recovering Labels from Local Updates (RLU).
RLU achieves high performance even in realistic real-world settings where clients in an FL system run multiple local epochs and train on heterogeneous data.
- Score: 14.866327821524854
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gradient inversion (GI) attacks present a threat to the privacy of clients in federated learning (FL) by aiming to enable reconstruction of the clients' data from communicated model updates. A number of such techniques attempt to accelerate data recovery by first reconstructing labels of the samples used in local training. However, existing label extraction methods make strong assumptions that typically do not hold in realistic FL settings. In this paper we present a novel label recovery scheme, Recovering Labels from Local Updates (RLU), which provides near-perfect accuracy when attacking untrained (most vulnerable) models. More significantly, RLU achieves high performance even in realistic real-world settings where the clients in an FL system run multiple local epochs, train on heterogeneous data, and deploy various optimizers to minimize different objective functions. Specifically, RLU estimates labels by solving a least-squares problem that emerges from the analysis of the correlation between labels of the data points used in a training round and the resulting update of the output layer. The experimental results on several datasets, architectures, and data heterogeneity scenarios demonstrate that the proposed method consistently outperforms existing baselines, and help improve the quality of the reconstructed images in GI attacks in terms of both PSNR and LPIPS.
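The core observation behind label recovery from the output layer can be illustrated in a few lines. For softmax cross-entropy, the gradient of the output-layer bias over a batch equals the mean softmax probability minus the empirical label frequency, so the per-class label counts fall out of a linear relation. The numpy sketch below demonstrates that relation on a synthetic single-step update; it is a minimal illustration of the idea, not the authors' RLU algorithm (which handles multiple local epochs and heterogeneous optimizers via a least-squares formulation), and all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
C, N, d = 5, 32, 10                      # classes, batch size, feature dim

W = rng.normal(size=(C, d)) * 0.1        # output-layer weights (untrained model)
b = np.zeros(C)
x = rng.normal(size=(N, d))              # client's private batch
y = rng.integers(0, C, size=N)           # private labels the attacker wants

# Client side: one softmax cross-entropy gradient on the batch.
logits = x @ W.T + b
p = np.exp(logits - logits.max(1, keepdims=True))
p /= p.sum(1, keepdims=True)             # softmax probabilities
onehot = np.eye(C)[y]
grad_b = (p - onehot).mean(0)            # observed bias gradient (the "update")

# Attacker side: grad_b = mean(p) - counts / N, so
#   counts = N * (mean(p) - grad_b).
# In a real attack mean(p) is unknown and must be estimated
# (e.g. p ~ 1/C for an untrained model); here we use it directly
# to show the relation is exact.
est_counts = np.rint(N * (p.mean(0) - grad_b)).astype(int)
true_counts = np.bincount(y, minlength=C)
```

In practice the attacker only observes an aggregated update after several local steps, which is why RLU replaces this closed-form inversion with a least-squares estimate.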
Related papers
- Attribute Inference Attacks for Federated Regression Tasks [14.152503562997662]
Federated Learning (FL) enables clients to collaboratively train a global machine learning model while keeping their data localized.
Recent studies have revealed that the training phase of FL is vulnerable to reconstruction attacks.
We propose novel model-based AIAs specifically designed for regression tasks in FL environments.
arXiv Detail & Related papers (2024-11-19T18:06:06Z) - An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z) - Federated Learning with Anomaly Detection via Gradient and Reconstruction Analysis [2.28438857884398]
We introduce a novel framework that synergizes gradient-based analysis with autoencoder-driven data reconstruction to detect and mitigate poisoned data with unprecedented precision.
Our method outperforms existing solutions by 15% in anomaly detection accuracy while maintaining a minimal false positive rate.
Our work paves the way for future advancements in distributed learning security.
arXiv Detail & Related papers (2024-03-15T03:54:45Z) - FedImpro: Measuring and Improving Client Update in Federated Learning [77.68805026788836]
Federated Learning (FL) models often experience client drift caused by heterogeneous data.
We present an alternative perspective on client drift and aim to mitigate it by generating improved local models.
arXiv Detail & Related papers (2024-02-10T18:14:57Z) - Tackling the Unlimited Staleness in Federated Learning with Intertwined Data and Device Heterogeneities [4.9851737525099225]
Federated Learning (FL) is often affected by both data and device heterogeneities.
In this paper, we present a new FL framework that leverages the gradient inversion technique for such conversion.
Our approach can significantly improve the trained model accuracy by up to 20% and speed up the FL training progress by up to 35%.
arXiv Detail & Related papers (2023-09-24T03:19:40Z) - Approximate and Weighted Data Reconstruction Attack in Federated Learning [1.802525429431034]
Federated learning (FL) enables clients to collaborate on building a machine learning model without sharing their private data.
Recent data reconstruction attacks demonstrate that an attacker can recover clients' training data based on the parameters shared in FL.
We propose an approximation method, which makes attacking FedAvg scenarios feasible by generating the intermediate model updates of the clients' local training processes.
arXiv Detail & Related papers (2023-08-13T17:40:56Z) - Boosting Differentiable Causal Discovery via Adaptive Sample Reweighting [62.23057729112182]
Differentiable score-based causal discovery methods learn a directed acyclic graph from observational data.
We propose a model-agnostic framework to boost causal discovery performance by dynamically learning the adaptive weights for the Reweighted Score function, ReScore.
arXiv Detail & Related papers (2023-03-06T14:49:59Z) - Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z) - AGIC: Approximate Gradient Inversion Attack on Federated Learning [7.289310150187218]
Federated learning is a private-by-design distributed learning paradigm where clients train local models on their own data before a central server aggregates their local updates to compute a global model.
Recent reconstruction attacks apply a gradient inversion optimization on the gradient update of a single minibatch to reconstruct the private data used by clients during training.
We propose AGIC, a novel Approximate Gradient Inversion Attack that efficiently and effectively reconstructs images from both model or gradient updates.
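To see why a single-sample gradient leaks the input, consider a dense layer with a scalar output and MSE loss: the weight gradient is the residual times the input, and the bias gradient is the residual itself, so the input factors out exactly. The sketch below demonstrates this well-known closed-form leakage; it is not AGIC itself (which approximates gradients across multiple epochs and weights layers), and the setup is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
W = rng.normal(size=(1, d))
b = np.zeros(1)
x = rng.normal(size=d)                   # private training sample
y = 3.0                                  # private target

# Client computes a gradient on (x, y) with MSE loss L = (Wx + b - y)^2 / 2.
r = W @ x + b - y                        # residual
grad_W = r[:, None] * x[None, :]         # dL/dW = r * x^T  (rank-1)
grad_b = r                               # dL/db = r

# Attacker: divide the rank-1 weight gradient by the bias gradient
# to recover the input exactly (assuming the residual is nonzero).
x_rec = (grad_W / grad_b[:, None]).ravel()
```

Gradient inversion attacks such as AGIC generalize this intuition to deep networks by optimizing dummy inputs so that their gradients match the observed update.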
arXiv Detail & Related papers (2022-04-28T21:15:59Z) - Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z) - Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.