Gradient Obfuscation Gives a False Sense of Security in Federated
Learning
- URL: http://arxiv.org/abs/2206.04055v1
- Date: Wed, 8 Jun 2022 13:01:09 GMT
- Title: Gradient Obfuscation Gives a False Sense of Security in Federated
Learning
- Authors: Kai Yue, Richeng Jin, Chau-Wai Wong, Dror Baron, Huaiyu Dai
- Abstract summary: We present a new data reconstruction attack framework targeting the image classification task in federated learning.
Contrary to prior studies, we argue that privacy enhancement should not be treated as a byproduct of gradient compression.
- Score: 41.36621813381792
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning has been proposed as a privacy-preserving machine learning
framework that enables multiple clients to collaborate without sharing raw
data. However, client privacy protection is not guaranteed by design in this
framework. Prior work has shown that the gradient sharing strategies in
federated learning can be vulnerable to data reconstruction attacks. In
practice, though, clients may not transmit raw gradients considering the high
communication cost or due to privacy enhancement requirements. Empirical
studies have demonstrated that gradient obfuscation, including intentional
obfuscation via gradient noise injection and unintentional obfuscation via
gradient compression, can provide more privacy protection against
reconstruction attacks. In this work, we present a new data reconstruction
attack framework targeting the image classification task in federated learning.
We show that commonly adopted gradient postprocessing procedures, such as
gradient quantization, gradient sparsification, and gradient perturbation, may
give a false sense of security in federated learning. Contrary to prior
studies, we argue that privacy enhancement should not be treated as a byproduct
of gradient compression. Additionally, we design a new method under the
proposed framework to reconstruct the image at the semantic level. We quantify
the semantic privacy leakage and compare it with conventional measures based on image
similarity scores. Our comparisons challenge the image data leakage evaluation
schemes in the literature. The results emphasize the importance of revisiting
and redesigning the privacy protection mechanisms for client data in existing
federated learning algorithms.
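The gradient postprocessing procedures named in the abstract (quantization, sparsification, and perturbation) are applied by each client to its local gradient before upload. The following is a minimal illustrative sketch, not the paper's code, of what these three obfuscation steps might look like; the function names and parameter values are assumptions chosen for illustration.

```python
# Illustrative sketch of common gradient obfuscation steps a federated
# client might apply before uploading its gradient (not from the paper).
import numpy as np

def quantize(grad: np.ndarray, num_bits: int = 8) -> np.ndarray:
    """Uniformly quantize the gradient to 2**num_bits levels, then dequantize."""
    levels = 2 ** num_bits - 1
    g_min, g_max = grad.min(), grad.max()
    scale = (g_max - g_min) / levels if g_max > g_min else 1.0
    return np.round((grad - g_min) / scale) * scale + g_min

def sparsify_topk(grad: np.ndarray, keep_ratio: float = 0.1) -> np.ndarray:
    """Keep only the largest-magnitude entries (top-k sparsification)."""
    k = max(1, int(keep_ratio * grad.size))
    flat = grad.ravel().copy()
    threshold = np.sort(np.abs(flat))[-k]
    flat[np.abs(flat) < threshold] = 0.0
    return flat.reshape(grad.shape)

def perturb_gaussian(grad: np.ndarray, clip_norm: float = 1.0,
                     noise_std: float = 0.01) -> np.ndarray:
    """Clip the gradient norm and add Gaussian noise (DP-style perturbation)."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + np.random.normal(0.0, noise_std, size=grad.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    g = rng.normal(size=(1000,)).astype(np.float32)
    for name, fn in [("quantized", quantize),
                     ("sparsified", sparsify_topk),
                     ("perturbed", perturb_gaussian)]:
        print(name, "L2 distortion:", float(np.linalg.norm(fn(g) - g)))
```

Note that the paper's central claim is that such postprocessing alone does not reliably prevent reconstruction; the sketch only clarifies what is being obfuscated, not how to defend against the attack.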
Related papers
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods rely on directly centralizing training data.
The paper proposes a novel federated face forgery detection framework with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z) - Understanding Deep Gradient Leakage via Inversion Influence Functions [53.1839233598743]
Deep Gradient Leakage (DGL) is a highly effective attack that recovers private training images from gradient vectors.
We propose a novel Inversion Influence Function (I$^2$F) that establishes a closed-form connection between the recovered images and the private gradients.
We empirically demonstrate that I$^2$F effectively approximates DGL across different model architectures, datasets, attack implementations, and perturbation-based defenses.
arXiv Detail & Related papers (2023-09-22T17:26:24Z) - Concealing Sensitive Samples against Gradient Leakage in Federated
Learning [41.43099791763444]
Federated Learning (FL) is a distributed learning paradigm that enhances users' privacy by eliminating the need for clients to share raw, private data with the server.
Recent studies expose the vulnerability of FL to model inversion attacks, where adversaries reconstruct users' private data by eavesdropping on the shared gradient information.
We present a simple, yet effective defense strategy that obfuscates the gradients of the sensitive data with concealed samples.
arXiv Detail & Related papers (2022-09-13T04:19:35Z) - Defense Against Gradient Leakage Attacks via Learning to Obscure Data [48.67836599050032]
Federated learning is considered an effective privacy-preserving learning mechanism.
In this paper, we propose a new defense method to protect the privacy of clients' data by learning to obscure data.
arXiv Detail & Related papers (2022-06-01T21:03:28Z) - Aggregating Gradients in Encoded Domain for Federated Learning [19.12395694047359]
Malicious attackers and an honest-but-curious server can steal private client data from uploaded gradients in federated learning.
We propose the FedAGE framework, which enables the server to aggregate gradients in an encoded domain without accessing the raw gradients of any single client.
arXiv Detail & Related papers (2022-05-26T08:20:19Z) - Auditing Privacy Defenses in Federated Learning via Generative Gradient
Leakage [9.83989883339971]
The Federated Learning (FL) framework brings privacy benefits to distributed learning systems.
Recent studies have revealed that private information can still be leaked through shared information.
We propose a new type of leakage, i.e., Generative Gradient Leakage (GGL).
arXiv Detail & Related papers (2022-03-29T15:59:59Z) - Federated Learning for Face Recognition with Gradient Correction [52.896286647898386]
In this work, we introduce a framework, FedGC, to tackle federated learning for face recognition.
We show that FedGC constitutes a valid loss function similar to standard softmax.
arXiv Detail & Related papers (2021-12-14T09:19:29Z) - Understanding Clipping for Federated Learning: Convergence and
Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that the clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide the convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
arXiv Detail & Related papers (2021-06-25T14:47:19Z) - FedBoosting: Federated Learning with Gradient Protected Boosting for
Text Recognition [7.988454173034258]
The Federated Learning (FL) framework allows a shared model to be learned collaboratively without data being centralized or shared among data owners.
We show in this paper that the generalization ability of the joint model is poor on Non-Independent and Non-Identically Distributed (Non-IID) data.
We propose a novel boosting algorithm for FL to address both the generalization and gradient leakage issues.
arXiv Detail & Related papers (2020-07-14T18:47:23Z) - Inverting Gradients -- How easy is it to break privacy in federated
learning? [13.632998588216523]
Federated learning is designed to collaboratively train a neural network coordinated by a server.
Each user receives the current weights of the network and in turn sends parameter updates (gradients) based on local data.
Previous attacks have provided a false sense of security by succeeding only in contrived settings.
We show that it is actually possible to faithfully reconstruct images at high resolution from knowledge of their parameter gradients; a minimal sketch of this gradient exchange follows the list.
arXiv Detail & Related papers (2020-03-31T09:35:02Z)
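For context on the attack surface discussed throughout this list, the last entry describes the basic gradient-sharing round of federated learning: each client receives the current weights and returns an update computed on local data, which the server aggregates. Below is a minimal hypothetical sketch of that exchange (names such as local_gradient and fedavg_round are illustrative and do not come from any of the papers); the per-client gradients collected during aggregation are exactly what reconstruction attacks attempt to invert.

```python
# Minimal sketch of one FedAvg-style round for a simple linear model with a
# squared-error loss; illustrative only, not any paper's implementation.
import numpy as np

def local_gradient(weights: np.ndarray, x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Gradient of 0.5 * ||x @ w - y||^2 / n with respect to w on one client's data."""
    residual = x @ weights - y
    return x.T @ residual / len(y)

def fedavg_round(weights: np.ndarray, clients, lr: float = 0.1) -> np.ndarray:
    """One communication round: clients send gradients, the server averages them.

    The individual gradients gathered here are what an honest-but-curious
    server or an eavesdropper could feed into a data reconstruction attack.
    """
    grads = [local_gradient(weights, x, y) for (x, y) in clients]
    return weights - lr * np.mean(grads, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = np.zeros(5)
    clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(3)]
    for _ in range(10):
        w = fedavg_round(w, clients)
    print("updated weights:", w)
```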