Gradient Inversion with Generative Image Prior
- URL: http://arxiv.org/abs/2110.14962v1
- Date: Thu, 28 Oct 2021 09:04:32 GMT
- Title: Gradient Inversion with Generative Image Prior
- Authors: Jinwoo Jeon and Jaechang Kim and Kangwook Lee and Sewoong Oh and
Jungseul Ok
- Abstract summary: Federated Learning (FL) is a distributed learning framework in which the local data never leave clients' devices, preserving privacy.
We show that data privacy can be easily breached by exploiting a generative model pretrained on the data distribution.
We experimentally show that the prior, in the form of a generative model, is learnable from iterative interactions in FL.
- Score: 37.03737843861339
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is a distributed learning framework in which
the local data never leave clients' devices, preserving privacy, and the server
trains models on the data by accessing only the gradients computed from them.
Without further privacy mechanisms such as differential privacy, this leaves the
system vulnerable to an attacker who inverts those gradients to reveal clients'
sensitive data. However, a gradient alone is often insufficient to reconstruct
the user data without any prior knowledge. By exploiting a generative model
pretrained on the data distribution, we demonstrate that data privacy can be
easily breached. Further, when such prior knowledge is unavailable, we
investigate the possibility of learning the prior from a sequence of gradients
seen in the process of FL training. We experimentally show that the prior, in
the form of a generative model, is learnable from iterative interactions in FL.
Our findings strongly suggest that additional mechanisms are necessary to
prevent privacy leakage in FL.
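At its core, the attack is a search over the latent space of a pretrained generator for a code whose image reproduces the observed gradient. Below is a minimal PyTorch sketch of that loop under simplifying assumptions (a single image, a known label, squared-error gradient matching); the names and hyperparameters are illustrative, not the authors' implementation.

```python
import torch

def invert_gradient_with_prior(model, loss_fn, observed_grads, generator,
                               label, latent_dim=128, steps=1000, lr=0.01):
    # Search the generator's latent space for a code z whose image induces
    # a model gradient close to the one the server observed.
    z = torch.randn(1, latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        x = generator(z)                 # candidate image drawn from the prior
        loss = loss_fn(model(x), label)  # the client's training loss
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # Gradient-matching objective: squared distance to the observed gradients.
        match = sum(((g - og) ** 2).sum() for g, og in zip(grads, observed_grads))
        match.backward()
        optimizer.step()
    return generator(z).detach()
```

Because the search runs in latent space rather than pixel space, every candidate is already a plausible image, which is what makes the prior so effective.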
Related papers
- Privacy Attack in Federated Learning is Not Easy: An Experimental Study [5.065947993017158]
Federated learning (FL) is an emerging distributed machine learning paradigm proposed for privacy preservation.
Recent studies have indicated that FL cannot entirely guarantee privacy protection.
It remains uncertain whether privacy attacks on FL are effective in realistic federated environments.
arXiv Detail & Related papers (2024-09-28T10:06:34Z)
- Federated Learning Privacy: Attacks, Defenses, Applications, and Policy Landscape - A Survey [27.859861825159342]
Deep learning has shown incredible potential across a vast array of tasks.
Recent concerns on privacy have further highlighted challenges for accessing such data.
Federated learning has emerged as an important privacy-preserving technology.
arXiv Detail & Related papers (2024-05-06T16:55:20Z)
- Ungeneralizable Examples [70.76487163068109]
Current approaches to creating unlearnable data involve incorporating small, specially designed perturbations.
We extend the concept of unlearnable data to conditional data learnability and introduce UnGeneralizable Examples (UGEs).
UGEs exhibit learnability for authorized users while maintaining unlearnability for potential hackers.
arXiv Detail & Related papers (2024-04-22T09:29:14Z)
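The "specially designed" noise that this line of work builds on can be illustrated with error-minimizing perturbations: bounded noise optimized so the perturbed sample yields near-zero training loss and hence little learnable signal. The sketch below shows that generic idea, not the UGE construction itself; all names are assumptions.

```python
import torch

def error_minimizing_noise(model, loss_fn, x, y, eps=8 / 255, steps=20, lr=0.1):
    # Craft a small bounded perturbation that *minimizes* the training loss,
    # so the perturbed sample carries almost no signal left to learn from.
    # Generic unlearnable-noise sketch, not the UGE method specifically.
    delta = torch.zeros_like(x, requires_grad=True)
    optimizer = torch.optim.SGD([delta], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss_fn(model(x + delta), y).backward()
        optimizer.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the noise imperceptible
    return (x + delta).detach()
```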
- Federated Learning with Privacy-Preserving Ensemble Attention Distillation [63.39442596910485]
Federated Learning (FL) is a machine learning paradigm where many local nodes collaboratively train a central model while keeping the training data decentralized.
We propose a privacy-preserving FL framework leveraging unlabeled public data for one-way offline knowledge distillation.
Our technique uses decentralized and heterogeneous local data like existing FL approaches, but more importantly, it significantly reduces the risk of privacy leakage.
arXiv Detail & Related papers (2022-10-16T06:44:46Z)
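The data flow of one-way offline distillation can be sketched as follows: client models act as teachers that label a shared unlabeled public dataset, and the server trains a student on their averaged soft predictions, so neither private data nor client gradients are exchanged. This is a generic sketch of that flow, not the paper's ensemble-attention scheme; all names are assumptions.

```python
import torch
import torch.nn.functional as F

def distill_from_ensemble(student, teachers, public_loader, temperature=3.0, lr=1e-3):
    # One-way offline distillation: only soft predictions on public data
    # travel from the teachers (client models) to the student (server model).
    optimizer = torch.optim.Adam(student.parameters(), lr=lr)
    for x in public_loader:  # unlabeled public batches
        with torch.no_grad():
            # Average the teachers' temperature-softened predictions.
            soft = torch.stack(
                [F.softmax(t(x) / temperature, dim=1) for t in teachers]
            ).mean(dim=0)
        log_p = F.log_softmax(student(x) / temperature, dim=1)
        loss = F.kl_div(log_p, soft, reduction="batchmean")
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return student
```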
- Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage [9.83989883339971]
The Federated Learning (FL) framework brings privacy benefits to distributed learning systems.
Recent studies have revealed that private information can still be leaked through shared information.
We propose a new type of leakage, i.e., Generative Gradient Leakage (GGL).
arXiv Detail & Related papers (2022-03-29T15:59:59Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
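The summary does not spell out FedReg's mechanism, so as a stand-in illustration of alleviating forgetting in local training, the sketch below anchors local weights to the global model with a proximal penalty (in the style of FedProx); this is an assumed, generic technique, not FedReg itself.

```python
import torch

def local_update_with_anchor(model, global_model, loader, loss_fn, mu=0.1, lr=0.01):
    # Penalize drift from the global weights during local training,
    # a simple proximal-style way to limit forgetting of global knowledge.
    # Illustrative stand-in (FedProx-like), not FedReg's actual method.
    anchors = [p.detach().clone() for p in global_model.parameters()]
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for x, y in loader:
        optimizer.zero_grad()
        task_loss = loss_fn(model(x), y)
        prox = sum(((p - a) ** 2).sum() for p, a in zip(model.parameters(), anchors))
        (task_loss + 0.5 * mu * prox).backward()
        optimizer.step()
    return model
```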
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
- BEAS: Blockchain Enabled Asynchronous & Secure Federated Machine Learning [0.0]
We present BEAS, the first blockchain-based framework for N-party Federated Learning.
It provides strict privacy guarantees of training data using gradient pruning.
Anomaly detection protocols are used to minimize the risk of data-poisoning attacks.
We also define a novel protocol to prevent premature convergence in heterogeneous learning environments.
arXiv Detail & Related papers (2022-02-06T17:11:14Z)
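Gradient pruning as a privacy defense can be sketched as keeping only the largest-magnitude entries of each shared gradient tensor and zeroing the rest, which limits how much a single update reveals. Below is a generic top-k sketch in the spirit of BEAS's defense, not its actual protocol.

```python
import torch

def prune_gradients(grads, keep_ratio=0.1):
    # Keep only the top `keep_ratio` fraction of entries (by magnitude)
    # in each gradient tensor; zero everything else before sharing.
    pruned = []
    for g in grads:
        k = max(1, int(keep_ratio * g.numel()))
        threshold = g.abs().flatten().topk(k).values.min()  # magnitude cutoff
        pruned.append(torch.where(g.abs() >= threshold, g, torch.zeros_like(g)))
    return pruned
```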
- Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models [56.0250919557652]
Federated learning has quickly gained popularity with its promises of increased user privacy and efficiency.
Previous attacks on user privacy have been limited in scope and do not scale to gradient updates aggregated over even a handful of data points.
We introduce a new threat model based on minimal but malicious modifications of the shared model architecture.
arXiv Detail & Related papers (2021-10-25T15:52:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.