Towards General Deep Leakage in Federated Learning
- URL: http://arxiv.org/abs/2110.09074v1
- Date: Mon, 18 Oct 2021 07:49:52 GMT
- Title: Towards General Deep Leakage in Federated Learning
- Authors: Jiahui Geng, Yongli Mou, Feifei Li, Qing Li, Oya Beyan, Stefan Decker,
Chunming Rong
- Abstract summary: Federated learning (FL) improves the performance of the global model by sharing and aggregating local models, rather than local data, to protect users' privacy.
Some research has demonstrated that an attacker can still recover private data based on the shared gradient information.
We propose methods that can reconstruct the training data from shared gradients or weights, corresponding to the FedSGD and FedAvg usage scenarios.
- Score: 13.643899029738474
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unlike traditional central training, federated learning (FL) improves the
performance of the global model by sharing and aggregating local models rather
than local data to protect the users' privacy. Although this training approach
appears secure, some research has demonstrated that an attacker can still
recover private data based on the shared gradient information. This on-the-fly
reconstruction attack deserves in-depth study because it can occur at any stage
of training, from the first round to the last; it requires no auxiliary dataset
and no additionally trained models. We relax several unrealistic assumptions
and limitations to apply this reconstruction attack to a broader range of
scenarios. We propose methods
that can reconstruct the training data from shared gradients or weights,
corresponding to the FedSGD and FedAvg usage scenarios, respectively. We
propose a zero-shot approach to restore labels even if there are duplicate
labels in the batch. We study the relationship between the label and image
restoration. We find that image restoration fails even if there is only one
incorrectly inferred label in the batch; we also find that when batch images
have the same label, the corresponding image is restored as a fusion of that
class of images. Our approaches are evaluated on classic image benchmarks,
including CIFAR-10 and ImageNet. The batch size, image quality, and the
adaptability of the label distribution of our approach exceed those of
GradInversion, the state-of-the-art.
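The abstract describes reconstructing training data from shared gradients. As a minimal, self-contained illustration of why gradients leak data (not the paper's actual method, which optimizes reconstructions over full networks): for a single fully connected layer with softmax cross-entropy loss and one input, the weight gradient is the outer product of the bias gradient and the input, so the input can be recovered exactly by division. All function names below are illustrative:

```python
import math

def softmax(z):
    # numerically stable softmax
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def fc_gradients(x, W, b, label):
    # forward pass of one linear layer: z = W x + b, cross-entropy on softmax(z)
    z = [sum(W[i][j] * x[j] for j in range(len(x))) + b[i] for i in range(len(b))]
    p = softmax(z)
    # dL/dz_i = p_i - y_i for the one-hot label y
    dz = [p[i] - (1.0 if i == label else 0.0) for i in range(len(b))]
    dW = [[dz[i] * x[j] for j in range(len(x))] for i in range(len(b))]  # outer(dz, x)
    db = list(dz)
    return dW, db

def recover_input(dW, db):
    # dW[i] = db[i] * x, so any row with db[i] != 0 reveals x exactly
    i = max(range(len(db)), key=lambda k: abs(db[k]))
    return [dW[i][j] / db[i] for j in range(len(dW[i]))]
```

Because dL/dz never vanishes entirely for softmax cross-entropy, at least one row is always usable; this exact leakage through fully connected layers is one building block that gradient inversion attacks generalize to deeper models and larger batches.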
Related papers
- Review Learning: Advancing All-in-One Ultra-High-Definition Image Restoration Training Method [7.487270862599671]
We propose a new training paradigm for general image restoration models, which we name Review Learning.
This approach begins with sequential training of an image restoration model on several degraded datasets, combined with a review mechanism.
We design a lightweight all-purpose image restoration network that can efficiently reason about degraded images with 4K resolution on a single consumer-grade GPU.
arXiv Detail & Related papers (2024-08-13T08:08:45Z) - Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using language-guided generated counterfactual images.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
arXiv Detail & Related papers (2024-06-19T08:07:14Z) - Data Attribution for Text-to-Image Models by Unlearning Synthesized Images [71.23012718682634]
The goal of data attribution for text-to-image models is to identify the training images that most influence the generation of a new image.
We propose a new approach that efficiently identifies highly-influential images.
arXiv Detail & Related papers (2024-06-13T17:59:44Z) - AFGI: Towards Accurate and Fast-convergent Gradient Inversion Attack in Federated Learning [13.104809524506132]
Federated learning (FL) empowers privacy preservation in model training by exposing only users' model gradients.
Yet, FL users are susceptible to gradient inversion attacks (GIAs) which can reconstruct ground-truth training data.
We present an Accurate and Fast-convergent Gradient Inversion attack algorithm, called AFGI, with two components.
arXiv Detail & Related papers (2024-03-13T09:48:04Z) - MGIC: A Multi-Label Gradient Inversion Attack based on Canny Edge Detection on Federated Learning [6.721419921063687]
We present MGIC, a novel gradient inversion strategy based on Canny edge detection, for both multi-label and single-label datasets.
Our proposed strategy produces better visual inversion results than the most widely used strategies, while saving more than 78% of the time cost on the ImageNet dataset.
arXiv Detail & Related papers (2024-03-13T06:34:49Z) - Towards Eliminating Hard Label Constraints in Gradient Inversion Attacks [88.12362924175741]
Gradient inversion attacks aim to reconstruct local training data from intermediate gradients exposed in the federated learning framework.
Previous methods, starting from reconstructing a single data point and then relaxing the single-image limit to batch level, are only tested under hard label constraints.
We are the first to initiate a novel algorithm to simultaneously recover the ground-truth augmented label and the input feature of the last fully-connected layer from single-input gradients.
arXiv Detail & Related papers (2024-02-05T15:51:34Z) - Rectifying the Shortcut Learning of Background: Shared Object Concentration for Few-Shot Image Recognition [101.59989523028264]
Few-Shot image classification aims to utilize pretrained knowledge learned from a large-scale dataset to tackle a series of downstream classification tasks.
We propose COSOC, a novel Few-Shot Learning framework, to automatically figure out foreground objects at both the pretraining and evaluation stages.
arXiv Detail & Related papers (2021-07-16T07:46:41Z) - More Photos are All You Need: Semi-Supervised Learning for Fine-Grained Sketch Based Image Retrieval [112.1756171062067]
We introduce a novel semi-supervised framework for cross-modal retrieval.
At the centre of our design is a sequential photo-to-sketch generation model.
We also introduce a discriminator guided mechanism to guide against unfaithful generation.
arXiv Detail & Related papers (2021-03-25T17:27:08Z) - Background Splitting: Finding Rare Classes in a Sea of Background [55.03789745276442]
We focus on the real-world problem of training accurate deep models for image classification of a small number of rare categories.
In these scenarios, almost all images belong to the background category (>95% of the dataset is background).
We demonstrate that both standard fine-tuning approaches and state-of-the-art approaches for training on imbalanced datasets do not produce accurate deep models in the presence of this extreme imbalance.
arXiv Detail & Related papers (2020-08-28T23:05:15Z) - Arbitrary-sized Image Training and Residual Kernel Learning: Towards Image Fraud Identification [10.47223719403823]
We propose a framework for training images of original input scales without resizing.
Our arbitrary-sized image training method depends on the pseudo-batch gradient descent.
With the learnt residual kernels and PBGD, the proposed framework achieved the state-of-the-art results in image fraud identification.
arXiv Detail & Related papers (2020-05-22T07:57:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.