See through Gradients: Image Batch Recovery via GradInversion
- URL: http://arxiv.org/abs/2104.07586v1
- Date: Thu, 15 Apr 2021 16:43:17 GMT
- Title: See through Gradients: Image Batch Recovery via GradInversion
- Authors: Hongxu Yin, Arun Mallya, Arash Vahdat, Jose M. Alvarez, Jan Kautz,
Pavlo Molchanov
- Abstract summary: We introduce GradInversion, with which input images from a larger batch can be recovered even for large networks such as ResNets (50 layers). We show that gradients encode a surprisingly large amount of information: all individual images can be recovered with high fidelity via GradInversion, even for complex datasets, deep networks, and large batch sizes.
- Score: 103.26922860665039
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training deep neural networks requires gradient estimation from data batches
to update parameters. Gradients per parameter are averaged over a set of data
and this has been presumed to be safe for privacy-preserving training in joint,
collaborative, and federated learning applications. Prior work only showed the
possibility of recovering input data given gradients under very restrictive
conditions - a single input point, or a network with no non-linearities, or a
small 32x32 px input batch. Therefore, averaging gradients over larger batches
was thought to be safe. In this work, we introduce GradInversion, using which
input images from a larger batch (8-48 images) can also be recovered for
large networks such as ResNets (50 layers), on complex datasets such as
ImageNet (1000 classes, 224x224 px). We formulate an optimization task that
converts random noise into natural images, matching gradients while
regularizing image fidelity. We also propose an algorithm for target class
label recovery given gradients. We further propose a group consistency
regularization framework, where multiple agents starting from different random
seeds work together to find an enhanced reconstruction of the original data batch.
We show that gradients encode a surprisingly large amount of information, such
that all the individual images can be recovered with high fidelity via
GradInversion, even for complex datasets, deep networks, and large batch sizes.
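As a concrete illustration of the optimization described above, here is a minimal sketch of batch recovery by gradient matching, in the spirit of GradInversion but omitting its batch-normalization prior and group-consistency terms. The names `model`, `target_grads`, and `labels` are assumed inputs, and a simple total-variation prior stands in for the paper's full fidelity regularization.

```python
import torch
import torch.nn.functional as F

def total_variation(x):
    # Simple image-fidelity prior: penalize differences between neighboring pixels.
    dh = (x[:, :, 1:, :] - x[:, :, :-1, :]).abs().mean()
    dw = (x[:, :, :, 1:] - x[:, :, :, :-1]).abs().mean()
    return dh + dw

def invert_gradients(model, target_grads, labels, steps=2000, lr=0.1,
                     batch=8, shape=(3, 224, 224), tv_weight=1e-4):
    # Start from random noise and optimize it so that the gradients it
    # produces match the observed (leaked) gradients.
    x = torch.randn(batch, *shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    params = [p for p in model.parameters() if p.requires_grad]
    for _ in range(steps):
        opt.zero_grad()
        task_loss = F.cross_entropy(model(x), labels)
        grads = torch.autograd.grad(task_loss, params, create_graph=True)
        # Gradient-matching term plus the image-fidelity regularizer.
        match = sum(((g - t) ** 2).sum() for g, t in zip(grads, target_grads))
        (match + tv_weight * total_variation(x)).backward()
        opt.step()
    return x.detach()
```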
Related papers
- SPEAR: Exact Gradient Inversion of Batches in Federated Learning [11.799563040751591]
Federated learning is a framework for machine learning where clients only share gradient updates and not their private data with a server.
We propose SPEAR, the first algorithm reconstructing whole batches with $b > 1$ exactly.
We show that it recovers high-dimensional ImageNet inputs in batches of up to $b \lesssim 25$ exactly while scaling to large networks.
arXiv Detail & Related papers (2024-03-06T18:52:39Z)
- AugUndo: Scaling Up Augmentations for Monocular Depth Completion and Estimation [51.143540967290114]
We propose a method that unlocks a wide range of previously-infeasible geometric augmentations for unsupervised depth completion and estimation.
This is achieved by reversing, or "undo"-ing, geometric transformations to the coordinates of the output depth, warping the depth map back to the original reference frame.
arXiv Detail & Related papers (2023-10-15T05:15:45Z)
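A minimal sketch of the "undo" idea above, using a horizontal flip as a stand-in geometric augmentation; `depth_net` and `photometric_loss` are hypothetical placeholders, and the paper covers a much wider family of transformations.

```python
import torch

def augmented_depth_loss(depth_net, image, photometric_loss):
    # Augment the input geometrically, predict depth on the augmented view,
    # then undo the transformation on the output so the unsupervised loss
    # is computed in the original reference frame.
    flipped = torch.flip(image, dims=[-1])      # apply the augmentation
    depth_aug = depth_net(flipped)              # predict in the augmented frame
    depth = torch.flip(depth_aug, dims=[-1])    # "undo": warp back
    return photometric_loss(depth, image)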
- Dataset Quantization [72.61936019738076]
We present dataset quantization (DQ), a new framework to compress large-scale datasets into small subsets.
DQ is the first method that can successfully distill large-scale datasets such as ImageNet-1k with a state-of-the-art compression ratio.
arXiv Detail & Related papers (2023-08-21T07:24:29Z)
- Deep leakage from gradients [0.0]
Federated Learning (FL) has been widely used across industries for its efficiency and its promise of confidentiality.
Researchers have probed that confidentiality by designing algorithms that attack the training data through shared gradients.
In this paper, an algorithm based on gradient features is designed to attack the federated learning model.
arXiv Detail & Related papers (2022-12-15T08:06:46Z)
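As one example of a gradient-feature attack, labels can often be read off the gradient of the final fully-connected layer before any image is reconstructed. The sketch below uses the sign structure exploited by GradInversion-style label recovery and assumes each class appears at most once in the batch.

```python
import torch

def recover_labels(fc_weight_grad, batch_size):
    # With cross-entropy loss, rows of the final layer's weight gradient
    # that correspond to ground-truth classes carry negative entries.
    # Take the batch_size rows with the most negative minima as the labels
    # (assumes non-repeating labels in the batch).
    row_mins = fc_weight_grad.min(dim=1).values   # most negative entry per class row
    return torch.topk(-row_mins, k=batch_size).indices
```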
- Deep Generalized Unfolding Networks for Image Restoration [16.943609020362395]
We propose a Deep Generalized Unfolding Network (DGUNet) for image restoration.
We integrate a gradient estimation strategy into the gradient descent step of the Proximal Gradient Descent (PGD) algorithm.
Our method achieves state-of-the-art performance while offering better interpretability and generalizability.
arXiv Detail & Related papers (2022-04-28T08:39:39Z)
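For context, the classic PGD iteration that such a network unfolds, written in the usual notation for a data term $f$, a prior $g$, and step size $\rho$, is:

$$x^{k+1} = \operatorname{prox}_{\rho g}\big(x^{k} - \rho \nabla f(x^{k})\big),$$

where, per the summary above, DGUNet replaces the hand-crafted gradient $\nabla f$ with a learned estimate at each unfolded stage.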
- Feature transforms for image data augmentation [74.12025519234153]
In image classification, many augmentation approaches utilize simple image manipulation algorithms.
In this work, we build ensembles on the data level by adding images generated by combining fourteen augmentation approaches.
Pretrained ResNet50 networks are fine-tuned on training sets that include images derived from each augmentation method.
arXiv Detail & Related papers (2022-01-24T14:12:29Z)
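A minimal sketch of the data-level ensemble described above; `models` is a hypothetical list of ResNet50 networks, each fine-tuned on a training set built with a different augmentation method.

```python
import torch

def ensemble_predict(models, x):
    # Average class probabilities across networks fine-tuned on
    # differently augmented training sets (one model per augmentation).
    probs = [m(x).softmax(dim=-1) for m in models]
    return torch.stack(probs).mean(dim=0)
```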
- Deep Amended Gradient Descent for Efficient Spectral Reconstruction from Single RGB Images [42.26124628784883]
We propose a compact, efficient, and end-to-end learning-based framework, namely AGD-Net.
We first formulate the problem explicitly based on the classic gradient descent algorithm.
AGD-Net can improve the reconstruction quality by more than 1.0 dB on average.
arXiv Detail & Related papers (2021-08-12T05:54:09Z)
- Sparse Communication for Training Deep Networks [56.441077560085475]
Synchronous stochastic gradient descent (SGD) is the most common method used for distributed training of deep learning models.
In this algorithm, each worker shares its local gradients with others and updates the parameters using the average gradients of all workers.
We study several compression schemes and identify how three key parameters affect the performance.
arXiv Detail & Related papers (2020-09-19T17:28:11Z)
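As one example of the kind of compression scheme studied, a top-$k$ sparsifier keeps only the largest-magnitude gradient entries before workers exchange them. This is a generic sketch, not necessarily the paper's exact scheme.

```python
import torch

def topk_sparsify(grad, ratio=0.01):
    # Keep the ratio fraction of largest-magnitude entries; zero the rest.
    flat = grad.flatten()
    k = max(1, int(flat.numel() * ratio))
    idx = flat.abs().topk(k).indices
    out = torch.zeros_like(flat)
    out[idx] = flat[idx]
    return out.view_as(grad)

def average_gradients(worker_grads):
    # Synchronous step: parameters are updated with the mean of all
    # workers' (possibly compressed) gradients.
    return torch.stack(worker_grads).mean(dim=0)
```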
- A deep primal-dual proximal network for image restoration [8.797434238081372]
We design a deep network, named DeepPDNet, built from primal-dual iterations associated with the minimization of a standard penalized likelihood with an analysis prior.
Two different learning strategies, "Full learning" and "Partial learning", are proposed; the first is the most efficient numerically.
Extensive results show that the proposed DeepPDNet demonstrates excellent performance on MNIST and on the more complex BSD68, BSD100, and SET14 datasets for image restoration and single-image super-resolution tasks.
arXiv Detail & Related papers (2020-07-02T08:29:52Z)
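In the usual notation, primal-dual proximal iterations of the kind such a network unrolls (for a data term $f$, an analysis operator $L$, and a prior $g$) take the standard Chambolle-Pock form; the paper's exact parameterization may differ:

$$
\begin{aligned}
u^{k+1} &= \operatorname{prox}_{\sigma g^{*}}\big(u^{k} + \sigma L \bar{x}^{k}\big),\\
x^{k+1} &= \operatorname{prox}_{\tau f}\big(x^{k} - \tau L^{\top} u^{k+1}\big),\\
\bar{x}^{k+1} &= x^{k+1} + \theta\,(x^{k+1} - x^{k}),
\end{aligned}
$$

with DeepPDNet learning the operators and step sizes across its unrolled stages.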
- Exploiting Deep Generative Prior for Versatile Image Restoration and Manipulation [181.08127307338654]
This work presents an effective way to exploit the image prior captured by a generative adversarial network (GAN) trained on large-scale natural images.
The deep generative prior (DGP) provides compelling results to restore missing semantics, e.g., color, patch, resolution, of various degraded images.
arXiv Detail & Related papers (2020-03-30T17:45:07Z)
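A minimal sketch of restoration with a GAN prior in the spirit of the entry above: optimize a latent code so the degraded generator output matches the observation. Here `generator`, `degrade_op`, and `latent_dim` are assumptions, and DGP itself goes further by also progressively fine-tuning the generator.

```python
import torch

def gan_prior_restore(generator, degraded, degrade_op, latent_dim=512,
                      steps=500, lr=0.05):
    # Search the GAN's latent space for an image whose degraded version
    # matches the observation; the generator supplies the natural-image prior.
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((degrade_op(generator(z)) - degraded) ** 2).mean()
        loss.backward()
        opt.step()
    return generator(z).detach()
```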