Deep leakage from gradients
- URL: http://arxiv.org/abs/2301.02621v1
- Date: Thu, 15 Dec 2022 08:06:46 GMT
- Title: Deep leakage from gradients
- Authors: Yaqiong Mu
- Abstract summary: The Federated Learning (FL) model is widely used in many industries for its efficiency and confidentiality.
Some researchers have probed this confidentiality and designed algorithms that attack the training data.
In this paper, an algorithm based on gradient features is designed to attack the federated learning model.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the development of artificial intelligence technology, the
Federated Learning (FL) model has been widely adopted in many industries for
its efficiency and confidentiality. Some researchers have probed this
confidentiality and designed algorithms that attack the training data, but
these algorithms all have their own limitations, so most people still believe
that locally computed gradient information is safe and reliable. In this
paper, an algorithm based on gradient features is designed to attack a
federated learning model, in order to draw more attention to the security of
federated learning systems. In a federated learning system, the gradient
carries little information compared with the original training data set, yet
this project restores the original training images from that gradient
information alone. Because Convolutional Neural Networks (CNNs) perform well
in image processing, the federated learning model in this project uses a CNN
architecture and is trained on image data sets. The algorithm generates
virtual images and labels, computes the resulting virtual gradient, and then
matches the virtual gradient against the real gradient to restore the
original image. The attack is implemented in Python, uses the Kaggle
cat-and-dog classification data set, and is gradually extended from the fully
connected layer to the convolutional layers, improving its generality. At
present, the mean squared error between the data recovered by this algorithm
and the original image is approximately 5, and the vast majority of images
can be completely restored from the given gradient information, indicating
that the gradients of a federated learning system are not absolutely safe and
reliable.
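The fully connected layer mentioned in the abstract is the easiest place to see why gradients leak training data. A minimal sketch (a hypothetical toy with a single linear layer and squared-error loss, not the paper's CNN or Kaggle setup): because the weight gradient of such a layer is an outer product of the output error and the input, the input can be read off exactly from the shared gradients, without the iterative virtual-gradient matching the paper uses for deeper models.

```python
import numpy as np

# Toy setup: one fully connected layer z = W @ x + b with squared-error loss.
# For this layer, grad_W = delta @ x.T and grad_b = delta (delta = dLoss/dz),
# so x = grad_W[i, :] / grad_b[i] for any output unit i with grad_b[i] != 0.
rng = np.random.default_rng(0)

x = rng.normal(size=8)                 # private training input (flattened "image")
W = rng.normal(size=(3, 8))            # layer weights, 3 output units
b = rng.normal(size=3)                 # layer bias

# Forward pass and loss gradient for 0.5 * ||z - target||^2.
z = W @ x + b
target = np.array([1.0, 0.0, 0.0])     # arbitrary label for the toy example
delta = z - target                     # dLoss/dz

grad_W = np.outer(delta, x)            # what a federated client would share
grad_b = delta

# Attacker-side reconstruction from the shared gradients alone.
i = int(np.argmax(np.abs(grad_b)))     # pick a unit with nonzero error signal
x_recovered = grad_W[i] / grad_b[i]

print(np.allclose(x_recovered, x))
```

Convolutional layers break this closed form because their weight gradients sum contributions over spatial positions, which is why the paper instead optimizes virtual images until their gradient matches the observed one.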
Related papers
- Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning
Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient based learning method, named Projected-Gradient Unlearning (PGU).
We provide empirical evidence demonstrating that our unlearning method produces models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
arXiv Detail & Related papers (2023-12-07T07:17:24Z) - Genetic Programming-Based Evolutionary Deep Learning for Data-Efficient
Image Classification [3.9310727060473476]
This paper proposes a new genetic programming-based evolutionary deep learning approach to data-efficient image classification.
The new approach can automatically evolve variable-length models using many important operators from both image and classification domains.
A flexible multi-layer representation enables the new approach to automatically construct shallow or deep models/trees for different tasks.
arXiv Detail & Related papers (2022-09-27T08:10:16Z) - A Perturbation Resistant Transformation and Classification System for
Deep Neural Networks [0.685316573653194]
Deep convolutional neural networks accurately classify a diverse range of natural images, but may be easily deceived by carefully designed, imperceptible perturbations.
In this paper, we design a multi-pronged training, unbounded input transformation, and image ensemble system that is attack-agnostic and not easily estimated.
arXiv Detail & Related papers (2022-08-25T02:58:47Z) - Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
Deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
arXiv Detail & Related papers (2021-11-23T15:08:26Z) - Understanding Training-Data Leakage from Gradients in Neural Networks
for Image Classification [11.272188531829016]
In many applications, we need to protect the training data from being leaked due to IP or privacy concerns.
Recent works have demonstrated that it is possible to reconstruct the training data from gradients for an image-classification model when its architecture is known.
We formulate the problem of training data reconstruction as solving an optimisation problem iteratively for each layer.
We are able to attribute the potential leakage of the training data in a deep network to its architecture.
arXiv Detail & Related papers (2021-11-19T12:14:43Z) - GRNN: Generative Regression Neural Network -- A Data Leakage Attack for
Federated Learning [3.050919759387984]
We show that image-based privacy data can be easily recovered in full from the shared gradient only via our proposed Generative Regression Neural Network (GRNN)
We evaluate our method on several image classification tasks. The results illustrate that our proposed GRNN outperforms state-of-the-art methods with better stability, stronger robustness, and higher accuracy.
arXiv Detail & Related papers (2021-05-02T18:39:37Z) - See through Gradients: Image Batch Recovery via GradInversion [103.26922860665039]
We introduce GradInversion, with which input images from a larger batch can be recovered even for large networks such as ResNets (50 layers).
We show that gradients encode a surprisingly large amount of information, such that all the individual images can be recovered with high fidelity via GradInversion, even for complex datasets, deep networks, and large batch sizes.
arXiv Detail & Related papers (2021-04-15T16:43:17Z) - Incremental Learning via Rate Reduction [26.323357617265163]
Current deep learning architectures suffer from catastrophic forgetting, a failure to retain knowledge of previously learned classes when incrementally trained on new classes.
We propose utilizing an alternative "white box" architecture derived from the principle of rate reduction, where each layer of the network is explicitly computed without back propagation.
Under this paradigm, we demonstrate that, given a pre-trained network and new data classes, our approach can provably construct a new network that emulates joint training with all past and new classes.
arXiv Detail & Related papers (2020-11-30T07:23:55Z) - An Empirical Study of the Collapsing Problem in Semi-Supervised 2D Human
Pose Estimation [80.02124918255059]
Semi-supervised learning aims to boost the accuracy of a model by exploring unlabeled images.
We learn two networks to mutually teach each other.
The more reliable predictions on easy images in each network are used to teach the other network to learn about the corresponding hard images.
arXiv Detail & Related papers (2020-11-25T03:29:52Z) - Boosting Gradient for White-Box Adversarial Attacks [60.422511092730026]
We propose a universal adversarial example generation method, called ADV-ReLU, to enhance the performance of gradient based white-box attack algorithms.
Our approach calculates the gradient of the loss function versus network input, maps the values to scores, and selects a part of them to update the misleading gradients.
arXiv Detail & Related papers (2020-10-21T02:13:26Z) - Syn2Real Transfer Learning for Image Deraining using Gaussian Processes [92.15895515035795]
CNN-based methods for image deraining have achieved excellent performance in terms of reconstruction error as well as visual quality.
Due to challenges in obtaining real world fully-labeled image deraining datasets, existing methods are trained only on synthetically generated data.
We propose a Gaussian Process-based semi-supervised learning framework which enables the network in learning to derain using synthetic dataset.
arXiv Detail & Related papers (2020-06-10T00:33:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.