Enhancing Privacy against Inversion Attacks in Federated Learning by
using Mixing Gradients Strategies
- URL: http://arxiv.org/abs/2204.12495v1
- Date: Tue, 26 Apr 2022 12:08:28 GMT
- Title: Enhancing Privacy against Inversion Attacks in Federated Learning by
using Mixing Gradients Strategies
- Authors: Shaltiel Eloul, Fran Silavong, Sanket Kamthe, Antonios Georgiadis,
Sean J. Moran
- Abstract summary: Federated learning reduces the risk of information leakage, but remains vulnerable to attacks.
We show how several neural network design decisions can defend against gradient inversion attacks.
These strategies are also shown to be useful for deep convolutional neural networks such as LeNet for image recognition.
- Score: 0.31498833540989407
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning reduces the risk of information leakage, but remains
vulnerable to attacks. We investigate how several neural network design
decisions can defend against gradient inversion attacks. We show that
overlapping gradients provide numerical resistance to gradient inversion on
the highly vulnerable dense layer. Specifically, we propose to leverage
batching to maximise mixing of gradients by choosing an appropriate loss
function and drawing identical labels. We show that otherwise it is possible to
directly recover all vectors in a mini-batch without any numerical optimisation
due to the de-mixing nature of the cross entropy loss. To accurately assess
data recovery, we introduce an absolute variation distance (AVD) metric for
information leakage in images, derived from total variation. In contrast to
standard metrics such as Mean Squared Error or the Structural Similarity Index, AVD
offers a continuous measure of the information extracted from noisy images. Finally,
our empirical results on information recovery from various inversion attacks
and training performance support our defense strategies. These strategies are
also shown to be useful for deep convolutional neural networks such as LeNet
for image recognition. We hope that this study will help guide the development
of further strategies that achieve a trustworthy federation policy.
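To make the de-mixing argument in the abstract concrete, below is a minimal NumPy sketch (not the authors' code) of a single dense layer with softmax cross-entropy. With a batch of one, the input is recovered exactly from the ratio of the weight and bias gradients; with distinct labels, each label's gradient row stays strongly aligned with one sample; with identical labels, as the abstract proposes, the rows expose only a mixture of the batch. The layer sizes and random data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def dense_ce_gradients(X, labels, W, b):
    """Gradients of the mean cross-entropy loss for a dense layer
    with logits = X @ W.T + b followed by a softmax."""
    B = X.shape[0]
    G = softmax(X @ W.T + b)            # (B, C) class probabilities
    G[np.arange(B), labels] -= 1.0      # dL/dlogits = softmax(z) - one_hot(y)
    return G.T @ X / B, G.sum(axis=0) / B   # dL/dW (C, D), dL/db (C,)

def abs_cosine(u, v):
    return float(abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v)))

D, C, B = 16, 10, 4                      # illustrative sizes, not from the paper
W, b = 0.1 * rng.normal(size=(C, D)), np.zeros(C)
X = rng.normal(size=(B, D))

# Batch of one: the input is recovered exactly as dW[i] / db[i] for any row i
# with a non-zero bias gradient -- no numerical optimisation needed.
dW, db = dense_ce_gradients(X[:1], np.array([3]), W, b)
i = int(np.argmax(np.abs(db)))
print("B=1 max recovery error:", float(np.abs(dW[i] / db[i] - X[0]).max()))

# Distinct labels: the gradient row of each sample's label is dominated by that
# sample alone (de-mixing), so it leaks the sample up to sign and scale.
dW, db = dense_ce_gradients(X, np.arange(B), W, b)
for s in range(B):
    print(f"distinct labels, sample {s}: |cosine| = {abs_cosine(dW[s], X[s]):.3f}")

# Identical labels (the mixing defence from the abstract): the informative row
# now carries only a blend of the whole batch; no single sample stands out.
dW, db = dense_ce_gradients(X, np.zeros(B, dtype=int), W, b)
print("identical labels, best |cosine| with any sample:",
      round(max(abs_cosine(dW[0], X[s]) for s in range(B)), 3))
```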
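The abstract also introduces an absolute variation distance (AVD) derived from total variation; the exact definition is given in the paper. As a hedged sketch only, the snippet below assumes a total-variation-style comparison, i.e. the mean absolute difference between the spatial gradients of a reconstruction and its reference, and contrasts it with MSE as reconstruction noise is reduced. The image size, noise schedule, and the `variation_distance` helper are illustrative, not the paper's implementation.

```python
import numpy as np

def spatial_gradients(img):
    """Forward-difference gradients of a 2-D image, same shape as the input."""
    dy = np.diff(img, axis=0, append=img[-1:, :])
    dx = np.diff(img, axis=1, append=img[:, -1:])
    return dx, dy

def variation_distance(recovered, reference):
    """Mean absolute difference of local variations between two images.
    NOTE: an illustrative stand-in inspired by total variation, not the
    paper's exact AVD definition."""
    rx, ry = spatial_gradients(recovered)
    fx, fy = spatial_gradients(reference)
    return float(np.mean(np.abs(rx - fx) + np.abs(ry - fy)))

# Toy comparison against MSE as a noisy "reconstruction" approaches the target.
rng = np.random.default_rng(1)
reference = rng.random((28, 28))
for noise in (1.0, 0.5, 0.1, 0.0):
    recon = (1 - noise) * reference + noise * rng.random((28, 28))
    mse = float(np.mean((recon - reference) ** 2))
    print(f"noise={noise:.1f}  MSE={mse:.4f}  "
          f"variation distance={variation_distance(recon, reference):.4f}")
```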
Related papers
- GI-SMN: Gradient Inversion Attack against Federated Learning without Prior Knowledge [4.839514405631815]
Federated learning (FL) has emerged as a privacy-preserving machine learning approach.
Gradient inversion attacks can exploit the gradients of FL to recreate the original user data.
We propose a novel Gradient Inversion attack based on Style Migration Network (GI-SMN).
arXiv Detail & Related papers (2024-05-06T14:29:24Z) - MGIC: A Multi-Label Gradient Inversion Attack based on Canny Edge
Detection on Federated Learning [6.721419921063687]
We present a novel gradient inversion strategy based on Canny edge detection (MGIC) for both multi-label and single-label datasets.
Our proposed strategy yields better visual inversion results than the most widely used methods, while saving more than 78% of the time cost on the ImageNet dataset.
arXiv Detail & Related papers (2024-03-13T06:34:49Z) - GIFD: A Generative Gradient Inversion Method with Feature Domain
Optimization [52.55628139825667]
Federated Learning (FL) has emerged as a promising distributed machine learning framework to preserve clients' privacy.
Recent studies find that an attacker can invert the shared gradients and recover sensitive data against an FL system by leveraging pre-trained generative adversarial networks (GAN) as prior knowledge.
We propose Gradient Inversion over Feature Domains (GIFD), which disassembles the GAN model and searches the feature domains of the intermediate layers.
arXiv Detail & Related papers (2023-08-09T04:34:21Z) - Adversarial Unlearning: Reducing Confidence Along Adversarial Directions [88.46039795134993]
We propose a complementary regularization strategy that reduces confidence on self-generated examples.
The method, which we call RCAD, aims to reduce confidence on out-of-distribution examples lying along directions adversarially chosen to increase training loss.
Despite its simplicity, we find on many classification benchmarks that RCAD can be added to existing techniques to increase test accuracy by 1-3% in absolute value.
arXiv Detail & Related papers (2022-06-03T02:26:24Z) - Get your Foes Fooled: Proximal Gradient Split Learning for Defense
against Model Inversion Attacks on IoMT data [5.582293277542012]
In this work, we propose the proximal gradient split learning (PSGL) method to defend against model inversion attacks.
We propose the use of the proximal gradient method to recover gradient maps and a decision-level fusion strategy to improve recognition performance.
arXiv Detail & Related papers (2022-01-12T17:01:19Z) - GRNN: Generative Regression Neural Network -- A Data Leakage Attack for
Federated Learning [3.050919759387984]
We show that image-based privacy data can be easily recovered in full from the shared gradient alone via our proposed Generative Regression Neural Network (GRNN).
We evaluate our method on several image classification tasks. The results illustrate that our proposed GRNN outperforms state-of-the-art methods with better stability, stronger robustness, and higher accuracy.
arXiv Detail & Related papers (2021-05-02T18:39:37Z) - Targeted Attack against Deep Neural Networks via Flipping Limited Weight
Bits [55.740716446995805]
We study a novel attack paradigm, which modifies model parameters in the deployment stage for malicious purposes.
Our goal is to misclassify a specific sample into a target class without any sample modification.
By utilizing the latest technique in integer programming, we equivalently reformulate this binary integer programming (BIP) problem as a continuous optimization problem.
arXiv Detail & Related papers (2021-02-21T03:13:27Z) - Boosting Gradient for White-Box Adversarial Attacks [60.422511092730026]
We propose a universal adversarial example generation method, called ADV-ReLU, to enhance the performance of gradient-based white-box attack algorithms.
Our approach calculates the gradient of the loss function with respect to the network input, maps the values to scores, and selects a subset of them to update the misleading gradients.
arXiv Detail & Related papers (2020-10-21T02:13:26Z) - R-GAP: Recursive Gradient Attack on Privacy [5.687523225718642]
Federated learning is a promising approach to break the dilemma between demands on privacy and the promise of learning from large collections of distributed data.
We provide a closed-form recursion procedure to recover data from gradients in deep neural networks.
We also propose a Rank Analysis method to estimate the risk of gradient attacks inherent in certain network architectures.
arXiv Detail & Related papers (2020-10-15T13:22:40Z) - Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z) - Unbiased Risk Estimators Can Mislead: A Case Study of Learning with
Complementary Labels [92.98756432746482]
We study a weakly supervised problem called learning with complementary labels.
We show that the quality of gradient estimation matters more in risk minimization.
We propose a novel surrogate complementary loss (SCL) framework that trades zero bias for reduced variance.
arXiv Detail & Related papers (2020-07-05T04:19:37Z)