GRNN: Generative Regression Neural Network -- A Data Leakage Attack for
Federated Learning
- URL: http://arxiv.org/abs/2105.00529v1
- Date: Sun, 2 May 2021 18:39:37 GMT
- Title: GRNN: Generative Regression Neural Network -- A Data Leakage Attack for
Federated Learning
- Authors: Hanchi Ren, Jingjing Deng and Xianghua Xie
- Abstract summary: We show that image-based privacy data can be easily recovered in full from the shared gradient alone via our proposed Generative Regression Neural Network (GRNN).
We evaluate our method on several image classification tasks. The results illustrate that our proposed GRNN outperforms state-of-the-art methods with better stability, stronger robustness, and higher accuracy.
- Score: 3.050919759387984
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data privacy has become an increasingly important issue in machine learning.
Many approaches have been developed to tackle this issue, e.g., cryptography
(Homomorphic Encryption, Differential Privacy, etc.) and collaborative training
(Secure Multi-Party Computation, Distributed Learning and Federated Learning).
These techniques have a particular focus on data encryption or secure local
computation; they transfer the intermediate information to a third party to
compute the final result. Exchanging gradients is commonly considered a secure
way of training a robust model collaboratively in deep learning. However,
recent research has demonstrated that sensitive information can be recovered
from the shared gradient. Generative Adversarial Networks (GAN), in particular,
have been shown to be effective in recovering such information. However,
GAN-based techniques require additional information, such as class labels,
which are generally unavailable in privacy-preserving learning. In this paper,
we show that, in a Federated Learning (FL) system, image-based privacy data can
be recovered in full from the shared gradient alone via our proposed Generative
Regression Neural Network (GRNN). We formulate the attack as a regression
problem and optimise two branches of the generative model by minimising the
distance between the gradient produced by the generated data and the shared
gradient. We evaluate our method on several image classification tasks. The
results illustrate that our proposed GRNN outperforms state-of-the-art methods
with better stability, stronger robustness, and higher accuracy. It also
imposes no convergence requirement on the global FL model. Moreover, we
demonstrate information leakage using face re-identification. Some defense
strategies are also discussed in this work.
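To make the gradient-matching formulation above concrete, the following is a minimal, illustrative sketch in PyTorch. The generator architecture, hyper-parameters, and helper names (Generator, recover_from_gradients) are assumptions for illustration only, not the exact GRNN configuration reported in the paper.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Two-branch generator: one branch emits a fake image, the other a fake (soft) label."""
    def __init__(self, latent_dim=128, num_classes=10, img_shape=(1, 28, 28)):
        super().__init__()
        self.img_shape = img_shape
        self.img_branch = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, math.prod(img_shape)), nn.Sigmoid(),
        )
        self.label_branch = nn.Linear(latent_dim, num_classes)

    def forward(self, z):
        img = self.img_branch(z).view(-1, *self.img_shape)
        label = F.softmax(self.label_branch(z), dim=-1)
        return img, label

def recover_from_gradients(global_model, shared_grads, steps=1000, lr=1e-3):
    """Regression objective: optimise the generator so the gradient induced by its
    fake data on the global model matches the gradient shared by the victim client."""
    gen, z = Generator(), torch.randn(1, 128)      # fixed latent code
    opt = torch.optim.Adam(gen.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        fake_img, fake_label = gen(z)
        logits = global_model(fake_img)            # assumes a 10-class image classifier
        loss = torch.sum(-fake_label * F.log_softmax(logits, dim=-1))
        fake_grads = torch.autograd.grad(loss, global_model.parameters(),
                                         create_graph=True)
        # distance between generated and shared gradients drives the attack
        dist = sum(((fg - sg) ** 2).sum() for fg, sg in zip(fake_grads, shared_grads))
        dist.backward()
        opt.step()
    return gen(z)                                  # recovered image and soft label
```

Here shared_grads would be the per-parameter gradient list uploaded by a client in one FL round, and global_model is the current global model, which, per the abstract, need not have converged; the actual GRNN uses a more elaborate generator and distance measure than this sketch.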
Related papers
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generative technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods train directly on centralized data.
The paper proposes a novel federated face forgery detection learning framework with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z) - Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z) - GIFD: A Generative Gradient Inversion Method with Feature Domain
Optimization [52.55628139825667]
Federated Learning (FL) has emerged as a promising distributed machine learning framework to preserve clients' privacy.
Recent studies find that an attacker can invert the shared gradients and recover sensitive data against an FL system by leveraging pre-trained generative adversarial networks (GAN) as prior knowledge.
We propose Gradient Inversion over Feature Domains (GIFD), which disassembles the GAN model and searches the feature domains of the intermediate layers.
arXiv Detail & Related papers (2023-08-09T04:34:21Z) - Sequential Graph Neural Networks for Source Code Vulnerability
Identification [5.582101184758527]
We present a properly curated C/C++ source code vulnerability dataset to aid in developing models.
We also propose a learning framework based on graph neural networks, denoted SEquential Graph Neural Network (SEGNN), for learning a large number of code semantic representations.
Our evaluations on two datasets and four baseline methods in a graph classification setting demonstrate state-of-the-art results.
arXiv Detail & Related papers (2023-05-23T17:25:51Z) - SPIN: Simulated Poisoning and Inversion Network for Federated
Learning-Based 6G Vehicular Networks [9.494669823390648]
Vehicular networks have always faced data privacy preservation concerns.
Federated learning, however, is quite vulnerable to model inversion and model poisoning attacks.
We propose simulated poisoning and inversion network (SPIN) that leverages the optimization approach for reconstructing data.
arXiv Detail & Related papers (2022-11-21T10:07:13Z) - Privacy-Preserved Neural Graph Similarity Learning [99.78599103903777]
We propose a novel Privacy-Preserving neural Graph Matching network model, named PPGM, for graph similarity learning.
To prevent reconstruction attacks, the proposed model does not communicate node-level representations between devices.
To alleviate attacks on graph properties, obfuscated features that contain information from both vectors are communicated instead.
arXiv Detail & Related papers (2022-10-21T04:38:25Z) - Enhancing Privacy against Inversion Attacks in Federated Learning by
using Mixing Gradients Strategies [0.31498833540989407]
Federated learning reduces the risk of information leakage, but remains vulnerable to attacks.
We show how several neural network design decisions can defend against gradient inversion attacks (a generic illustration of gradient mixing appears after this list).
These strategies are also shown to be useful for deep convolutional neural networks such as LeNet for image recognition.
arXiv Detail & Related papers (2022-04-26T12:08:28Z) - Acceleration of Federated Learning with Alleviated Forgetting in Local
Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z) - Unveiling the potential of Graph Neural Networks for robust Intrusion
Detection [2.21481607673149]
We propose a novel Graph Neural Network (GNN) model to learn flow patterns of attacks structured as graphs.
Our model is able to maintain the same level of accuracy as in previous experiments, while state-of-the-art ML techniques degrade their accuracy (F1-score) by up to 50% under adversarial attacks.
arXiv Detail & Related papers (2021-07-30T16:56:39Z) - Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z) - GRAFFL: Gradient-free Federated Learning of a Bayesian Generative Model [8.87104231451079]
This paper presents the first gradient-free federated learning framework called GRAFFL.
It uses implicit information derived from each participating institution to learn posterior distributions of parameters.
We propose the GRAFFL-based Bayesian mixture model to serve as a proof-of-concept of the framework.
arXiv Detail & Related papers (2020-08-29T07:19:44Z)
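As a generic illustration of the gradient-mixing defences referenced in the "Mixing Gradients Strategies" entry above, the sketch below averages gradients over several local mini-batches before they leave the client, so that no single example's gradient is exposed. This is an assumption-level example of the general idea, not the exact strategies evaluated in that paper.

```python
import torch

def mixed_gradients(model, loss_fn, batches):
    """Accumulate gradients over several mini-batches and share only their mean,
    obscuring any individual example's contribution."""
    sums = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in batches:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for s, p in zip(sums, model.parameters()):
            s += p.grad.detach()
    # the client would upload these averaged gradients instead of per-batch ones
    return [s / len(batches) for s in sums]
```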