Exploiting Defenses against GAN-Based Feature Inference Attacks in
Federated Learning
- URL: http://arxiv.org/abs/2004.12571v2
- Date: Thu, 19 Aug 2021 09:22:30 GMT
- Title: Exploiting Defenses against GAN-Based Feature Inference Attacks in
Federated Learning
- Authors: Xianglong Zhang and Xinjian Luo
- Abstract summary: We exploit defenses against GAN-based attacks in federated learning.
We propose a framework, Anti-GAN, to prevent attackers from learning the real distribution of the victim's data.
- Score: 0.76146285961466
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a decentralized model training method, federated learning is designed to
integrate isolated data islands while protecting data privacy. Recent studies,
however, have demonstrated that the Generative Adversarial Network (GAN) based
attacks can be used in federated learning to learn the distribution of the
victim's private dataset and accordingly reconstruct human-distinguishable
images. In this paper, we exploit defenses against GAN-based attacks in
federated learning, and propose a framework, Anti-GAN, to prevent attackers
from learning the real distribution of the victim's data. The core idea of
Anti-GAN is to corrupt the visual features of the victim's private training
images, such that the images restored by the attacker are indistinguishable to
human eyes. Specifically, in Anti-GAN, the victim first projects the personal
dataset onto a GAN's generator, then mixes the fake images generated by the
generator with the real images to obtain the training dataset, which will be
fed into the federated model for training. We redesign the structure of the
victim's GAN to encourage it to learn the classification features (instead of
the visual features) of the real images. We further introduce an unsupervised
task to the GAN model for obfuscating the visual features of the generated
images. Experiments demonstrate that Anti-GAN can effectively prevent the
attacker from learning the distribution of the private images while
causing little harm to the accuracy of the federated model.
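To make the mixing step described above concrete, the following is a minimal sketch assuming the victim already has a trained generator; the `MixedDataset` wrapper, the blending weight `alpha`, and the pixel-level mixing rule are illustrative assumptions rather than the paper's exact Anti-GAN procedure.

```python
# Illustrative sketch of the data-mixing step described in the abstract.
import torch
from torch.utils.data import Dataset

class MixedDataset(Dataset):
    """Blends each private image with a GAN-generated fake before federated training."""

    def __init__(self, real_dataset, generator, latent_dim=100, alpha=0.5):
        self.real = real_dataset           # (image, label) pairs owned by the victim
        self.generator = generator.eval()  # victim's local GAN generator (assumed pre-trained)
        self.latent_dim = latent_dim
        self.alpha = alpha                 # weight of the fake image in the blend

    def __len__(self):
        return len(self.real)

    @torch.no_grad()
    def __getitem__(self, idx):
        x_real, y = self.real[idx]
        z = torch.randn(1, self.latent_dim)
        x_fake = self.generator(z).squeeze(0)
        # Pixel-level mixing: the federated model never sees the raw private image.
        x_mixed = (1 - self.alpha) * x_real + self.alpha * x_fake
        return x_mixed, y
```

A local client would then train the shared federated model on `MixedDataset` in place of the raw private images, as the abstract describes.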
Related papers
- Just a Simple Transformation is Enough for Data Protection in Vertical Federated Learning [83.90283731845867]
We consider feature reconstruction attacks, a common risk aimed at compromising input data.
We show that the studied federated models are resistant to state-of-the-art feature reconstruction attacks.
arXiv Detail & Related papers (2024-12-16T12:02:12Z)
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods rely on directly centralizing the training data.
This paper proposes a novel federated face forgery detection learning framework with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- GANcrop: A Contrastive Defense Against Backdoor Attacks in Federated Learning [1.9632700283749582]
This paper introduces a novel defense mechanism against backdoor attacks in federated learning, named GANcrop.
Experimental findings demonstrate that GANcrop effectively safeguards against backdoor attacks, particularly in non-IID scenarios.
arXiv Detail & Related papers (2024-05-31T09:33:16Z)
- PPIDSG: A Privacy-Preserving Image Distribution Sharing Scheme with GAN in Federated Learning [2.0507547735926424]
Federated learning (FL) has attracted growing attention since it allows for privacy-preserving collaborative training on decentralized clients.
Recent works have revealed that it still has the risk of exposing private data to adversaries.
We propose a privacy-preserving image distribution sharing scheme with GAN (PPIDSG).
arXiv Detail & Related papers (2023-12-16T08:32:29Z)
- Data-Agnostic Model Poisoning against Federated Learning: A Graph Autoencoder Approach [65.2993866461477]
This paper proposes a data-agnostic model poisoning attack on Federated Learning (FL).
The attack requires no knowledge of FL training data and achieves both effectiveness and undetectability.
Experiments show that the FL accuracy drops gradually under the proposed attack and existing defense mechanisms fail to detect it.
arXiv Detail & Related papers (2023-11-30T12:19:10Z)
- Black-Box Training Data Identification in GANs via Detector Networks [2.4554686192257424]
We study whether, given access to a trained GAN as well as fresh samples from the underlying distribution, an attacker can efficiently identify whether a given point is a member of the GAN's training data.
This is of interest both for copyright, where a user may want to determine whether their copyrighted data has been used to train a GAN, and for data privacy, where the ability to detect training-set membership is known as a membership inference attack.
We introduce a suite of membership inference attacks against GANs in the black-box setting and evaluate them.
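For orientation, the sketch below illustrates one generic detector-network style of black-box membership inference against a GAN: train a classifier to separate GAN outputs from fresh real samples, then use its confidence on a query point as a membership score. The sampling function, detector architecture, and scoring rule are assumptions for illustration, not the attack suite proposed in the paper above.

```python
# Hedged sketch of a generic detector-network membership inference attack on a GAN.
import torch
import torch.nn as nn

def train_detector(gan_sample, fresh_samples, n_features, steps=1000, batch=64, lr=1e-3):
    """Train a binary detector: label 1 = GAN output, label 0 = fresh real sample."""
    detector = nn.Sequential(
        nn.Flatten(),
        nn.Linear(n_features, 256), nn.ReLU(),
        nn.Linear(256, 1),
    )
    opt = torch.optim.Adam(detector.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        fake = gan_sample(batch)                           # black-box queries to the target GAN
        idx = torch.randint(0, len(fresh_samples), (batch,))
        real = fresh_samples[idx]                          # fresh samples from the data distribution
        x = torch.cat([fake, real])
        y = torch.cat([torch.ones(batch, 1), torch.zeros(batch, 1)])
        opt.zero_grad()
        loss_fn(detector(x), y).backward()
        opt.step()
    return detector

@torch.no_grad()
def membership_score(detector, x_query):
    # Heuristic: points the GAN has (over)fit tend to look "generated" to the detector,
    # so a higher score is read as evidence of training-set membership.
    return torch.sigmoid(detector(x_query.unsqueeze(0))).item()
```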
arXiv Detail & Related papers (2023-10-18T15:53:20Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attacks (GIA), which aim to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), a novel attack that can be launched from clients.
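As background for this blurb, a minimal DLG-style gradient inversion loop looks roughly like the following: optimize a dummy input so that the gradient it induces matches the gradient shared by a victim client. The model, observed gradients, and hyperparameters are placeholders, and CGI's client-side poisoning component is not reproduced here.

```python
# Minimal sketch of a DLG-style gradient inversion attack (illustrative assumptions only).
import torch
import torch.nn.functional as F

def invert_gradient(model, observed_grads, input_shape, num_classes, steps=300, lr=0.1):
    """Reconstruct an input whose gradient matches the gradient observed from a client."""
    dummy_x = torch.randn(1, *input_shape, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)   # soft label, also optimized
    opt = torch.optim.Adam([dummy_x, dummy_y], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(dummy_x), F.softmax(dummy_y, dim=-1))
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # Distance between the dummy gradient and the victim's shared gradient.
        grad_diff = sum(((g - og) ** 2).sum() for g, og in zip(grads, observed_grads))
        grad_diff.backward()
        opt.step()
    return dummy_x.detach()
```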
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- Mitigating Cross-client GANs-based Attack in Federated Learning [78.06700142712353]
Multiple distributed multimedia clients can resort to federated learning (FL) to jointly learn a globally shared model.
However, FL suffers from the cross-client generative adversarial network (GAN)-based (C-GANs) attack.
We propose the Fed-EDKD technique, which improves current popular FL schemes to resist the C-GANs attack.
arXiv Detail & Related papers (2023-07-25T08:15:55Z)
- PS-FedGAN: An Efficient Federated Learning Framework Based on Partially Shared Generative Adversarial Networks For Data Privacy [56.347786940414935]
Federated Learning (FL) has emerged as an effective learning paradigm for distributed computation.
This work proposes a novel FL framework that requires only partial GAN model sharing.
Named PS-FedGAN, this framework enhances the GAN releasing and training mechanism to address heterogeneous data distributions.
arXiv Detail & Related papers (2023-05-19T05:39:40Z)
- Backdoor Attack and Defense in Federated Generative Adversarial Network-based Medical Image Synthesis [15.41200827860072]
Federated learning (FL) provides a way of training a central model using distributed data while keeping raw data locally.
It is vulnerable to backdoor attacks, an adversarial attack carried out by poisoning the training data.
Most backdoor attack strategies focus on classification models and centralized domains.
We propose FedDetect, an efficient and effective way of defending against the backdoor attack in the FL setting.
arXiv Detail & Related papers (2022-10-19T21:03:34Z)
- Backdoor Attack is A Devil in Federated GAN-based Medical Image Synthesis [15.41200827860072]
We propose a way of attacking federated GANs (FedGAN) by treating the discriminator with a data poisoning strategy commonly used in backdoor attacks on classification models.
We provide two effective defense strategies: global malicious detection and local training regularization.
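To illustrate the kind of trigger-based poisoning mentioned in this blurb, the sketch below stamps a small pixel patch on a fraction of the images fed to a discriminator. The patch location, size, value, and poisoning ratio are assumptions for demonstration, not the paper's exact attack.

```python
# Hedged illustration of trigger-based data poisoning on a discriminator's training batch.
import torch

def poison_batch(images, poison_ratio=0.1, patch_size=4, patch_value=1.0):
    """Return a copy of `images` (N, C, H, W) with a trigger patch stamped on some samples."""
    poisoned = images.clone()
    n_poison = max(1, int(poison_ratio * images.size(0)))
    idx = torch.randperm(images.size(0))[:n_poison]
    # Place the trigger in the bottom-right corner of the selected images.
    poisoned[idx, :, -patch_size:, -patch_size:] = patch_value
    return poisoned
```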
arXiv Detail & Related papers (2022-07-02T07:20:35Z)
- Curse or Redemption? How Data Heterogeneity Affects the Robustness of Federated Learning [51.15273664903583]
Data heterogeneity has been identified as one of the key features of federated learning, but it is often overlooked through the lens of robustness to adversarial attacks.
This paper focuses on characterizing and understanding its impact on backdoor attacks in federated learning through comprehensive experiments using synthetic data and the LEAF benchmarks.
arXiv Detail & Related papers (2021-02-01T06:06:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.