Exploiting Defenses against GAN-Based Feature Inference Attacks in
Federated Learning
- URL: http://arxiv.org/abs/2004.12571v2
- Date: Thu, 19 Aug 2021 09:22:30 GMT
- Title: Exploiting Defenses against GAN-Based Feature Inference Attacks in
Federated Learning
- Authors: Xianglong Zhang and Xinjian Luo
- Abstract summary: We exploit defenses against GAN-based attacks in federated learning.
We propose a framework, Anti-GAN, to prevent attackers from learning the real distribution of the victim's data.
- Score: 0.76146285961466
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a decentralized model training method, federated learning is designed to
integrate the isolated data islands and protect data privacy. Recent studies,
however, have demonstrated that the Generative Adversarial Network (GAN) based
attacks can be used in federated learning to learn the distribution of the
victim's private dataset and accordingly reconstruct human-distinguishable
images. In this paper, we exploit defenses against GAN-based attacks in
federated learning, and propose a framework, Anti-GAN, to prevent attackers
from learning the real distribution of the victim's data. The core idea of
Anti-GAN is to corrupt the visual features of the victim's private training
images, such that the images restored by the attacker are indistinguishable to
human eyes. Specifically, in Anti-GAN, the victim first projects the personal
dataset onto a GAN's generator, then mixes the fake images generated by the
generator with the real images to obtain the training dataset, which will be
fed into the federated model for training. We redesign the structure of the
victim's GAN to encourage it to learn the classification features (instead of
the visual features) of the real images. We further introduce an unsupervised
task to the GAN model for obfuscating the visual features of the generated
images. The experiments demonstrate that Anti-GAN can effectively prevent the
attacker from learning the distribution of the private images, while causing
little harm to the accuracy of the federated model.
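The mixing step described in the abstract can be illustrated with a minimal sketch. The stand-in generator, the function names, and the mixing ratio `alpha` below are all illustrative assumptions, not details from the paper (the real Anti-GAN generator is a trained GAN with a redesigned structure):

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_generator(z):
    """Stand-in for the victim's trained GAN generator (hypothetical).
    Maps latent codes to flat images with pixel values in (0, 1)."""
    weights = rng.standard_normal((z.shape[1], 28 * 28))
    return 1.0 / (1.0 + np.exp(-z @ weights))  # sigmoid squashes to (0, 1)

def build_antigan_training_set(real_images, alpha=0.5, latent_dim=16):
    """Blend each real image with a generator sample before it is fed
    to the federated model; alpha is a hypothetical mixing ratio."""
    z = rng.standard_normal((real_images.shape[0], latent_dim))
    fake_images = fake_generator(z).reshape(real_images.shape)
    return alpha * real_images + (1.0 - alpha) * fake_images

real = rng.random((32, 28, 28))   # toy stand-in for private images
mixed = build_antigan_training_set(real)
print(mixed.shape)                # (32, 28, 28)
```

Under this reading, the federated model trains on `mixed` rather than `real`, so an attacker reconstructing the training distribution recovers only the visually obfuscated images.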
Related papers
- Deceptive Diffusion: Generating Synthetic Adversarial Examples [2.7309692684728617]
We introduce the concept of deceptive diffusion -- training a generative AI model to produce adversarial images.
A traditional adversarial attack algorithm aims to perturb an existing image to induce a misclassification.
The deceptive diffusion model can create an arbitrary number of new, misclassified images that are not directly associated with training or test images.
arXiv Detail & Related papers (2024-06-28T10:30:46Z)
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods rely on centralized training over directly collected data.
The paper proposes a novel federated face forgery detection learning with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- GANcrop: A Contrastive Defense Against Backdoor Attacks in Federated Learning [1.9632700283749582]
This paper introduces a novel defense mechanism against backdoor attacks in federated learning, named GANcrop.
Experimental findings demonstrate that GANcrop effectively safeguards against backdoor attacks, particularly in non-IID scenarios.
arXiv Detail & Related papers (2024-05-31T09:33:16Z)
- Black-Box Training Data Identification in GANs via Detector Networks [2.4554686192257424]
We study whether, given access to a trained GAN as well as fresh samples from the underlying distribution, an attacker can efficiently identify whether a given point is a member of the GAN's training data.
This is of interest both for copyright, where a user may want to determine whether their copyrighted data has been used to train a GAN, and for data privacy, where the ability to detect training-set membership is known as a membership inference attack.
We introduce a suite of membership inference attacks against GANs in the black-box setting and evaluate our attacks.
arXiv Detail & Related papers (2023-10-18T15:53:20Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), a novel attack that can be launched from clients.
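As background, a toy linear-regression example shows why uploaded gradients leak private inputs in the first place. This is a generic illustration of the gradient inversion principle, not the CGI method; all names below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: one client trains a linear regressor with squared loss
# L(w) = (w @ x - y) ** 2 on a single private sample (x, y).
d = 8
w = rng.standard_normal(d)   # current global model weights
x = rng.standard_normal(d)   # the client's private sample
y = 1.0                      # its label
residual = w @ x - y
g = 2.0 * residual * x       # gradient the client would upload

# Attacker side: for squared loss, the uploaded gradient is a scalar
# multiple of the private input, so its direction leaks x directly.
x_hat = g / np.linalg.norm(g)
cosine = abs(x_hat @ (x / np.linalg.norm(x)))
print(round(cosine, 6))      # 1.0: input direction fully recovered
```

For deep models the recovery is done by optimizing a dummy input until its gradient matches the observed one, but the leakage channel is the same.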
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- Mitigating Cross-client GANs-based Attack in Federated Learning [78.06700142712353]
Multiple distributed multimedia clients can resort to federated learning (FL) to jointly learn a global shared model.
However, FL is vulnerable to the cross-client generative adversarial networks (GANs)-based attack (C-GANs attack).
We propose the Fed-EDKD technique, which improves current popular FL schemes to resist the C-GANs attack.
arXiv Detail & Related papers (2023-07-25T08:15:55Z)
- Feature Unlearning for Pre-trained GANs and VAEs [4.80598572788967]
We tackle the problem of feature unlearning from a pre-trained image generative model: GANs and VAEs.
We aim to unlearn a specific feature, such as hairstyle from facial images, from the pre-trained generative models.
arXiv Detail & Related papers (2023-03-10T04:49:01Z)
- Backdoor Attack is A Devil in Federated GAN-based Medical Image Synthesis [15.41200827860072]
We propose a way of attacking a federated GAN (FedGAN) by applying to the discriminator a data poisoning strategy commonly used in backdoor attacks on classification models.
We provide two effective defense strategies: global malicious detection and local training regularization.
arXiv Detail & Related papers (2022-07-02T07:20:35Z)
- Curse or Redemption? How Data Heterogeneity Affects the Robustness of Federated Learning [51.15273664903583]
Data heterogeneity has been identified as one of the key features of federated learning, but it is often overlooked through the lens of robustness to adversarial attacks.
This paper focuses on characterizing and understanding its impact on backdooring attacks in federated learning through comprehensive experiments using synthetic datasets and the LEAF benchmarks.
arXiv Detail & Related papers (2021-02-01T06:06:21Z)
- Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries [74.59376038272661]
We introduce the sampling attack, a novel membership inference technique that, unlike standard membership adversaries, works under the severe restriction of having no access to the victim model's scores.
We show that a victim model that only publishes labels is still susceptible to sampling attacks, and the adversary can recover up to 100% of its performance.
For defense, we choose differential privacy in the form of gradient perturbation during the training of the victim model as well as output perturbation at prediction time.
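The label-only setting can be sketched with a toy classifier: repeatedly querying noisy copies of a candidate point and measuring how stable its label is yields a membership-like score without ever seeing confidence scores. The nearest-centroid victim and all names below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy victim: a nearest-centroid classifier that only reveals labels.
centroids = np.array([[0.0, 0.0], [4.0, 4.0]])

def predict(points):
    d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

def sampling_attack_score(x, n_queries=200, sigma=0.5):
    """Fraction of noisy copies of x that keep x's own label; higher
    values suggest x lies deep inside its class region (member-like)."""
    base = predict(x[None, :])[0]
    noisy = x[None, :] + sigma * rng.standard_normal((n_queries, 2))
    return (predict(noisy) == base).mean()

member = np.array([0.1, -0.1])         # near a centroid: stable label
boundary_point = np.array([2.0, 2.0])  # near the boundary: unstable label
print(sampling_attack_score(member) > sampling_attack_score(boundary_point))  # True
```

The defense mentioned above (differential privacy via gradient or output perturbation) works precisely by reducing how sharply label stability separates members from non-members.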
arXiv Detail & Related papers (2020-09-01T12:54:54Z)
- Stylized Adversarial Defense [105.88250594033053]
Adversarial training creates perturbation patterns and includes them in the training set to robustify the model.
We propose to exploit additional information from the feature space to craft stronger adversaries.
Our adversarial training approach demonstrates strong robustness compared to state-of-the-art defenses.
arXiv Detail & Related papers (2020-07-29T08:38:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.