This Person (Probably) Exists. Identity Membership Attacks Against GAN Generated Faces
- URL: http://arxiv.org/abs/2107.06018v1
- Date: Tue, 13 Jul 2021 12:11:21 GMT
- Title: This Person (Probably) Exists. Identity Membership Attacks Against GAN Generated Faces
- Authors: Ryan Webster and Julien Rabin and Loic Simon and Frederic Jurie
- Abstract summary: Generative adversarial networks (GANs) have achieved stunning realism, fooling even human observers.
GANs do leak information about their training data, as evidenced by membership attacks recently demonstrated in the literature.
In this work, we challenge the assumption that GAN faces really are novel creations, by constructing a successful membership attack of a new kind.
- Score: 6.270305440413689
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, generative adversarial networks (GANs) have achieved stunning
realism, fooling even human observers. Indeed, the popular tongue-in-cheek
website http://thispersondoesnotexist.com taunts users with GAN-generated
images that seem too real to believe. On the other hand, GANs do
leak information about their training data, as evidenced by membership attacks
recently demonstrated in the literature. In this work, we challenge the
assumption that GAN faces really are novel creations, by constructing a
successful membership attack of a new kind. Unlike previous works, our attack
can accurately discern samples sharing the same identity as training samples
without being the same samples. We demonstrate the effectiveness of our attack
across several popular face datasets and GAN training procedures. Notably, we
show that even in the presence of significant dataset diversity, an
over-represented person can pose a privacy concern.
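To make the attack setting above concrete, here is a minimal sketch of the identity-matching idea: faces sampled from the GAN and probe photos of a candidate identity are mapped through a face-recognition embedding network, and the identity is flagged when some generated face lands unusually close to a probe. The function names, the random toy embeddings, and the 0.35 threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def identity_membership_scores(probe_emb, gan_emb):
    """Best-match cosine distance from each probe identity to any GAN face.

    probe_emb: (M, d) L2-normalized face-recognition embeddings of query faces
               (e.g. photos of a person suspected to be in the training set).
    gan_emb:   (N, d) L2-normalized embeddings of faces sampled from the GAN.
    A small distance means some generated face nearly reproduces the probe
    identity, suggesting that identity was (over-)represented in training.
    """
    sims = probe_emb @ gan_emb.T           # (M, N) cosine similarities
    return 1.0 - sims.max(axis=1)          # distance to the closest GAN face

def predict_identity_membership(probe_emb, gan_emb, threshold=0.35):
    # The threshold is illustrative; it would be calibrated on identities
    # known to be absent from the training set.
    return identity_membership_scores(probe_emb, gan_emb) < threshold

# Toy usage with random unit vectors standing in for real embeddings.
rng = np.random.default_rng(0)
unit = lambda x: x / np.linalg.norm(x, axis=1, keepdims=True)
gan_emb = unit(rng.normal(size=(1000, 512)))
probe_emb = unit(rng.normal(size=(5, 512)))
print(predict_identity_membership(probe_emb, gan_emb))
```

A low score for an identity whose photos were never literally training images is exactly the kind of leakage the abstract describes: the generated face is a new sample, but the identity is not.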
Related papers
- MakeupAttack: Feature Space Black-box Backdoor Attack on Face Recognition via Makeup Transfer [6.6251662169603005]
We propose a novel feature-space backdoor attack against face recognition via makeup transfer, dubbed MakeupAttack.
In our attack, we design an iterative training paradigm to learn the subtle features of the proposed makeup-style trigger.
The results demonstrate that our proposed attack method can bypass existing state-of-the-art defenses while maintaining effectiveness, robustness, naturalness, and stealthiness, without compromising model performance.
arXiv Detail & Related papers (2024-08-22T11:39:36Z)
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods train directly on centralized data.
The paper proposes a novel federated face forgery detection learning scheme with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- Privacy Re-identification Attacks on Tabular GANs [0.0]
Generative models are subject to overfitting and thus may potentially leak sensitive information from the training data.
We investigate the privacy risks that can potentially arise from the use of generative adversarial networks (GANs) for creating synthetic datasets.
arXiv Detail & Related papers (2024-03-31T14:14:00Z)
- Black-Box Training Data Identification in GANs via Detector Networks [2.4554686192257424]
We study whether, given access to a trained GAN as well as fresh samples from the underlying distribution, it is possible for an attacker to efficiently identify if a given point is a member of the GAN's training data.
This is of interest both for copyright, where a user may want to determine whether their copyrighted data has been used to train a GAN, and for data privacy, where the ability to detect training-set membership is known as a membership inference attack.
We introduce a suite of membership inference attacks against GANs in the black-box setting and evaluate our attacks.
arXiv Detail & Related papers (2023-10-18T15:53:20Z)
- Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor attacks are an emerging yet serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method builds a substitute model for face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
- An Ensemble Model for Face Liveness Detection [2.322052136673525]
We present a passive method to detect face presentation attacks using an ensemble deep learning technique.
We propose an ensemble method where multiple features of the face and background regions are learned to predict whether the user is bona fide or an attacker.
arXiv Detail & Related papers (2022-01-19T12:43:39Z)
- Real-World Adversarial Examples involving Makeup Application [58.731070632586594]
We propose a physical adversarial attack that uses full-face makeup.
Our attack can effectively overcome manual errors in makeup application, such as color and position-related errors.
arXiv Detail & Related papers (2021-09-04T05:29:28Z)
- Hidden Backdoor Attack against Semantic Segmentation Models [60.0327238844584]
The backdoor attack intends to embed hidden backdoors in deep neural networks (DNNs) by poisoning training data.
We propose a novel attack paradigm, the fine-grained attack, where we treat the target label at the object level instead of the image level.
Experiments show that the proposed methods can successfully attack semantic segmentation models by poisoning only a small proportion of training data.
arXiv Detail & Related papers (2021-03-06T05:50:29Z)
- Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries [74.59376038272661]
We introduce the sampling attack, a novel membership inference technique that, unlike standard membership adversaries, works under the severe restriction of having no access to the victim model's scores (a minimal sketch of this label-only setting appears after this list).
We show that a victim model that publishes only labels is still susceptible to sampling attacks, and that the adversary can recover up to 100% of the attack's performance.
For defense, we choose differential privacy in the form of gradient perturbation during the training of the victim model as well as output perturbation at prediction time.
arXiv Detail & Related papers (2020-09-01T12:54:54Z)
- Defending against GAN-based Deepfake Attacks via Transformation-aware Adversarial Faces [36.87244915810356]
Deepfake represents a category of face-swapping attacks that leverage machine learning models.
We propose to use novel transformation-aware adversarially perturbed faces as a defense against Deepfake attacks.
We also propose to use an ensemble-based approach to enhance the defense robustness against GAN-based Deepfake variants.
arXiv Detail & Related papers (2020-06-12T18:51:57Z)
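As a companion to the Sampling Attacks entry above, here is a minimal sketch of a label-only membership score obtained by repeatedly querying a victim model on perturbed copies of a candidate input and measuring label stability. The `query_label` interface, the toy linear victim, the noise scale, and the reliance on a calibrated threshold are all illustrative assumptions rather than that paper's exact procedure.

```python
import numpy as np

def sampling_attack_score(x, query_label, n_queries=100, sigma=0.05, rng=None):
    """Label-only membership score for a candidate input x.

    The victim model is queried on randomly perturbed copies of x and the
    stability of the predicted label is measured. Training members tend to lie
    farther from the decision boundary, so a higher stability score is read as
    weak evidence of membership. Sketch only, not the paper's exact procedure.
    """
    rng = rng or np.random.default_rng()
    base_label = query_label(x)
    noisy = x[None, :] + sigma * rng.normal(size=(n_queries, x.shape[0]))
    agreements = sum(query_label(z) == base_label for z in noisy)
    return agreements / n_queries

# Toy victim: a linear classifier that exposes only hard labels, no scores.
w = np.array([1.0, -2.0, 0.5])
victim = lambda z: int(z @ w > 0.0)

x_candidate = np.array([0.9, -0.4, 0.1])
print("membership score:",
      sampling_attack_score(x_candidate, victim, rng=np.random.default_rng(0)))
# A score near 1.0 (label stable under perturbation) would be compared against
# a threshold calibrated on inputs known not to be training members.
```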
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.