3D-Aware Adversarial Makeup Generation for Facial Privacy Protection
- URL: http://arxiv.org/abs/2306.14640v1
- Date: Mon, 26 Jun 2023 12:27:59 GMT
- Title: 3D-Aware Adversarial Makeup Generation for Facial Privacy Protection
- Authors: Yueming Lyu and Yue Jiang and Ziwen He and Bo Peng and Yunfan Liu and
Jing Dong
- Abstract summary: The paper proposes a 3D-Aware Adversarial Makeup Generation GAN (3DAM-GAN) for facial privacy protection.
A UV-based generator consisting of a novel Makeup Adjustment Module (MAM) and Makeup Transfer Module (MTM) is designed to render realistic and robust makeup.
Experimental results on several benchmark datasets demonstrate that 3DAM-GAN can effectively protect faces against various FR models.
- Score: 23.915259014651337
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The privacy and security of face data on social media are facing
unprecedented challenges as it is vulnerable to unauthorized access and
identification. A common practice for solving this problem is to modify the
original data so that it could be protected from being recognized by malicious
face recognition (FR) systems. However, such "adversarial examples" obtained
by existing methods usually suffer from low transferability and poor image
quality, which severely limits the application of these methods in real-world
scenarios. In this paper, we propose a 3D-Aware Adversarial Makeup Generation
GAN (3DAM-GAN), which aims to improve the quality and transferability of
synthetic makeup for identity information concealing. Specifically, a UV-based
generator consisting of a novel Makeup Adjustment Module (MAM) and Makeup
Transfer Module (MTM) is designed to render realistic and robust makeup with
the aid of symmetric characteristics of human faces. Moreover, a makeup attack
mechanism with an ensemble training strategy is proposed to boost the
transferability to black-box FR models. Extensive experimental results on
several benchmark datasets demonstrate that 3DAM-GAN can effectively protect faces
against various FR models, including both publicly available state-of-the-art
models and commercial face verification APIs, such as Face++, Baidu and Aliyun.
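For intuition, the following is a minimal sketch of the kind of ensemble-based, identity-concealing objective the abstract describes; the surrogate FR models, the generator interface G(source, reference), and the loss weighting are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch (PyTorch) of an ensemble identity-concealing loss in the spirit
# of the makeup attack mechanism described above. All interfaces and
# hyperparameters are placeholder assumptions, not the paper's actual code.
import torch
import torch.nn.functional as F

def ensemble_identity_loss(protected_img, source_img, fr_models):
    """Average cosine similarity between the protected face and the original face
    over an ensemble of surrogate FR models; minimizing it pushes the protected
    face away from the true identity in every surrogate's embedding space."""
    sims = []
    for fr in fr_models:
        emb_protected = F.normalize(fr(protected_img), dim=-1)
        emb_source = F.normalize(fr(source_img), dim=-1).detach()
        sims.append((emb_protected * emb_source).sum(dim=-1).mean())
    return torch.stack(sims).mean()

def training_step(G, source_img, ref_makeup_img, fr_models, lambda_adv=0.05):
    """Hypothetical generator update: render makeup onto the source face, then add
    the adversarial term to whatever makeup-quality losses the generator uses."""
    protected_img = G(source_img, ref_makeup_img)   # makeup-transferred face
    adv_loss = ensemble_identity_loss(protected_img, source_img, fr_models)
    # ... realism / makeup-transfer losses would be added here ...
    return lambda_adv * adv_loss
```

Averaging the similarity over several surrogate FR models is what the ensemble training strategy relies on to improve transfer to unseen black-box systems.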
Related papers
- iFADIT: Invertible Face Anonymization via Disentangled Identity Transform [51.123936665445356]
Face anonymization aims to conceal the visual identity of a face to safeguard the individual's privacy.
This paper proposes a novel framework named iFADIT, an acronym for Invertible Face Anonymization via Disentangled Identity Transform.
arXiv Detail & Related papers (2025-01-08T10:08:09Z)
- ErasableMask: A Robust and Erasable Privacy Protection Scheme against Black-box Face Recognition Models [14.144010156851273]
We propose ErasableMask, a robust and erasable privacy protection scheme against black-box FR models.
Specifically, ErasableMask introduces a novel meta-auxiliary attack, which boosts black-box transferability.
It also offers a perturbation erasion mechanism that supports the erasion of semantic perturbations in protected face without degrading image quality.
arXiv Detail & Related papers (2024-12-22T14:30:26Z)
- Transferable Adversarial Facial Images for Privacy Protection [15.211743719312613]
We present a novel face privacy protection scheme with improved transferability while maintaining high visual quality.
We first exploit global adversarial latent search to traverse the latent space of the generative model.
We then introduce a key landmark regularization module to preserve the visual identity information.
arXiv Detail & Related papers (2024-07-18T02:16:11Z)
- DiffAM: Diffusion-based Adversarial Makeup Transfer for Facial Privacy Protection [60.73609509756533]
DiffAM is a novel approach to generate high-quality protected face images with adversarial makeup transferred from reference images.
Experiments demonstrate that DiffAM achieves higher visual quality and attack success rates, with a gain of 12.98% under the black-box setting.
arXiv Detail & Related papers (2024-05-16T08:05:36Z)
- Attribute-Guided Encryption with Facial Texture Masking [64.77548539959501]
We propose Attribute Guided Encryption with Facial Texture Masking to protect users from unauthorized facial recognition systems.
Our proposed method produces more natural-looking encrypted images than state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T23:50:43Z)
- Semantic-aware One-shot Face Re-enactment with Dense Correspondence Estimation [100.60938767993088]
One-shot face re-enactment is a challenging task due to the identity mismatch between source and driving faces.
This paper proposes to use 3D Morphable Model (3DMM) for explicit facial semantic decomposition and identity disentanglement.
arXiv Detail & Related papers (2022-11-23T03:02:34Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method builds a substitute model for face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
- Protecting Facial Privacy: Generating Adversarial Identity Masks via Style-robust Makeup Transfer [24.25863892897547]
Adversarial Makeup Transfer GAN (AMT-GAN) is a novel face protection method aimed at constructing adversarial face images.
In this paper, we introduce a new regularization module along with a joint training strategy to reconcile the conflicts between the adversarial noises and the cycle consistency loss in makeup transfer.
arXiv Detail & Related papers (2022-03-07T03:56:17Z)
- GMFIM: A Generative Mask-guided Facial Image Manipulation Model for Privacy Preservation [0.7734726150561088]
We propose a Generative Mask-guided Face Image Manipulation model based on GANs to apply imperceptible editing to the input face image.
Our model can achieve better performance against automated face recognition systems in comparison to the state-of-the-art methods.
arXiv Detail & Related papers (2022-01-10T14:09:14Z)
- Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides a 95%+ protection success rate against various state-of-the-art face recognition models.
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
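Several of the methods above (e.g., TIP-IM) construct adversarial identity masks by iteratively attacking a surrogate FR model. The sketch below illustrates that general idea under assumed hyperparameters (step size, perturbation bound, iteration count); it is not any specific paper's implementation.

```python
# Generic, illustrative PGD-style loop for adversarial identity-mask generation,
# in the spirit of the iterative methods listed above. The surrogate model and
# hyperparameters are assumptions made for the sake of the example.
import torch
import torch.nn.functional as F

def generate_identity_mask(face, target_face, fr_model, eps=8/255, alpha=1/255, steps=40):
    """Perturb `face` within an L-infinity ball of radius `eps` so that the surrogate
    FR model embeds it close to `target_face`, i.e. away from the true identity."""
    mask = torch.zeros_like(face, requires_grad=True)
    target_emb = F.normalize(fr_model(target_face), dim=-1).detach()
    for _ in range(steps):
        emb = F.normalize(fr_model((face + mask).clamp(0, 1)), dim=-1)
        loss = 1 - (emb * target_emb).sum(dim=-1).mean()   # distance to the target identity
        loss.backward()
        with torch.no_grad():
            mask -= alpha * mask.grad.sign()   # step toward the target identity
            mask.clamp_(-eps, eps)             # keep the mask visually imperceptible
        mask.grad.zero_()
    return (face + mask).clamp(0, 1).detach()
```

In practice, the listed methods add naturalness or makeup-quality constraints and attack ensembles of surrogate models to improve transferability to black-box FR systems.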
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.