Attribute-Guided Encryption with Facial Texture Masking
- URL: http://arxiv.org/abs/2305.13548v1
- Date: Mon, 22 May 2023 23:50:43 GMT
- Title: Attribute-Guided Encryption with Facial Texture Masking
- Authors: Chun Pong Lau, Jiang Liu, Rama Chellappa
- Abstract summary: We propose Attribute Guided Encryption with Facial Texture Masking to protect users from unauthorized facial recognition systems.
Our proposed method produces more natural-looking encrypted images than state-of-the-art methods.
- Score: 64.77548539959501
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Increasingly pervasive facial recognition (FR) systems raise serious
concerns about personal privacy, especially for the billions of users who have
publicly shared their photos on social media. Several attempts have been made
to protect individuals from unauthorized FR systems by using adversarial
attacks to generate encrypted face images that cannot be identified by those
systems. However, existing methods suffer from poor visual quality or low
attack success rates, which limits their usability in practice.
In this paper, we propose Attribute Guided Encryption with Facial Texture
Masking (AGE-FTM) that performs a dual manifold adversarial attack on FR
systems to achieve both good visual quality and high black box attack success
rates. In particular, AGE-FTM utilizes a high fidelity generative adversarial
network (GAN) to generate natural on-manifold adversarial samples by modifying
facial attributes, and performs the facial texture masking attack to generate
imperceptible off-manifold adversarial samples. Extensive experiments on the
CelebA-HQ dataset demonstrate that our proposed method produces more
natural-looking encrypted images than state-of-the-art methods while achieving
competitive attack performance. We further evaluate the effectiveness of
AGE-FTM in the real world using a commercial FR API and validate its usefulness
in practice through a user study.
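The abstract's two-stage description lends itself to a rough illustration. The following PyTorch sketch shows the general dual-manifold idea, an on-manifold attribute edit followed by an off-manifold texture-masked perturbation, assuming hypothetical `attribute_gan`, `fr_encoder`, and `texture_mask` components; it is a conceptual sketch under those assumptions, not the authors' implementation.

```python
# Conceptual sketch of a dual-manifold face encryption attack in the spirit
# of AGE-FTM (NOT the authors' code). `attribute_gan(x, attr)` is assumed to
# be a pretrained attribute-editing GAN, `fr_encoder(x)` a face-recognition
# embedding network, and `texture_mask(x)` a 0/1 mask over high-texture
# facial regions where small perturbations are hard to notice.
import torch
import torch.nn.functional as F

def dual_manifold_encrypt(x, attribute_gan, fr_encoder, texture_mask,
                          steps=50, lr=0.01, eps=8 / 255):
    """Return an encrypted face whose FR embedding drifts away from x's."""
    with torch.no_grad():
        orig_emb = F.normalize(fr_encoder(x), dim=-1)
        mask = texture_mask(x)

    # Stage 1 (on-manifold): optimize attribute codes so the GAN edit stays a
    # natural face while its FR embedding moves away from the original.
    attr = torch.zeros(x.size(0), attribute_gan.num_attributes,
                       requires_grad=True)
    opt = torch.optim.Adam([attr], lr=lr)
    for _ in range(steps):
        x_attr = attribute_gan(x, torch.tanh(attr))           # bounded edits
        emb = F.normalize(fr_encoder(x_attr), dim=-1)
        loss = F.cosine_similarity(emb, orig_emb).mean()      # minimize match
        opt.zero_grad()
        loss.backward()
        opt.step()
    x_attr = attribute_gan(x, torch.tanh(attr)).detach()

    # Stage 2 (off-manifold): small PGD-style perturbation applied only
    # inside the facial texture mask, so it stays imperceptible.
    delta = torch.zeros_like(x_attr, requires_grad=True)
    for _ in range(steps):
        emb = F.normalize(fr_encoder(x_attr + mask * delta), dim=-1)
        F.cosine_similarity(emb, orig_emb).mean().backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.grad.zero_()

    return (x_attr + mask * delta).clamp(0, 1)
```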
Related papers
- Transferable Adversarial Facial Images for Privacy Protection [15.211743719312613]
We present a novel face privacy protection scheme with improved transferability while maintaining high visual quality.
We first exploit global adversarial latent search to traverse the latent space of the generative model.
We then introduce a key landmark regularization module to preserve the visual identity information.
arXiv Detail & Related papers (2024-07-18T02:16:11Z) - Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z) - DiffAM: Diffusion-based Adversarial Makeup Transfer for Facial Privacy Protection [60.73609509756533]
DiffAM is a novel approach to generate high-quality protected face images with adversarial makeup transferred from reference images.
Experiments demonstrate that DiffAM achieves higher visual quality and attack success rates, with a gain of 12.98% under the black-box setting.
arXiv Detail & Related papers (2024-05-16T08:05:36Z) - Face Encryption via Frequency-Restricted Identity-Agnostic Attacks [25.198662208981467]
Malicious collectors use deep face recognition systems to easily steal biometric information.
We propose a frequency-restricted identity-agnostic (FRIA) framework to encrypt face images against unauthorized face recognition.
arXiv Detail & Related papers (2023-08-11T07:38:46Z) - DiffProtect: Generate Adversarial Examples with Diffusion Models for
Facial Privacy Protection [64.77548539959501]
DiffProtect produces more natural-looking encrypted images than state-of-the-art methods.
It achieves significantly higher attack success rates, e.g., 24.5% and 25.1% absolute improvements on the CelebA-HQ and FFHQ datasets.
arXiv Detail & Related papers (2023-05-23T02:45:49Z) - Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method is built on a substitute model pursuing face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z) - Protecting Facial Privacy: Generating Adversarial Identity Masks via
Style-robust Makeup Transfer [24.25863892897547]
Adversarial Makeup Transfer GAN (AMT-GAN) is a novel face protection method aimed at constructing adversarial face images.
In this paper, we introduce a new regularization module along with a joint training strategy to reconcile the conflicts between the adversarial noises and the cycle consistency loss in makeup transfer.
arXiv Detail & Related papers (2022-03-07T03:56:17Z) - Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides a 95%+ protection success rate against various state-of-the-art face recognition models; a sketch of how such protection rates are measured appears after this list.
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
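Several entries above report attack or protection success rates (e.g., 12.98%, 24.5%/25.1%, and 95%+). As a minimal sketch of how such a rate is typically measured, assuming a hypothetical `fr_encoder` and an illustrative cosine-similarity matching threshold:

```python
# Illustrative evaluation sketch (not taken from any paper above): a
# protected image counts as a success if the FR model no longer matches it
# to the original identity. `fr_encoder` and the 0.4 threshold are
# hypothetical placeholders; real systems use model-specific thresholds.
import torch
import torch.nn.functional as F

@torch.no_grad()
def protection_success_rate(originals, protected, fr_encoder, threshold=0.4):
    """Fraction of protected faces whose cosine similarity to the original
    face embedding falls below the matching threshold."""
    emb_orig = F.normalize(fr_encoder(originals), dim=-1)
    emb_prot = F.normalize(fr_encoder(protected), dim=-1)
    sim = (emb_orig * emb_prot).sum(dim=-1)   # cosine similarity per image
    return (sim < threshold).float().mean().item()
```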
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.