Enhancing Facial Privacy Protection via Weakening Diffusion Purification
- URL: http://arxiv.org/abs/2503.10350v1
- Date: Thu, 13 Mar 2025 13:27:53 GMT
- Title: Enhancing Facial Privacy Protection via Weakening Diffusion Purification
- Authors: Ali Salar, Qing Liu, Yingli Tian, Guoying Zhao
- Abstract summary: Social media has led to the widespread sharing of individual portrait images, which pose serious privacy risks. Recent methods employ diffusion models to generate adversarial face images for privacy protection. We propose learning unconditional embeddings to increase the learning capacity for adversarial modifications. We integrate an identity-preserving structure to maintain structural consistency between the original and generated images.
- Score: 36.33027625681024
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rapid growth of social media has led to the widespread sharing of individual portrait images, which pose serious privacy risks due to the capabilities of automatic face recognition (AFR) systems for mass surveillance. Hence, protecting facial privacy against unauthorized AFR systems is essential. Inspired by the generation capability of the emerging diffusion models, recent methods employ diffusion models to generate adversarial face images for privacy protection. However, they suffer from the diffusion purification effect, leading to a low protection success rate (PSR). In this paper, we first propose learning unconditional embeddings to increase the learning capacity for adversarial modifications and then use them to guide the modification of the adversarial latent code to weaken the diffusion purification effect. Moreover, we integrate an identity-preserving structure to maintain structural consistency between the original and generated images, allowing human observers to recognize the generated image as having the same identity as the original. Extensive experiments conducted on two public datasets, i.e., CelebA-HQ and LADN, demonstrate the superiority of our approach. The protected faces generated by our method outperform those produced by existing facial privacy protection approaches in terms of transferability and natural appearance.
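The core idea the abstract describes, perturbing a face within a small budget so that an automatic face recognition (AFR) model no longer matches it while the image stays visually close to the original, can be sketched in miniature. The sketch below is a toy PGD-style illustration, not the paper's method: the "embedding network" is a stand-in random linear map, and the paper's diffusion latent guidance, unconditional embeddings, and identity-preserving structure are not modeled.

```python
import numpy as np

# Stand-in for a face-recognition embedding network: a fixed random linear
# map followed by L2 normalization. A real AFR model would replace this;
# only the adversarial optimization loop is being illustrated.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 256))

def embed(x):
    z = W @ x
    return z / np.linalg.norm(z)

def cosine(a, b):
    return float(a @ b)  # both inputs are unit vectors

def protect(x, steps=100, lr=0.025, eps=0.1):
    """PGD-style descent on cosine similarity to the clean embedding,
    under an L-infinity budget `eps` that keeps the protected input
    close to the original (identity preserved for human observers)."""
    e0 = embed(x)
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)  # random start
    for _ in range(steps):
        z = W @ x_adv
        n = np.linalg.norm(z)
        # analytic gradient of cos(embed(x_adv), e0) w.r.t. x_adv
        grad = (W.T @ e0) / n - (e0 @ z) * (W.T @ z) / n**3
        x_adv = x_adv - lr * np.sign(grad)        # step away from e0
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into budget
    return x_adv

x = rng.standard_normal(256)          # stand-in "image"
x_adv = protect(x)
sim = cosine(embed(x_adv), embed(x))  # similarity drops below 1.0
```

The diffusion-purification problem the paper targets arises because denoising a perturbed image tends to wash out exactly this kind of adversarial signal; the paper's contribution is guiding the modification so it survives purification, which this pixel-space sketch does not attempt.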
Related papers
- Diffusion-based Adversarial Identity Manipulation for Facial Privacy Protection [14.797807196805607]
Face recognition has led to serious privacy concerns due to potential unauthorized surveillance and user tracking on social networks.
Existing methods for enhancing privacy fail to generate natural face images that can protect facial privacy.
We propose DiffAIM to generate natural and highly transferable adversarial faces against malicious FR systems.
arXiv Detail & Related papers (2025-04-30T13:49:59Z)
- Diffusion-Driven Universal Model Inversion Attack for Face Recognition [17.70133779192382]
Face recognition systems convert raw images into embeddings, traditionally considered privacy-preserving.
Model inversion attacks pose a significant privacy threat by reconstructing private facial images.
We propose DiffUMI, a training-free diffusion-driven universal model inversion attack for face recognition systems.
arXiv Detail & Related papers (2025-04-25T01:53:27Z)
- Transferable Adversarial Facial Images for Privacy Protection [15.211743719312613]
We present a novel face privacy protection scheme with improved transferability while maintaining high visual quality.
We first exploit global adversarial latent search to traverse the latent space of the generative model.
We then introduce a key landmark regularization module to preserve the visual identity information.
arXiv Detail & Related papers (2024-07-18T02:16:11Z)
- DiffAM: Diffusion-based Adversarial Makeup Transfer for Facial Privacy Protection [60.73609509756533]
DiffAM is a novel approach to generate high-quality protected face images with adversarial makeup transferred from reference images.
Experiments demonstrate that DiffAM achieves higher visual quality and attack success rates, with a gain of 12.98% under the black-box setting.
arXiv Detail & Related papers (2024-05-16T08:05:36Z)
- Adv-Diffusion: Imperceptible Adversarial Face Identity Attack via Latent Diffusion Model [61.53213964333474]
We propose a unified framework Adv-Diffusion that can generate imperceptible adversarial identity perturbations in the latent space but not the raw pixel space.
Specifically, we propose the identity-sensitive conditioned diffusion generative model to generate semantic perturbations in the surroundings.
The designed adaptive strength-based adversarial perturbation algorithm can ensure both attack transferability and stealthiness.
arXiv Detail & Related papers (2023-12-18T15:25:23Z)
- Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z)
- DiffProtect: Generate Adversarial Examples with Diffusion Models for Facial Privacy Protection [64.77548539959501]
DiffProtect produces more natural-looking encrypted images than state-of-the-art methods.
It achieves significantly higher attack success rates, e.g., 24.5% and 25.1% absolute improvements on the CelebA-HQ and FFHQ datasets.
arXiv Detail & Related papers (2023-05-23T02:45:49Z)
- Attribute-Guided Encryption with Facial Texture Masking [64.77548539959501]
We propose Attribute Guided Encryption with Facial Texture Masking to protect users from unauthorized facial recognition systems.
Our proposed method produces more natural-looking encrypted images than state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T23:50:43Z)
- Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides 95%+ protection success rate against various state-of-the-art face recognition models.
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
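Several entries above report a protection success rate (PSR) or attack success rate, e.g. TIP-IM's 95%+ PSR. A minimal sketch of how PSR is typically computed in the dodging setting, assuming a cosine-similarity FR verifier; the 0.3 match threshold below is purely illustrative, not taken from any of the papers:

```python
import numpy as np

def protection_success_rate(similarities, threshold=0.3):
    """Fraction of protected faces the FR system fails to match back to
    the enrolled identity, i.e. whose similarity to the original falls
    below the match threshold. `threshold` is model-specific; 0.3 is an
    illustrative placeholder."""
    sims = np.asarray(similarities, dtype=float)
    return float(np.mean(sims < threshold))

# Example: two of four protected faces evade the verifier
psr = protection_success_rate([0.12, 0.51, 0.27, 0.44])  # psr == 0.5
```

Reported PSR values are only comparable across methods when the same FR models and thresholds are used, which is why the papers above evaluate against a common set of state-of-the-art recognizers.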
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.