Diffusion-Driven Universal Model Inversion Attack for Face Recognition
- URL: http://arxiv.org/abs/2504.18015v1
- Date: Fri, 25 Apr 2025 01:53:27 GMT
- Title: Diffusion-Driven Universal Model Inversion Attack for Face Recognition
- Authors: Hanrui Wang, Shuo Wang, Chun-Shien Lu, Isao Echizen
- Abstract summary: Face recognition systems convert raw images into embeddings, traditionally considered privacy-preserving. Model inversion attacks pose a significant privacy threat by reconstructing private facial images. We propose DiffUMI, a training-free diffusion-driven universal model inversion attack for face recognition systems.
- Score: 17.70133779192382
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Facial recognition technology poses significant privacy risks, as it relies on biometric data that is inherently sensitive and immutable if compromised. To mitigate these concerns, face recognition systems convert raw images into embeddings, traditionally considered privacy-preserving. However, model inversion attacks pose a significant privacy threat by reconstructing these private facial images, making them a crucial tool for evaluating the privacy risks of face recognition systems. Existing methods usually require training individual generators for each target model, a computationally expensive process. In this paper, we propose DiffUMI, a training-free diffusion-driven universal model inversion attack for face recognition systems. DiffUMI is the first approach to apply a diffusion model for unconditional image generation in model inversion. Unlike other methods, DiffUMI is universal, eliminating the need for training target-specific generators. It operates within a fixed framework and pretrained diffusion model while seamlessly adapting to diverse target identities and models. DiffUMI breaches privacy-preserving face recognition systems with state-of-the-art success, demonstrating that an unconditional diffusion model, coupled with optimized adversarial search, enables efficient and high-fidelity facial reconstruction. Additionally, we introduce a novel application of out-of-domain detection (OODD), marking the first use of model inversion to distinguish non-face inputs from face inputs based solely on embeddings.
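The abstract describes the attack only at a high level: search the latent space of a pretrained unconditional diffusion model until the generated face's embedding matches a target embedding. The sketch below illustrates that idea under stated assumptions; it is not the authors' implementation. The callables `generator` (a differentiable, pretrained unconditional diffusion sampler) and `embedder` (the target face recognition model), the latent shape, the optimizer settings, and the cosine-similarity loss are all illustrative assumptions.

```python
# Minimal sketch of a diffusion-driven model-inversion loop in the spirit of
# DiffUMI; NOT the authors' code. Assumes two user-supplied callables:
#   generator(z) -> image : pretrained unconditional diffusion sampler made
#                           differentiable end-to-end (e.g. a few-step sampler)
#   embedder(image) -> emb: the target face recognition model under attack
import torch
import torch.nn.functional as F

def invert_embedding(target_emb, generator, embedder,
                     latent_shape=(1, 4, 64, 64), steps=200, lr=0.05):
    """Gradient search over the latent for a face whose embedding matches
    `target_emb` in cosine similarity; no generator training involved."""
    z = torch.randn(latent_shape, requires_grad=True)   # adversarial latent
    opt = torch.optim.Adam([z], lr=lr)
    target = F.normalize(target_emb.detach(), dim=-1)
    for _ in range(steps):
        opt.zero_grad()
        img = generator(z)                               # unconditional synthesis
        emb = F.normalize(embedder(img), dim=-1)
        loss = 1.0 - (emb * target).sum(dim=-1).mean()   # 1 - cosine similarity
        loss.backward()
        opt.step()
    with torch.no_grad():
        return generator(z), 1.0 - loss.item()           # image, final similarity
```

Under this framing, the out-of-domain detection (OODD) application mentioned in the abstract could plausibly reuse the same search: if no latent yields a high-similarity reconstruction, the query embedding is unlikely to come from a face. The actual OODD criterion is defined in the paper.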
Related papers
- Enhancing Facial Privacy Protection via Weakening Diffusion Purification [36.33027625681024]
Social media has led to the widespread sharing of individual portrait images, which pose serious privacy risks. Recent methods employ diffusion models to generate adversarial face images for privacy protection. We propose learning unconditional embeddings to increase the learning capacity for adversarial modifications. We integrate an identity-preserving structure to maintain structural consistency between the original and generated images.
arXiv Detail & Related papers (2025-03-13T13:27:53Z)
- iFADIT: Invertible Face Anonymization via Disentangled Identity Transform [51.123936665445356]
Face anonymization aims to conceal the visual identity of a face to safeguard the individual's privacy. This paper proposes a novel framework named iFADIT, an acronym for Invertible Face Anonymization via Disentangled Identity Transform.
arXiv Detail & Related papers (2025-01-08T10:08:09Z)
- Local Features Meet Stochastic Anonymization: Revolutionizing Privacy-Preserving Face Recognition for Black-Box Models [54.88064975480573]
The task of privacy-preserving face recognition (PPFR) currently faces two major unsolved challenges. By disrupting global features while enhancing local features, we achieve effective recognition even in black-box environments. Our method achieves an average recognition accuracy of 94.21% on black-box models, outperforming existing methods in both privacy protection and anti-reconstruction capabilities.
arXiv Detail & Related papers (2024-12-11T10:49:15Z)
- OSDFace: One-Step Diffusion Model for Face Restoration [72.5045389847792]
Diffusion models have demonstrated impressive performance in face restoration. We propose OSDFace, a novel one-step diffusion model for face restoration. Results demonstrate that OSDFace surpasses current state-of-the-art (SOTA) methods in both visual quality and quantitative metrics.
arXiv Detail & Related papers (2024-11-26T07:07:48Z)
- ID$^3$: Identity-Preserving-yet-Diversified Diffusion Models for Synthetic Face Recognition [60.15830516741776]
Synthetic face recognition (SFR) aims to generate datasets that mimic the distribution of real face data.
We introduce a diffusion-fueled SFR model termed ID$^3$.
ID$^3$ employs an ID-preserving loss to generate diverse yet identity-consistent facial appearances.
arXiv Detail & Related papers (2024-09-26T06:46:40Z)
- Privacy-Preserving Face Recognition in Hybrid Frequency-Color Domain [16.05230409730324]
A face image is a sensitive biometric attribute tied to the identity information of each user.
This paper proposes a hybrid frequency-color fusion approach to reduce the input dimensionality of face recognition.
It has around 2.6% to 4.2% higher accuracy than the state-of-the-art in the 1:N verification scenario.
arXiv Detail & Related papers (2024-01-24T11:27:32Z)
- Adv-Diffusion: Imperceptible Adversarial Face Identity Attack via Latent Diffusion Model [61.53213964333474]
We propose a unified framework, Adv-Diffusion, that can generate imperceptible adversarial identity perturbations in the latent space rather than the raw pixel space.
Specifically, we propose the identity-sensitive conditioned diffusion generative model to generate semantic perturbations in the surroundings.
The designed adaptive strength-based adversarial perturbation algorithm can ensure both attack transferability and stealthiness.
arXiv Detail & Related papers (2023-12-18T15:25:23Z)
- Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z)
- Controllable Inversion of Black-Box Face Recognition Models via Diffusion [8.620807177029892]
We tackle the task of inverting the latent space of pre-trained face recognition models without full model access.
We show that the conditional diffusion model loss naturally emerges and that we can effectively sample from the inverse distribution.
Our method is the first black-box face recognition model inversion method that offers intuitive control over the generation process.
arXiv Detail & Related papers (2023-03-23T03:02:09Z)
- Improving Transferability of Adversarial Patches on Face Recognition with Generative Models [43.51625789744288]
We evaluate the robustness of face recognition models against adversarial patches, focusing on their transferability.
We show that the gaps between the responses of substitute models and the target models dramatically decrease, exhibiting better transferability.
arXiv Detail & Related papers (2021-06-29T02:13:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.