DiffUMI: Training-Free Universal Model Inversion via Unconditional Diffusion for Face Recognition
- URL: http://arxiv.org/abs/2504.18015v2
- Date: Thu, 12 Jun 2025 03:15:16 GMT
- Title: DiffUMI: Training-Free Universal Model Inversion via Unconditional Diffusion for Face Recognition
- Authors: Hanrui Wang, Shuo Wang, Chun-Shien Lu, Isao Echizen
- Abstract summary: We introduce DiffUMI, a diffusion-based universal model inversion attack that requires no additional training. It surpasses state-of-the-art attacks by 15.5% and 9.82% in success rate on standard and privacy-preserving face recognition systems, respectively.
- Score: 17.70133779192382
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Face recognition technology presents serious privacy risks due to its reliance on sensitive and immutable biometric data. To address these concerns, such systems typically convert raw facial images into embeddings, which are traditionally viewed as privacy-preserving. However, model inversion attacks challenge this assumption by reconstructing private facial images from embeddings, highlighting a critical vulnerability in face recognition systems. Most existing inversion methods require training a separate generator for each target model, making them computationally intensive. In this work, we introduce DiffUMI, a diffusion-based universal model inversion attack that requires no additional training. DiffUMI is the first approach to successfully leverage unconditional face generation without relying on model-specific generators. It surpasses state-of-the-art attacks by 15.5% and 9.82% in success rate on standard and privacy-preserving face recognition systems, respectively. Furthermore, we propose a novel use of out-of-domain detection (OODD), demonstrating for the first time that model inversion can differentiate between facial and non-facial embeddings using only the embedding space.
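The abstract does not detail DiffUMI's algorithm, but the core idea it states is reconstructing a face from a leaked embedding by steering a frozen, unconditional face generator rather than training a model-specific one. Below is a minimal, illustrative sketch of that general embedding-guided latent search; `generator`, `target_model`, `target_emb`, and all hyperparameters are hypothetical placeholders and do not reproduce DiffUMI's actual procedure.

```python
import torch
import torch.nn.functional as F

def invert_embedding(generator, target_model, target_emb,
                     latent_dim=512, steps=200, lr=0.05, device="cpu"):
    """Illustrative sketch: search a frozen generator's latent space for a
    face whose embedding matches `target_emb` (shape [1, D]) under cosine
    similarity. Neither model is trained or modified."""
    # Start from a random latent; only z is optimized.
    z = torch.randn(1, latent_dim, device=device, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        face = generator(z)            # latent -> candidate face image
        emb = target_model(face)       # embedding of the candidate face
        # Drive the candidate embedding toward the leaked target embedding.
        loss = 1.0 - F.cosine_similarity(emb, target_emb).mean()
        loss.backward()
        optimizer.step()
    return generator(z).detach()
```

In this sketch the frozen `generator` stands in for an unconditional diffusion face sampler; the point it illustrates is that only the target embedding and gradient access to the encoder are needed, with no per-model generator training.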
Related papers
- Enhancing Facial Privacy Protection via Weakening Diffusion Purification [36.33027625681024]
Social media has led to the widespread sharing of individual portrait images, which pose serious privacy risks. Recent methods employ diffusion models to generate adversarial face images for privacy protection. We propose learning unconditional embeddings to increase the learning capacity for adversarial modifications. We integrate an identity-preserving structure to maintain structural consistency between the original and generated images.
arXiv Detail & Related papers (2025-03-13T13:27:53Z)
- iFADIT: Invertible Face Anonymization via Disentangled Identity Transform [51.123936665445356]
Face anonymization aims to conceal the visual identity of a face to safeguard the individual's privacy. This paper proposes a novel framework named iFADIT, an acronym for Invertible Face Anonymization via Disentangled Identity Transform.
arXiv Detail & Related papers (2025-01-08T10:08:09Z)
- Local Features Meet Stochastic Anonymization: Revolutionizing Privacy-Preserving Face Recognition for Black-Box Models [54.88064975480573]
The task of privacy-preserving face recognition (PPFR) currently faces two major unsolved challenges. By disrupting global features while enhancing local features, we achieve effective recognition even in black-box environments. Our method achieves an average recognition accuracy of 94.21% on black-box models, outperforming existing methods in both privacy protection and anti-reconstruction capabilities.
arXiv Detail & Related papers (2024-12-11T10:49:15Z)
- OSDFace: One-Step Diffusion Model for Face Restoration [72.5045389847792]
Diffusion models have demonstrated impressive performance in face restoration. We propose OSDFace, a novel one-step diffusion model for face restoration. Results demonstrate that OSDFace surpasses current state-of-the-art (SOTA) methods in both visual quality and quantitative metrics.
arXiv Detail & Related papers (2024-11-26T07:07:48Z)
- ID$^3$: Identity-Preserving-yet-Diversified Diffusion Models for Synthetic Face Recognition [60.15830516741776]
Synthetic face recognition (SFR) aims to generate datasets that mimic the distribution of real face data.
We introduce a diffusion-fueled SFR model termed ID$^3$.
ID$^3$ employs an ID-preserving loss to generate diverse yet identity-consistent facial appearances.
arXiv Detail & Related papers (2024-09-26T06:46:40Z)
- Transferable Adversarial Facial Images for Privacy Protection [15.211743719312613]
We present a novel face privacy protection scheme with improved transferability while maintaining high visual quality.
We first exploit global adversarial latent search to traverse the latent space of the generative model.
We then introduce a key landmark regularization module to preserve the visual identity information.
arXiv Detail & Related papers (2024-07-18T02:16:11Z)
- Privacy-Preserving Face Recognition in Hybrid Frequency-Color Domain [16.05230409730324]
A face image is a sensitive biometric attribute tied to the identity information of each user.
This paper proposes a hybrid frequency-color fusion approach to reduce the input dimensionality of face recognition.
It has around 2.6% to 4.2% higher accuracy than the state-of-the-art in the 1:N verification scenario.
arXiv Detail & Related papers (2024-01-24T11:27:32Z)
- Generalized Face Liveness Detection via De-fake Face Generator [52.23271636362843]
Previous Face Anti-spoofing (FAS) methods face the challenge of generalizing to unseen domains. We propose an Anomalous cue Guided FAS (AG-FAS) method, which can effectively leverage large-scale additional real faces. Our method achieves state-of-the-art results under cross-domain evaluations with unseen scenarios and unknown presentation attacks.
arXiv Detail & Related papers (2024-01-17T06:59:32Z)
- Adv-Diffusion: Imperceptible Adversarial Face Identity Attack via Latent Diffusion Model [61.53213964333474]
We propose a unified framework, Adv-Diffusion, that generates imperceptible adversarial identity perturbations in the latent space rather than the raw pixel space.
Specifically, we propose the identity-sensitive conditioned diffusion generative model to generate semantic perturbations in the surroundings.
The designed adaptive strength-based adversarial perturbation algorithm can ensure both attack transferability and stealthiness.
arXiv Detail & Related papers (2023-12-18T15:25:23Z)
- Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM-format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z)
- Controllable Inversion of Black-Box Face Recognition Models via Diffusion [8.620807177029892]
We tackle the task of inverting the latent space of pre-trained face recognition models without full model access.
We show that the conditional diffusion model loss naturally emerges and that we can effectively sample from the inverse distribution.
Our method is the first black-box face recognition model inversion method that offers intuitive control over the generation process.
arXiv Detail & Related papers (2023-03-23T03:02:09Z)
- Improving Transferability of Adversarial Patches on Face Recognition with Generative Models [43.51625789744288]
We evaluate the robustness of face recognition models using adversarial patches based on transferability.
We show that the gaps between the responses of substitute models and the target models dramatically decrease, exhibiting better transferability.
arXiv Detail & Related papers (2021-06-29T02:13:05Z)