Controllable Inversion of Black-Box Face Recognition Models via
Diffusion
- URL: http://arxiv.org/abs/2303.13006v2
- Date: Sat, 30 Sep 2023 15:29:50 GMT
- Title: Controllable Inversion of Black-Box Face Recognition Models via
Diffusion
- Authors: Manuel Kansy, Anton Raël, Graziana Mignone, Jacek Naruniec,
Christopher Schroers, Markus Gross, Romann M. Weber
- Abstract summary: We tackle the task of inverting the latent space of pre-trained face recognition models without full model access.
We show that the conditional diffusion model loss naturally emerges and that we can effectively sample from the inverse distribution.
Our method is the first black-box face recognition model inversion method that offers intuitive control over the generation process.
- Score: 8.620807177029892
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face recognition models embed a face image into a low-dimensional identity
vector containing abstract encodings of identity-specific facial features that
allow individuals to be distinguished from one another. We tackle the
challenging task of inverting the latent space of pre-trained face recognition
models without full model access (i.e., the black-box setting). A variety of
methods have been proposed in the literature for this task, but they have serious
shortcomings such as a lack of realistic outputs and strong requirements for
the data set and accessibility of the face recognition model. By analyzing the
black-box inversion problem, we show that the conditional diffusion model loss
naturally emerges and that we can effectively sample from the inverse
distribution even without an identity-specific loss. Our method, named identity
denoising diffusion probabilistic model (ID3PM), leverages the stochastic
nature of the denoising diffusion process to produce high-quality,
identity-preserving face images with various backgrounds, lighting, poses, and
expressions. We demonstrate state-of-the-art performance in terms of identity
preservation and diversity both qualitatively and quantitatively, and our
method is the first black-box face recognition model inversion method that
offers intuitive control over the generation process.
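The abstract's key claim is that the standard conditional DDPM noise-prediction loss suffices for black-box inversion: the identity vector is obtained by a single forward query of the recognition model and used purely as conditioning, so no gradients ever flow through the black box. A minimal sketch of that training setup is below; the network architecture, dimensions, and all names are illustrative assumptions, not the authors' ID3PM implementation.

```python
import torch
import torch.nn as nn

# Sketch (not the authors' code): an identity-conditioned denoiser trained
# with the standard DDPM noise-prediction loss. The identity vector comes
# from a black-box face recognition model, queried once per image, so no
# gradients ever flow through the recognition model itself.

class TinyConditionalDenoiser(nn.Module):
    """Toy epsilon-predictor conditioned on (timestep, identity vector)."""
    def __init__(self, img_dim=64, id_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim + id_dim + 1, 128),
            nn.SiLU(),
            nn.Linear(128, img_dim),
        )

    def forward(self, x_t, t, id_vec):
        # Concatenate noisy image, normalized timestep, identity embedding.
        t_feat = t.float().unsqueeze(-1) / 1000.0
        return self.net(torch.cat([x_t, t_feat, id_vec], dim=-1))

def ddpm_conditional_loss(model, x0, id_vec, alphas_cumprod):
    """Standard DDPM loss: predict the noise added at a random timestep."""
    b = x0.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,))
    a_bar = alphas_cumprod[t].unsqueeze(-1)
    noise = torch.randn_like(x0)
    # Closed-form forward diffusion: x_t = sqrt(a_bar)*x0 + sqrt(1-a_bar)*eps
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise
    pred = model(x_t, t, id_vec)
    return nn.functional.mse_loss(pred, noise)

# Usage with random stand-ins for images and black-box identity vectors.
torch.manual_seed(0)
model = TinyConditionalDenoiser()
alphas_cumprod = torch.linspace(0.999, 0.01, 1000)  # toy noise schedule
x0 = torch.randn(8, 64)        # flattened "face images"
id_vec = torch.randn(8, 16)    # embeddings from the black-box model
loss = ddpm_conditional_loss(model, x0, id_vec, alphas_cumprod)
print(float(loss))
```

Because the identity embedding enters only as a conditioning input, the recognition model can remain a pure inference API; sampling from the trained denoiser then draws diverse images consistent with a given identity vector.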
Related papers
- OSDFace: One-Step Diffusion Model for Face Restoration [72.5045389847792]
Diffusion models have demonstrated impressive performance in face restoration.
We propose OSDFace, a novel one-step diffusion model for face restoration.
Results demonstrate that OSDFace surpasses current state-of-the-art (SOTA) methods in both visual quality and quantitative metrics.
arXiv Detail & Related papers (2024-11-26T07:07:48Z)
- Face Anonymization Made Simple [44.24233169815565]
Current face anonymization techniques often depend on identity loss calculated by face recognition models, which can be inaccurate and unreliable.
In contrast, our approach uses diffusion models with only a reconstruction loss, eliminating the need for facial landmarks or masks.
Our model achieves state-of-the-art performance in three key areas: identity anonymization, facial preservation, and image quality.
arXiv Detail & Related papers (2024-11-01T17:45:21Z)
- ID$^3$: Identity-Preserving-yet-Diversified Diffusion Models for Synthetic Face Recognition [60.15830516741776]
Synthetic face recognition (SFR) aims to generate datasets that mimic the distribution of real face data.
We introduce a diffusion-fueled SFR model termed ID$^3$.
ID$^3$ employs an ID-preserving loss to generate diverse yet identity-consistent facial appearances.
arXiv Detail & Related papers (2024-09-26T06:46:40Z)
- Realistic and Efficient Face Swapping: A Unified Approach with Diffusion Models [69.50286698375386]
We propose a novel approach that better harnesses diffusion models for face-swapping.
We introduce a mask shuffling technique during inpainting training, which allows us to create a so-called universal model for swapping.
Our approach is relatively unified and is therefore resilient to errors in other off-the-shelf models.
arXiv Detail & Related papers (2024-09-11T13:43:53Z)
- DiffusionFace: Towards a Comprehensive Dataset for Diffusion-Based Face Forgery Analysis [71.40724659748787]
DiffusionFace is the first diffusion-based face forgery dataset.
It covers various forgery categories, including unconditional and text-guided facial image generation, image-to-image translation (Img2Img), inpainting, and diffusion-based facial exchange algorithms.
It provides essential metadata and a real-world internet-sourced forgery facial image dataset for evaluation.
arXiv Detail & Related papers (2024-03-27T11:32:44Z)
- Adv-Diffusion: Imperceptible Adversarial Face Identity Attack via Latent Diffusion Model [61.53213964333474]
We propose a unified framework, Adv-Diffusion, that generates imperceptible adversarial identity perturbations in the latent space rather than the raw pixel space.
Specifically, we propose the identity-sensitive conditioned diffusion generative model to generate semantic perturbations in the surroundings.
The designed adaptive strength-based adversarial perturbation algorithm can ensure both attack transferability and stealthiness.
arXiv Detail & Related papers (2023-12-18T15:25:23Z)
- PortraitBooth: A Versatile Portrait Model for Fast Identity-preserved Personalization [92.90392834835751]
PortraitBooth is designed for high efficiency, robust identity preservation, and expression-editable text-to-image generation.
PortraitBooth eliminates computational overhead and mitigates identity distortion.
It incorporates emotion-aware cross-attention control for diverse facial expressions in generated images.
arXiv Detail & Related papers (2023-12-11T13:03:29Z)
- Conditioning Diffusion Models via Attributes and Semantic Masks for Face Generation [1.104121146441257]
Deep generative models have shown impressive results in generating realistic images of faces.
GANs can generate high-quality, high-fidelity images when conditioned on semantic masks, but they still lack the ability to diversify their output.
We propose a multi-conditioning approach for diffusion models via cross-attention exploiting both attributes and semantic masks to generate high-quality and controllable face images.
arXiv Detail & Related papers (2023-06-01T17:16:37Z)
- DiffFace: Diffusion-based Face Swapping with Facial Guidance [24.50570533781642]
We propose the first diffusion-based face swapping framework, called DiffFace.
It is composed of training an ID-conditional DDPM, sampling with facial guidance, and target-preserving blending.
DiffFace offers advantages such as training stability, high fidelity, sample diversity, and controllability.
arXiv Detail & Related papers (2022-12-27T02:51:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.