A Key-Driven Framework for Identity-Preserving Face Anonymization
- URL: http://arxiv.org/abs/2409.03434v1
- Date: Thu, 5 Sep 2024 11:35:16 GMT
- Title: A Key-Driven Framework for Identity-Preserving Face Anonymization
- Authors: Miaomiao Wang, Guang Hua, Sheng Li, Guorui Feng
- Abstract summary: We propose a key-driven face anonymization and authentication recognition (KFAAR) framework to address the conflict between privacy and identifiability in virtual faces.
The KFAAR framework consists of a head posture-preserving virtual face generation (HPVFG) module and a key-controllable virtual face authentication (KVFA) module.
- Score: 23.464459834036035
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Virtual faces are crucial content in the metaverse. Recently, attempts have been made to generate virtual faces for privacy protection. However, these virtual faces either permanently remove the identifiable information or map the original identity to a virtual one, losing the original identity irrecoverably. In this study, we make a first attempt to resolve the conflict between privacy and identifiability in virtual faces by proposing a key-driven face anonymization and authentication recognition (KFAAR) framework. Concretely, the KFAAR framework consists of a head posture-preserving virtual face generation (HPVFG) module and a key-controllable virtual face authentication (KVFA) module. The HPVFG module uses a user key to project the latent vector of the original face into a virtual one, then maps the virtual vector to an extended encoding from which the virtual face is generated. A head posture and facial expression correction module further ensures that the virtual face retains the head posture and facial expression of the original face. During authentication, the KVFA module directly recognizes virtual faces given the correct user key, recovering the original identity without exposing the original face image. We also propose a multi-task learning objective to jointly train HPVFG and KVFA. Extensive experiments demonstrate the advantages of the proposed HPVFG and KVFA modules, which effectively achieve both facial anonymity and identifiability.
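The key-driven projection at the core of HPVFG and the keyed recognition in KVFA can be illustrated with a short, self-contained sketch. This is only an illustration of the idea described in the abstract, not the authors' implementation: the module names, dimensions, and the stand-in tensors for the encoder, generator, and feature extractor are all assumptions.

```python
# Minimal sketch of the key-driven idea described in the abstract (not the
# authors' released code); all names and dimensions below are hypothetical.
import torch
import torch.nn as nn

LATENT_DIM, KEY_DIM, ID_DIM = 512, 128, 512


class KeyProjector(nn.Module):
    """HPVFG-style projection: maps the latent vector of the original face,
    together with a user key, to a 'virtual' latent vector that would then be
    fed to a face generator (e.g. a StyleGAN-like decoder)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + KEY_DIM, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, LATENT_DIM),
        )

    def forward(self, latent, key):
        return self.net(torch.cat([latent, key], dim=-1))


class KeyedVerifier(nn.Module):
    """KVFA-style recognizer: given features of the virtual face and the
    correct user key, recovers an embedding of the original identity."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ID_DIM + KEY_DIM, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, ID_DIM),
        )

    def forward(self, virtual_features, key):
        return self.net(torch.cat([virtual_features, key], dim=-1))


# Toy usage with random tensors standing in for the real encoder, generator,
# feature extractor, and images.
latent_original = torch.randn(1, LATENT_DIM)   # latent code of the original face
user_key = torch.randn(1, KEY_DIM)             # secret key held by the user
latent_virtual = KeyProjector()(latent_original, user_key)
# virtual_face = generator(latent_virtual)     # anonymized face with preserved pose/expression
virtual_features = torch.randn(1, ID_DIM)      # features extracted from the virtual face
identity = KeyedVerifier()(virtual_features, user_key)  # original identity, key required
```

In the full framework these projectors would be trained jointly with the head posture and facial expression correction module under the multi-task objective mentioned above.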
Related papers
- OSDFace: One-Step Diffusion Model for Face Restoration [72.5045389847792]
Diffusion models have demonstrated impressive performance in face restoration.
We propose OSDFace, a novel one-step diffusion model for face restoration.
Results demonstrate that OSDFace surpasses current state-of-the-art (SOTA) methods in both visual quality and quantitative metrics.
arXiv Detail & Related papers (2024-11-26T07:07:48Z) - G2Face: High-Fidelity Reversible Face Anonymization via Generative and Geometric Priors [71.69161292330504]
Reversible face anonymization seeks to replace sensitive identity information in facial images with synthesized alternatives.
This paper introduces G²Face, which leverages both generative and geometric priors to enhance identity manipulation.
Our method outperforms existing state-of-the-art techniques in face anonymization and recovery, while preserving high data utility.
arXiv Detail & Related papers (2024-08-18T12:36:47Z) - VIGFace: Virtual Identity Generation Model for Face Image Synthesis [13.81887339529775]
We propose VIGFace, a novel framework capable of generating synthetic facial images.
It allows for creating virtual facial images without concerns about portrait rights.
It serves as an effective augmentation method by incorporating real existing images.
arXiv Detail & Related papers (2024-03-13T06:11:41Z) - Seeing is not Believing: An Identity Hider for Human Vision Privacy Protection [16.466136884030977]
We propose an effective identity hider for human vision protection.
It significantly changes the visual appearance to hide identity from human observers while still allowing face recognizers to identify the subject.
The proposed identity hider achieves excellent performance on privacy protection and identifiability preservation.
arXiv Detail & Related papers (2023-07-02T05:48:19Z) - FaceDancer: Pose- and Occlusion-Aware High Fidelity Face Swapping [62.38898610210771]
We present a new single-stage method for subject face swapping and identity transfer, named FaceDancer.
We have two major contributions: Adaptive Feature Fusion Attention (AFFA) and Interpreted Feature Similarity Regularization (IFSR).
arXiv Detail & Related papers (2022-10-19T11:31:38Z) - Graph-based Generative Face Anonymisation with Pose Preservation [49.18049578591058]
AnonyGAN is a GAN-based solution for face anonymisation.
It replaces the visual information corresponding to a source identity with a condition identity provided as any single image.
arXiv Detail & Related papers (2021-12-10T12:58:17Z) - On Generating Identifiable Virtual Faces [13.920942815539256]
Face anonymization with generative models has become increasingly prevalent, since such models sanitize private information.
In this paper, we formalize and tackle the problem of generating identifiable virtual face images.
We propose an Identifiable Virtual Face Generator (IVFG) to generate the virtual face images.
arXiv Detail & Related papers (2021-10-15T10:19:48Z) - Master Face Attacks on Face Recognition Systems [45.090037010778765]
Face authentication is now widely used, especially on mobile devices, in place of personal identification numbers or unlock patterns.
Previous work has proven the existence of master faces that match multiple enrolled templates in face recognition systems.
In this paper, we perform an extensive study on latent variable evolution (LVE), a method commonly used to generate master faces.
arXiv Detail & Related papers (2021-09-08T02:11:35Z) - Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides a 95%+ protection success rate against various state-of-the-art face recognition models (a generic sketch of this style of iterative attack appears after this list).
arXiv Detail & Related papers (2020-03-15T12:45:10Z) - DotFAN: A Domain-transferred Face Augmentation Network for Pose and Illumination Invariant Face Recognition [94.96686189033869]
We propose a 3D model-assisted domain-transferred face augmentation network (DotFAN)
DotFAN can generate a series of variants of an input face based on the knowledge distilled from existing rich face datasets collected from other domains.
Experiments show that DotFAN is beneficial for augmenting small face datasets to improve their within-class diversity.
arXiv Detail & Related papers (2020-02-23T08:16:34Z)
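The adversarial identity masks mentioned for TIP-IM above are produced by an iterative, gradient-based optimization. Below is a generic PGD-style sketch of that family of attacks, not the paper's exact objective; `face_embedder`, the target embedding, and all hyperparameters are placeholder assumptions.

```python
# Generic iterative adversarial-mask sketch (in the spirit of methods such as
# TIP-IM, but not their exact algorithm); everything here is a placeholder.
import torch
import torch.nn.functional as F


def adversarial_identity_mask(face, target_embedding, face_embedder,
                              eps=8 / 255, alpha=1 / 255, steps=50):
    """Iteratively builds a small perturbation ("mask") that pushes the face's
    embedding toward a chosen target identity, within an L-infinity budget."""
    mask = torch.zeros_like(face, requires_grad=True)
    for _ in range(steps):
        embedding = face_embedder(face + mask)
        # Maximize similarity to the target identity (impersonation-style objective).
        loss = F.cosine_similarity(embedding, target_embedding).mean()
        loss.backward()
        with torch.no_grad():
            mask += alpha * mask.grad.sign()              # gradient ascent step
            mask.clamp_(-eps, eps)                        # stay within the budget
            mask.copy_((face + mask).clamp(0, 1) - face)  # keep pixel values valid
        mask.grad = None
    return mask.detach()


# Hypothetical usage: `embedder` is any differentiable face-embedding network.
# protected = probe_image + adversarial_identity_mask(probe_image, target_embedding, embedder)
```

In practice such a mask is added to the probe image so that recognizers match it to the chosen target identity rather than the true one, which is what yields the protection success rates reported above.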