Face Reconstruction from Face Embeddings using Adapter to a Face Foundation Model
- URL: http://arxiv.org/abs/2411.03960v1
- Date: Wed, 06 Nov 2024 14:45:41 GMT
- Title: Face Reconstruction from Face Embeddings using Adapter to a Face Foundation Model
- Authors: Hatef Otroshi Shahreza, Anjith George, Sébastien Marcel
- Abstract summary: Face recognition systems extract embedding vectors from face images and use these embeddings to verify or identify individuals.
Face reconstruction attack (also known as template inversion) refers to reconstructing face images from face embeddings and using the reconstructed face image to enter a face recognition system.
We propose to use a face foundation model to reconstruct face images from the embeddings of a blackbox face recognition model.
- Score: 24.72209930285057
- License:
- Abstract: Face recognition systems extract embedding vectors from face images and use these embeddings to verify or identify individuals. A face reconstruction attack (also known as template inversion) refers to reconstructing face images from face embeddings and using the reconstructed face image to enter a face recognition system. In this paper, we propose to use a face foundation model to reconstruct face images from the embeddings of a blackbox face recognition model. The foundation model is trained with 42M images to generate face images from the facial embeddings of a fixed face recognition model. We propose to use an adapter to translate target embeddings into the embedding space of the foundation model. The generated images are evaluated on different face recognition models and different datasets, demonstrating the effectiveness of our method in translating embeddings of different face recognition models. We also evaluate the transferability of reconstructed face images when attacking different face recognition models. Our experimental results show that our reconstructed face images outperform previous reconstruction attacks against face recognition models.
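The abstract only names the components of the attack (a blackbox face recognition model, an adapter, and a face foundation model), so the following is a minimal PyTorch sketch of how such a pipeline could be wired together. The MLP adapter design, the 512-dimensional embedding sizes, the cosine-alignment training objective, and the `foundation_generator` / `blackbox_fr` callables are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of an embedding-adapter reconstruction attack, assuming an
# MLP adapter and a cosine-alignment training loss (not specified in the abstract).
import torch
import torch.nn as nn
import torch.nn.functional as F


class EmbeddingAdapter(nn.Module):
    """Maps a target FR embedding to the (assumed) embedding space of the
    face foundation model's generator."""

    def __init__(self, target_dim: int = 512, foundation_dim: int = 512,
                 hidden_dim: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(target_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, foundation_dim),
        )

    def forward(self, target_embedding: torch.Tensor) -> torch.Tensor:
        return self.net(target_embedding)


def train_step(adapter, optimizer, target_emb, foundation_emb):
    """One training step: align adapted embeddings with the embeddings the
    foundation model expects (cosine loss is an assumed objective)."""
    optimizer.zero_grad()
    adapted = adapter(target_emb)
    loss = 1.0 - F.cosine_similarity(adapted, foundation_emb, dim=-1).mean()
    loss.backward()
    optimizer.step()
    return loss.item()


@torch.no_grad()
def reconstruct_and_verify(adapter, foundation_generator, blackbox_fr,
                           target_emb, threshold: float = 0.4):
    """Generate a face from an adapted embedding, then check whether the
    reconstruction is accepted by the attacked recognition model."""
    adapted = adapter(target_emb)
    reconstructed_img = foundation_generator(adapted)   # (B, 3, H, W), hypothetical generator
    probe_emb = blackbox_fr(reconstructed_img)          # (B, target_dim), hypothetical FR model
    score = F.cosine_similarity(probe_emb, target_emb, dim=-1)
    return reconstructed_img, score, score > threshold
```

In this sketch only the adapter is trained; the foundation model's generator and the attacked recognition model remain frozen, which is consistent with the blackbox setting described in the abstract.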
Related papers
- OSDFace: One-Step Diffusion Model for Face Restoration [72.5045389847792]
Diffusion models have demonstrated impressive performance in face restoration.
We propose OSDFace, a novel one-step diffusion model for face restoration.
Results demonstrate that OSDFace surpasses current state-of-the-art (SOTA) methods in both visual quality and quantitative metrics.
arXiv Detail & Related papers (2024-11-26T07:07:48Z)
- Learning Representations for Masked Facial Recovery [8.124282476398843]
The pandemic of recent years has led to a dramatic increase in people wearing protective masks in public venues.
One way to address the problem is to resort to face recovery methods as a preprocessing step.
We introduce a method that is specific for the recovery of the face image from an image of the same individual wearing a mask.
arXiv Detail & Related papers (2022-12-28T22:22:15Z)
- Analyzing the Impact of Shape & Context on the Face Recognition Performance of Deep Networks [2.0099255688059907]
We analyze how changing the underlying 3D shape of the base identity in face images can distort their overall appearance.
Our experiments demonstrate the significance of facial shape in accurate face matching and underpin the importance of contextual data for network training.
arXiv Detail & Related papers (2022-08-05T05:32:07Z)
- FaceMAE: Privacy-Preserving Face Recognition via Masked Autoencoders [81.21440457805932]
We propose FaceMAE, a novel framework in which face privacy and recognition performance are considered simultaneously.
Randomly masked face images are used to train the reconstruction module in FaceMAE.
We also conduct experiments on privacy-preserving face recognition using several public face datasets.
arXiv Detail & Related papers (2022-05-23T07:19:42Z)
- Graph-based Generative Face Anonymisation with Pose Preservation [49.18049578591058]
AnonyGAN is a GAN-based solution for face anonymisation.
It replaces the visual information corresponding to a source identity with a condition identity provided as any single image.
arXiv Detail & Related papers (2021-12-10T12:58:17Z)
- MLFW: A Database for Face Recognition on Masked Faces [56.441078419992046]
Masked LFW (MLFW) is a database built with a tool that automatically generates masked faces from unmasked faces.
The recognition accuracy of SOTA models declines by 5%-16% on the MLFW database compared with the accuracy on the original images.
arXiv Detail & Related papers (2021-09-13T09:30:10Z)
- SynFace: Face Recognition with Synthetic Data [83.15838126703719]
We devise SynFace with identity mixup (IM) and domain mixup (DM) to mitigate the performance gap between face recognition models trained on synthetic and on real face images.
We also perform a systematic empirical analysis on synthetic face images to provide insights on how to effectively utilize synthetic data for face recognition.
arXiv Detail & Related papers (2021-08-18T03:41:54Z)
- Real Time Face Recognition Using Convoluted Neural Networks [0.0]
Convolutional Neural Networks have proven to be highly effective for facial recognition.
The dataset is created by converting face videos of the persons to be recognized into hundreds of images per person.
arXiv Detail & Related papers (2020-10-09T12:04:49Z)
- FaR-GAN for One-Shot Face Reenactment [20.894596219099164]
We present a one-shot face reenactment model, FaR-GAN, that takes only one face image of any given source identity and a target expression as input.
The proposed method makes no assumptions about the source identity, facial expression, head pose, or even image background.
arXiv Detail & Related papers (2020-05-13T16:15:37Z)
- DotFAN: A Domain-transferred Face Augmentation Network for Pose and Illumination Invariant Face Recognition [94.96686189033869]
We propose a 3D model-assisted domain-transferred face augmentation network (DotFAN).
DotFAN can generate a series of variants of an input face based on the knowledge distilled from existing rich face datasets collected from other domains.
Experiments show that DotFAN is beneficial for augmenting small face datasets to improve their within-class diversity.
arXiv Detail & Related papers (2020-02-23T08:16:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.