LADIMO: Face Morph Generation through Biometric Template Inversion with Latent Diffusion
- URL: http://arxiv.org/abs/2410.07988v1
- Date: Thu, 10 Oct 2024 14:41:37 GMT
- Title: LADIMO: Face Morph Generation through Biometric Template Inversion with Latent Diffusion
- Authors: Marcel Grimmer, Christoph Busch
- Abstract summary: Face morphing attacks pose a severe security threat to face recognition systems.
We present a representation-level face morphing approach, namely LADIMO, that performs morphing on two face recognition embeddings.
We show that each face morph variant has an individual attack success rate, enabling us to maximize the morph attack potential.
- Score: 5.602947425285195
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Face morphing attacks pose a severe security threat to face recognition systems, enabling the morphed face image to be verified against multiple identities. To detect such manipulated images, the development of new face morphing methods becomes essential to increase the diversity of training datasets used for face morph detection. In this study, we present a representation-level face morphing approach, namely LADIMO, that performs morphing on two face recognition embeddings. Specifically, we train a Latent Diffusion Model to invert a biometric template - thus reconstructing the face image from an FRS latent representation. Our subsequent vulnerability analysis demonstrates the high morph attack potential in comparison to MIPGAN-II, an established GAN-based face morphing approach. Finally, we exploit the stochastic LADIMO model design in combination with our identity conditioning mechanism to create unlimited morphing attacks from a single face morph image pair. We show that each face morph variant has an individual attack success rate, enabling us to maximize the morph attack potential by applying a simple re-sampling strategy. Code and pre-trained models are available here: https://github.com/dasec/LADIMO
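To make the representation-level idea concrete, the sketch below illustrates the two steps described in the abstract: morphing two face recognition embeddings and re-sampling stochastic variants to keep the strongest attack. This is a minimal, self-contained approximation, not the authors' released code; the `invert_template` placeholder only mimics a stochastic identity-conditioned decoder, and all names and the 512-dimensional dummy embeddings are illustrative assumptions (see the repository linked above for the actual models).

```python
# Minimal sketch of representation-level morphing plus re-sampling, under the
# assumptions stated above. The decoder stand-in does NOT reconstruct images;
# it only mimics stochastic sampling so the selection step can be demonstrated.
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(v):
    return v / np.linalg.norm(v)

def cosine_similarity(a, b):
    return float(np.dot(l2_normalize(a), l2_normalize(b)))

def morph_embedding(emb_a, emb_b):
    """Representation-level morph: average the two FRS embeddings and re-normalize."""
    return l2_normalize((l2_normalize(emb_a) + l2_normalize(emb_b)) / 2.0)

def invert_template(embedding, seed):
    """Hypothetical stand-in for an identity-conditioned latent diffusion decoder.
    Here it merely perturbs the embedding to mimic stochastic sampling; the real
    model would reconstruct a face image from the FRS latent representation."""
    noise = np.random.default_rng(seed).normal(scale=0.05, size=embedding.shape)
    return l2_normalize(embedding + noise)

# Dummy 512-d vectors standing in for two contributing subjects' FRS templates.
emb_a, emb_b = rng.normal(size=512), rng.normal(size=512)
morph = morph_embedding(emb_a, emb_b)

# Re-sampling strategy: draw several stochastic variants and keep the one whose
# embedding is closest to BOTH identities (highest minimum similarity).
variants = [invert_template(morph, seed) for seed in range(16)]
scores = [min(cosine_similarity(v, emb_a), cosine_similarity(v, emb_b)) for v in variants]
best_variant = variants[int(np.argmax(scores))]
print(f"best variant min-similarity: {max(scores):.3f}")
```

Selecting the variant with the highest minimum similarity to both contributing identities reflects the paper's observation that each stochastic morph variant has its own attack success rate, so simple re-sampling alone can raise the morph attack potential.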
Related papers
- Realistic and Efficient Face Swapping: A Unified Approach with Diffusion Models [69.50286698375386]
We propose a novel approach that better harnesses diffusion models for face-swapping.
We introduce a mask shuffling technique during inpainting training, which allows us to create a so-called universal model for swapping.
Ours is a relatively unified approach and so it is resilient to errors in other off-the-shelf models.
arXiv Detail & Related papers (2024-09-11T13:43:53Z)
- Arc2Face: A Foundation Model for ID-Consistent Human Faces [95.00331107591859]
Arc2Face is an identity-conditioned face foundation model.
It can generate diverse photo-realistic images with a higher degree of face similarity than existing models.
arXiv Detail & Related papers (2024-03-18T10:32:51Z)
- Approximating Optimal Morphing Attacks using Template Inversion [4.0361765428523135]
We develop a novel type of deep morphing attack based on inverting a theoretically optimal morph embedding.
We generate morphing attacks from several source datasets and study the effectiveness of those attacks against several face recognition networks.
arXiv Detail & Related papers (2024-02-01T15:51:46Z)
- MorphGANFormer: Transformer-based Face Morphing and De-Morphing [55.211984079735196]
StyleGAN-based approaches to face morphing are among the leading techniques.
We propose a transformer-based approach to face morphing and demonstrate its superiority over StyleGAN-based methods.
arXiv Detail & Related papers (2023-02-18T19:09:11Z)
- MorDIFF: Recognition Vulnerability and Attack Detectability of Face Morphing Attacks Created by Diffusion Autoencoders [10.663919597506055]
Face morphing attacks are created at the image level or at the representation level.
Recent advances in the diffusion autoencoder models have overcome the GAN limitations, leading to high reconstruction fidelity.
This work investigates using diffusion autoencoders to create face morphing attacks by comparing them to a wide range of image-level and representation-level morphs.
arXiv Detail & Related papers (2023-02-03T16:37:38Z)
- Facial De-morphing: Extracting Component Faces from a Single Morph [12.346914707006773]
Morph attack detection strategies can detect morphs but cannot recover the images or identities used to create them.
We propose a novel de-morphing method that can recover images of both identities simultaneously from a single morphed face image.
arXiv Detail & Related papers (2022-09-07T05:01:02Z)
- Are GAN-based Morphs Threatening Face Recognition? [3.0921354926071274]
This paper bridges the gap by providing datasets and the corresponding code for four types of morphing attacks.
We also conduct extensive experiments to assess the vulnerability of four state-of-the-art face recognition systems.
arXiv Detail & Related papers (2022-05-05T08:19:47Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method builds a substitute model for face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
- One Shot Face Swapping on Megapixels [65.47443090320955]
This paper proposes the first megapixel-level method for one-shot face swapping (MegaFS for short).
Complete face representation, stable training, and limited memory usage are the three novel contributions to the success of our method.
arXiv Detail & Related papers (2021-05-11T10:41:47Z)
- Can GAN Generated Morphs Threaten Face Recognition Systems Equally as Landmark Based Morphs? -- Vulnerability and Detection [22.220940043294334]
We propose a new framework for generating face morphs using a newer Generative Adversarial Network (GAN) - StyleGAN.
With the newly created morphing dataset of 2500 morphed face images, we pose a critical question in this work.
arXiv Detail & Related papers (2020-07-07T16:52:56Z)
- DotFAN: A Domain-transferred Face Augmentation Network for Pose and Illumination Invariant Face Recognition [94.96686189033869]
We propose a 3D model-assisted domain-transferred face augmentation network (DotFAN).
DotFAN can generate a series of variants of an input face based on the knowledge distilled from existing rich face datasets collected from other domains.
Experiments show that DotFAN is beneficial for augmenting small face datasets to improve their within-class diversity.
arXiv Detail & Related papers (2020-02-23T08:16:34Z)