Face Morphing Attack Detection with Denoising Diffusion Probabilistic Models
- URL: http://arxiv.org/abs/2306.15733v1
- Date: Tue, 27 Jun 2023 18:19:45 GMT
- Title: Face Morphing Attack Detection with Denoising Diffusion Probabilistic Models
- Authors: Marija Ivanovska, Vitomir Štruc
- Abstract summary: Morphed face images can be used to impersonate someone's identity for various malicious purposes.
Existing MAD techniques rely on discriminative models that learn from examples of bona fide and morphed images.
We propose a novel, diffusion-based MAD method that learns only from the characteristics of bona fide images.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Morphed face images have recently become a growing concern for existing face
verification systems, as they are relatively easy to generate and can be used
to impersonate someone's identity for various malicious purposes. Efficient
Morphing Attack Detection (MAD) that generalizes well across different morphing
techniques is, therefore, of paramount importance. Existing MAD techniques
predominantly rely on discriminative models that learn from examples of bona
fide and morphed images and, as a result, often exhibit sub-optimal
generalization performance when confronted with unknown types of morphing
attacks. To address this problem, we propose a novel, diffusion-based MAD
method in this paper that learns only from the characteristics of bona fide
images. Various forms of morphing attacks are then detected by our model as
out-of-distribution samples. We perform rigorous experiments over four
different datasets (CASIA-WebFace, FRLL-Morphs, FERET-Morphs and FRGC-Morphs)
and compare the proposed solution to both discriminatively-trained and
one-class MAD models. The experimental results show that our MAD model
achieves highly competitive results on all considered datasets.
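The core idea of the abstract, detecting morphs as out-of-distribution samples under a model trained only on bona fide images, can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the learned DDPM denoiser is replaced here by the closed-form MMSE denoiser for toy Gaussian "bona fide" features, and `mu_hat`, `alpha_bar`, and the feature dimension are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16
alpha_bar = 0.5  # cumulative noise-schedule value at the chosen timestep

# Toy stand-in for bona fide data: unit-variance Gaussian features
# around a mean estimated from bona fide training samples.
mu_hat = np.zeros(dim)

def forward_noise(x0):
    """DDPM forward process at one timestep: x_t = sqrt(a)*x_0 + sqrt(1-a)*eps."""
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

def denoise(x_t):
    """Stand-in for the learned reverse model. For unit-variance Gaussian
    bona fide data this is the exact MMSE estimate of x_0 given x_t;
    in the paper it would be a trained diffusion network."""
    return mu_hat + np.sqrt(alpha_bar) * (x_t - np.sqrt(alpha_bar) * mu_hat)

def mad_score(x0, n_draws=8):
    """Anomaly score: mean reconstruction error over several noise draws.
    Bona fide samples are reconstructed well; OOD samples (morphs) are not."""
    errs = [np.sum((x0 - denoise(forward_noise(x0))) ** 2) for _ in range(n_draws)]
    return float(np.mean(errs))

bona_fide = rng.normal(mu_hat, 1.0)  # in-distribution sample
morph = mu_hat + 10.0                # out-of-distribution sample
print(mad_score(bona_fide) < mad_score(morph))  # True: the morph scores higher
```

In practice the score would be thresholded on a validation set of bona fide images; any sample whose reconstruction error exceeds the threshold is flagged as a potential morph.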
Related papers
- LADIMO: Face Morph Generation through Biometric Template Inversion with Latent Diffusion [5.602947425285195]
Face morphing attacks pose a severe security threat to face recognition systems.
We present a representation-level face morphing approach, namely LADIMO, that performs morphing on two face recognition embeddings.
We show that each face morph variant has an individual attack success rate, enabling us to maximize the morph attack potential.
arXiv Detail & Related papers (2024-10-10T14:41:37Z)
- MFCLIP: Multi-modal Fine-grained CLIP for Generalizable Diffusion Face Forgery Detection [64.29452783056253]
The rapid development of photo-realistic face generation methods has raised significant concerns in society and academia.
Although existing approaches mainly capture face forgery patterns using image modality, other modalities like fine-grained noises and texts are not fully explored.
We propose a novel multi-modal fine-grained CLIP (MFCLIP) model, which mines comprehensive and fine-grained forgery traces across image-noise modalities.
arXiv Detail & Related papers (2024-09-15T13:08:59Z)
- Approximating Optimal Morphing Attacks using Template Inversion [4.0361765428523135]
We develop a novel type of deep morphing attack based on inverting a theoretically optimal morph embedding.
We generate morphing attacks from several source datasets and study the effectiveness of those attacks against several face recognition networks.
arXiv Detail & Related papers (2024-02-01T15:51:46Z)
- Bridging Generative and Discriminative Models for Unified Visual Perception with Diffusion Priors [56.82596340418697]
We propose a simple yet effective framework comprising a pre-trained Stable Diffusion (SD) model containing rich generative priors, a unified head (U-head) capable of integrating hierarchical representations, and an adapted expert providing discriminative priors.
Comprehensive investigations unveil potential characteristics of the framework (Vermouth), such as the varying granularity of perception concealed in latent variables at distinct time steps and various U-Net stages.
The promising results demonstrate the potential of diffusion models as formidable learners, establishing their significance in furnishing informative and robust visual representations.
arXiv Detail & Related papers (2024-01-29T10:36:57Z)
- Adv-Diffusion: Imperceptible Adversarial Face Identity Attack via Latent Diffusion Model [61.53213964333474]
We propose a unified framework Adv-Diffusion that can generate imperceptible adversarial identity perturbations in the latent space but not the raw pixel space.
Specifically, we propose the identity-sensitive conditioned diffusion generative model to generate semantic perturbations in the surroundings.
The designed adaptive strength-based adversarial perturbation algorithm can ensure both attack transferability and stealthiness.
arXiv Detail & Related papers (2023-12-18T15:25:23Z)
- The Journey, Not the Destination: How Data Guides Diffusion Models [75.19694584942623]
Diffusion models trained on large datasets can synthesize photo-realistic images of remarkable quality and diversity.
We propose a framework that: (i) provides a formal notion of data attribution in the context of diffusion models, and (ii) allows us to counterfactually validate such attributions.
arXiv Detail & Related papers (2023-12-11T08:39:43Z)
- Towards General Visual-Linguistic Face Forgery Detection [95.73987327101143]
Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust.
Existing methods mostly treat this task as binary classification, which uses digital labels or mask signals to train the detection model.
We propose a novel paradigm named Visual-Linguistic Face Forgery Detection (VLFFD), which uses fine-grained sentence-level prompts as the annotation.
arXiv Detail & Related papers (2023-07-31T10:22:33Z)
- Masked Images Are Counterfactual Samples for Robust Fine-tuning [77.82348472169335]
Fine-tuning deep learning models can lead to a trade-off between in-distribution (ID) performance and out-of-distribution (OOD) robustness.
We propose a novel fine-tuning method, which uses masked images as counterfactual samples that help improve the robustness of the fine-tuning model.
arXiv Detail & Related papers (2023-03-06T11:51:28Z)
- Face Morphing Attacks and Face Image Quality: The Effect of Morphing and the Unsupervised Attack Detection by Quality [6.889667606945215]
We theorize that the morphing processes might have an effect on both, the perceptual image quality and the image utility in face recognition.
This work provides an extensive analysis of the effect of morphing on face image quality, including both general image quality measures and face image utility measures.
Our study goes further to build on this effect and investigate the possibility of performing unsupervised morphing attack detection (MAD) based on quality scores.
arXiv Detail & Related papers (2022-08-11T15:12:50Z)
- ReGenMorph: Visibly Realistic GAN Generated Face Morphing Attacks by Attack Re-generation [7.169807933149473]
This work presents the novel morphing pipeline, ReGenMorph, to eliminate the LMA blending artifacts by using a GAN-based generation.
The generated ReGenMorph appearance is compared to recent morphing approaches and evaluated for face recognition vulnerability and attack detectability.
arXiv Detail & Related papers (2021-08-20T11:55:46Z)
- MIPGAN -- Generating Strong and High Quality Morphing Attacks Using Identity Prior Driven GAN [22.220940043294334]
We present a new approach for generating strong attacks using an Identity Prior Driven Generative Adversarial Network.
The proposed MIPGAN is derived from the StyleGAN with a newly formulated loss function exploiting perceptual quality and identity factor.
We demonstrate the proposed approach's applicability for generating strong morphing attacks by evaluating the vulnerability of both commercial and deep learning-based Face Recognition Systems against it.
arXiv Detail & Related papers (2020-09-03T15:08:38Z)
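Several entries above (e.g. LADIMO, MIPGAN, and the template-inversion work) generate morphs at the representation level rather than in pixel space. A minimal numpy sketch of that shared idea, interpolating two face-recognition embeddings, is given below; the embeddings, their dimension, and the helper names are all hypothetical stand-ins for a real face-recognition model's outputs.

```python
import numpy as np

def l2_normalize(v):
    """Project a vector onto the unit sphere, as is common for face embeddings."""
    return v / np.linalg.norm(v)

def morph_embedding(e_a, e_b, w=0.5):
    """Representation-level morph: interpolate two face embeddings
    and renormalize. Real pipelines would then invert this embedding
    back to an image (e.g. via a generative model)."""
    return l2_normalize(w * e_a + (1.0 - w) * e_b)

rng = np.random.default_rng(1)
e_a = l2_normalize(rng.normal(size=128))  # hypothetical identity A embedding
e_b = l2_normalize(rng.normal(size=128))  # hypothetical identity B embedding
m = morph_embedding(e_a, e_b)

def cos(u, v):
    return float(u @ v)

# The morph is closer to each contributing identity than the identities
# are to each other, which is what lets a single morph pass verification
# against both enrolled subjects.
print(cos(m, e_a) > cos(e_a, e_b) and cos(m, e_b) > cos(e_a, e_b))  # True
```

This also illustrates why such attacks are hard for discriminative detectors: the morph lives on the embedding manifold between two valid identities, so one-class detectors trained only on bona fide data (as in the main paper above) are a natural countermeasure.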
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.