The Impact of Print-Scanning in Heterogeneous Morph Evaluation Scenarios
- URL: http://arxiv.org/abs/2404.06559v2
- Date: Tue, 3 Sep 2024 01:57:04 GMT
- Title: The Impact of Print-Scanning in Heterogeneous Morph Evaluation Scenarios
- Authors: Richard E. Neddo, Zander W. Blasingame, Chen Liu
- Abstract summary: We investigate the impact of print-scanning on morphing attack detection through a series of evaluations.
Experiments show that we can increase the Mated Morph Presentation Match Rate (MMPMR) by up to 8.48%.
When a Single-image Morphing Attack Detection (S-MAD) algorithm is not trained to detect print-scanned morphs, the Morphing Attack Classification Error Rate (MACER) can increase by up to 96.12%.
- Score: 1.9035583634286277
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face morphing attacks pose an increasing threat to face recognition (FR) systems. A morphed photo contains biometric information from two different subjects to take advantage of vulnerabilities in FR systems. These systems are particularly susceptible to attacks when the morphs are subjected to print-scanning to mask the artifacts generated during the morphing process. We investigate the impact of print-scanning on morphing attack detection through a series of evaluations on heterogeneous morphing attack scenarios. Our experiments show that we can increase the Mated Morph Presentation Match Rate (MMPMR) by up to 8.48%. Furthermore, when a Single-image Morphing Attack Detection (S-MAD) algorithm is not trained to detect print-scanned morphs, the Morphing Attack Classification Error Rate (MACER) can increase by up to 96.12%, indicating significant vulnerability.
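For context on the headline numbers, MMPMR measures the fraction of morphs whose weakest comparison against the subjects who contributed to them still clears the FR system's verification threshold. A minimal sketch of the commonly used min-rule formulation is below; the score matrix, the threshold, and the assumption of similarity (rather than distance) scores are illustrative, and the paper's exact evaluation protocol may differ.

```python
import numpy as np

def mmpmr(scores, threshold):
    """Mated Morph Presentation Match Rate (min-rule formulation).

    scores:    array of shape (M, N) -- similarity scores of M morphed images
               against the N subjects that contributed to each morph.
    threshold: the FR system's verification threshold on similarity scores.

    A morph counts as a successful attack only if it matches *every*
    contributing subject, i.e. its weakest mated comparison still clears
    the threshold.
    """
    scores = np.asarray(scores, dtype=float)
    weakest = scores.min(axis=1)                 # worst mated comparison per morph
    return float(np.mean(weakest > threshold))

# Toy scores (illustrative only): 2 of 3 morphs fool the verifier against
# both contributing subjects at a threshold of 0.5, so MMPMR = 2/3.
print(mmpmr([[0.72, 0.64], [0.58, 0.41], [0.81, 0.66]], 0.5))
```

MACER, by contrast, is an error rate of the morph detector itself, so the two figures quantify different failure modes: MMPMR the verifier's vulnerability, MACER the detector's misclassification of attacks.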
Related papers
- LADIMO: Face Morph Generation through Biometric Template Inversion with Latent Diffusion [5.602947425285195]
Face morphing attacks pose a severe security threat to face recognition systems.
We present a representation-level face morphing approach, namely LADIMO, that performs morphing on two face recognition embeddings (see the interpolation sketch after this entry).
We show that each face morph variant has an individual attack success rate, enabling us to maximize the morph attack potential.
arXiv Detail & Related papers (2024-10-10T14:41:37Z)
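The LADIMO entry above morphs at the representation level rather than the pixel level. As an illustration only (not LADIMO's actual pipeline, which inverts the morphed template back into an image with a latent diffusion model), here is a minimal sketch of interpolating two normalised face recognition embeddings; the 512-dimensional size and the plain linear blend are assumptions.

```python
import numpy as np

def morph_embedding(emb_a, emb_b, alpha=0.5):
    """Illustrative representation-level morph: linearly blend two
    L2-normalised face recognition embeddings and re-normalise."""
    emb_a = emb_a / np.linalg.norm(emb_a)
    emb_b = emb_b / np.linalg.norm(emb_b)
    blend = (1.0 - alpha) * emb_a + alpha * emb_b
    return blend / np.linalg.norm(blend)

# Toy 512-D vectors standing in for the outputs of an FR network.
rng = np.random.default_rng(0)
e1, e2 = rng.normal(size=512), rng.normal(size=512)
m = morph_embedding(e1, e2)
# The morphed embedding is roughly equally similar to both identities.
print(m @ (e1 / np.linalg.norm(e1)), m @ (e2 / np.linalg.norm(e2)))
```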
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility (a masked-perturbation sketch follows this entry).
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z)
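To make the masking idea in the ASMA entry concrete, here is a minimal sketch that restricts an adversarial perturbation to a semantic region of a face crop. The mask, the L-infinity budget, and the image shape are illustrative assumptions; ASMA itself learns the mask and perturbation jointly with a generative model rather than applying a fixed mask afterwards.

```python
import numpy as np

def apply_masked_perturbation(image, perturbation, semantic_mask, epsilon=8 / 255):
    """Keep adversarial noise inside a semantic region and within a budget.

    image:         float array in [0, 1], shape (H, W, 3)
    perturbation:  unconstrained adversarial noise of the same shape
    semantic_mask: {0, 1} array of shape (H, W), e.g. an eyes/nose/mouth region
    epsilon:       L-infinity budget (assumed value)
    """
    delta = np.clip(perturbation, -epsilon, epsilon)   # bound the noise
    delta = delta * semantic_mask[..., None]           # zero it outside the region
    return np.clip(image + delta, 0.0, 1.0)            # stay a valid image

# Toy usage: random data standing in for a face crop and a face-parsing mask.
rng = np.random.default_rng(1)
img = rng.uniform(size=(112, 112, 3))
noise = rng.normal(scale=0.05, size=img.shape)
mask = np.zeros((112, 112))
mask[40:80, 30:82] = 1.0                               # mock semantic region
adv = apply_masked_perturbation(img, noise, mask)
```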
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- Face Morphing Attack Detection with Denoising Diffusion Probabilistic Models [0.0]
Morphed face images can be used to impersonate someone's identity for various malicious purposes.
Existing morphing attack detection (MAD) techniques rely on discriminative models that learn from examples of bona fide and morphed images.
We propose a novel, diffusion-based MAD method that learns only from the characteristics of bona fide images (a reconstruction-error sketch follows this entry).
arXiv Detail & Related papers (2023-06-27T18:19:45Z)
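The entry above trains only on bona fide images, which makes detection a one-class problem. Below is a generic sketch of scoring by reconstruction error; the `reconstruct` callable is a hypothetical stand-in for the generative model (a denoising diffusion model in the cited work), and the threshold calibration on bona fide scores is an assumption rather than the paper's procedure.

```python
import numpy as np

def mad_score(image, reconstruct):
    """One-class MAD scoring sketch: a model trained only on bona fide faces
    reconstructs the input, and a large reconstruction error suggests the
    input lies off the bona fide manifold (e.g. a morph)."""
    recon = reconstruct(image)
    return float(np.mean((image - recon) ** 2))        # per-image MSE as score

def is_suspected_morph(image, reconstruct, tau):
    """Flag the image if its score exceeds a threshold tau, e.g. a high
    percentile of scores computed on held-out bona fide images."""
    return mad_score(image, reconstruct) > tau

# Toy usage: a mock "reconstructor" that simply smooths its input.
smooth = lambda x: (x + np.roll(x, 1, axis=0) + np.roll(x, -1, axis=0)) / 3.0
rng = np.random.default_rng(2)
probe = rng.uniform(size=(112, 112, 3))
print(mad_score(probe, smooth))
```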
- Exploring Model Dynamics for Accumulative Poisoning Discovery [62.08553134316483]
We propose a novel information measure, namely Memorization Discrepancy, to explore defenses using model-level information.
By implicitly transferring changes in the data manipulation into changes in the model outputs, Memorization Discrepancy can discover imperceptible poison samples.
We thoroughly explore its properties and propose Discrepancy-aware Sample Correction (DSC) to defend against accumulative poisoning attacks.
arXiv Detail & Related papers (2023-06-06T14:45:24Z)
- MorphGANFormer: Transformer-based Face Morphing and De-Morphing [55.211984079735196]
StyleGAN-based approaches to face morphing are among the leading techniques.
We propose a transformer-based alternative for face morphing and demonstrate its superiority to StyleGAN-based methods.
arXiv Detail & Related papers (2023-02-18T19:09:11Z)
- Leveraging Diffusion For Strong and High Quality Face Morphing Attacks [2.0795007613453445]
Face morphing attacks seek to deceive a Face Recognition (FR) system by presenting a morphed image consisting of the biometric qualities from two different identities.
We present a novel morphing attack that uses a Diffusion-based architecture to improve the visual fidelity of the image.
arXiv Detail & Related papers (2023-01-10T21:50:26Z)
- Face Morphing Attacks and Face Image Quality: The Effect of Morphing and the Unsupervised Attack Detection by Quality [6.889667606945215]
We theorize that the morphing processes might have an effect on both the perceptual image quality and the image utility in face recognition.
This work provides an extensive analysis of the effect of morphing on face image quality, including both general image quality measures and face image utility measures.
Our study goes further to build on this effect and investigate the possibility of performing unsupervised morphing attack detection (MAD) based on quality scores.
arXiv Detail & Related papers (2022-08-11T15:12:50Z)
- FIBA: Frequency-Injection based Backdoor Attack in Medical Image Analysis [82.2511780233828]
We propose a novel Frequency-Injection based Backdoor Attack method (FIBA) that is capable of delivering attacks in various medical image analysis tasks.
Specifically, FIBA leverages a trigger function in the frequency domain that injects the low-frequency information of a trigger image into the poisoned image by linearly combining the spectral amplitudes of both images (sketched after this entry).
arXiv Detail & Related papers (2021-12-02T11:52:17Z)
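To illustrate the frequency-injection step described in the FIBA entry, the sketch below blends the low-frequency amplitude spectrum of a trigger image into a clean image while keeping the clean image's phase. The blend ratio `alpha`, band size `beta`, and the square low-frequency mask are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def frequency_injection_poison(clean, trigger, alpha=0.15, beta=0.1):
    """Blend the trigger's low-frequency amplitude into the clean image.

    clean, trigger: float grayscale arrays in [0, 1] of identical shape (H, W).
    alpha:          amplitude blend ratio inside the low-frequency band (assumed).
    beta:           half-size of the low-frequency band as a fraction of H, W (assumed).
    """
    f_clean = np.fft.fftshift(np.fft.fft2(clean))
    f_trig = np.fft.fftshift(np.fft.fft2(trigger))

    amp_c, phase_c = np.abs(f_clean), np.angle(f_clean)
    amp_t = np.abs(f_trig)

    # Low-frequency square centred on the (shifted) spectrum.
    h, w = clean.shape
    b_h, b_w = int(beta * h), int(beta * w)
    cy, cx = h // 2, w // 2
    mask = np.zeros((h, w))
    mask[cy - b_h:cy + b_h, cx - b_w:cx + b_w] = 1.0

    # Linearly combine amplitudes only inside the low-frequency band,
    # keep the clean image's phase everywhere.
    amp_mix = amp_c * (1.0 - alpha * mask) + amp_t * (alpha * mask)
    f_poison = amp_mix * np.exp(1j * phase_c)
    poisoned = np.fft.ifft2(np.fft.ifftshift(f_poison)).real
    return np.clip(poisoned, 0.0, 1.0)

# Toy usage with random arrays standing in for a medical scan and a trigger image.
rng = np.random.default_rng(3)
img, trg = rng.uniform(size=(64, 64)), rng.uniform(size=(64, 64))
poisoned = frequency_injection_poison(img, trg)
```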
- Deep Image Destruction: A Comprehensive Study on Vulnerability of Deep Image-to-Image Models against Adversarial Attacks [104.8737334237993]
We present comprehensive investigations into the vulnerability of deep image-to-image models to adversarial attacks.
For five popular image-to-image tasks, 16 deep models are analyzed from various standpoints.
We show that, unlike in image classification tasks, the performance degradation on image-to-image tasks can vary greatly depending on various factors.
arXiv Detail & Related papers (2021-04-30T14:20:33Z)
- On the Influence of Ageing on Face Morph Attacks: Vulnerability and Detection [12.936155415524937]
Face Recognition Systems (FRS) are widely deployed in border control applications.
The face morphing process uses images from multiple data subjects and performs an image blending operation to generate a high-quality morphed image (a minimal blending sketch follows this entry).
The generated morphed image exhibits visual characteristics corresponding to the biometric characteristics of the data subjects who contributed to the composite image.
arXiv Detail & Related papers (2020-07-06T12:32:41Z)
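As a closing illustration of the blending operation mentioned in the entry above, here is a minimal pixel-wise alpha blend of two aligned face images. Practical morphing pipelines additionally warp both faces to averaged landmark positions and post-process the result; those steps are omitted here, and the shapes and blend factor are assumptions.

```python
import numpy as np

def blend_morph(face_a, face_b, alpha=0.5):
    """Pixel-wise weighted average of two aligned face images.

    face_a, face_b: float arrays in [0, 1] with identical shape (H, W, 3),
                    assumed already aligned (e.g. by landmark warping).
    alpha:          contribution of the second subject.
    """
    return np.clip((1.0 - alpha) * face_a + alpha * face_b, 0.0, 1.0)

# Toy usage with random arrays standing in for two aligned face crops.
rng = np.random.default_rng(4)
a, b = rng.uniform(size=(112, 112, 3)), rng.uniform(size=(112, 112, 3))
morph = blend_morph(a, b)
```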