Towards Generalizable Morph Attack Detection with Consistency
Regularization
- URL: http://arxiv.org/abs/2308.10392v1
- Date: Sun, 20 Aug 2023 23:50:22 GMT
- Title: Towards Generalizable Morph Attack Detection with Consistency
Regularization
- Authors: Hossein Kashiani, Niloufar Alipour Talemi, Mohammad Saeed Ebrahimi
Saadabadi, Nasser M. Nasrabadi
- Abstract summary: Generalizable morph attack detection has gained significant attention.
Two simple yet effective morph-wise augmentations are proposed to explore a wide space of realistic morph transformations.
The proposed consistency regularization aligns the abstraction in the hidden layers of our model across the morph attack images.
- Score: 12.129404936688752
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Though recent studies have made significant progress in morph attack
detection by virtue of deep neural networks, they often fail to generalize well
to unseen morph attacks. With numerous morph attacks emerging frequently,
generalizable morph attack detection has gained significant attention. This
paper focuses on enhancing the generalization capability of morph attack
detection from the perspective of consistency regularization. Consistency
regularization operates under the premise that generalizable morph attack
detection should output consistent predictions irrespective of the possible
variations that may occur in the input space. In this work, to reach this
objective, two simple yet effective morph-wise augmentations are proposed to
explore a wide space of realistic morph transformations in our consistency
regularization. Then, the model is regularized to learn consistently at the
logit as well as embedding levels across a wide range of morph-wise augmented
images. The proposed consistency regularization aligns the abstraction in the
hidden layers of our model across the morph attack images which are generated
from diverse domains in the wild. Experimental results demonstrate the superior
generalization and robustness performance of our proposed method compared to
the state-of-the-art studies.
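To make the idea concrete, below is a minimal PyTorch sketch of the two consistency terms the abstract describes, applied between an image and one morph-wise augmented view of it. The `morph_augment` function, the dual-output model, and the loss weights are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, images, morph_augment,
                     w_logit=1.0, w_embed=0.1):
    """Penalize disagreement between an image and its morph-wise augmented view.

    `model` is assumed to return (logits, embedding); `morph_augment`
    stands in for the paper's morph-wise augmentations.
    """
    logits_a, emb_a = model(images)
    logits_b, emb_b = model(morph_augment(images))

    # Logit-level consistency: symmetric KL between the two predictive
    # distributions (both arguments to kl_div are log-probabilities).
    log_p_a = F.log_softmax(logits_a, dim=-1)
    log_p_b = F.log_softmax(logits_b, dim=-1)
    logit_term = 0.5 * (
        F.kl_div(log_p_a, log_p_b, reduction="batchmean", log_target=True)
        + F.kl_div(log_p_b, log_p_a, reduction="batchmean", log_target=True)
    )

    # Embedding-level consistency: align the hidden representations.
    embed_term = 1.0 - F.cosine_similarity(emb_a, emb_b, dim=-1).mean()

    return w_logit * logit_term + w_embed * embed_term
```

In training, a term like this would be added to the usual bona fide vs. morph classification loss.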
Related papers
- Open-Set Deepfake Detection: A Parameter-Efficient Adaptation Method with Forgery Style Mixture [58.60915132222421]
We introduce an approach that is both general and parameter-efficient for face forgery detection.
We design a forgery-style mixture formulation that augments the diversity of forgery source domains.
We show that the designed model achieves state-of-the-art generalizability with significantly reduced trainable parameters.
arXiv Detail & Related papers (2024-08-23T01:53:36Z)
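The entry above names a forgery-style mixture without detailing its mechanism; one plausible reading, sketched here purely as an assumption, is a MixStyle-like interpolation of channel-wise feature statistics across samples from different forgery source domains.

```python
import torch

def mix_forgery_styles(feats: torch.Tensor, alpha: float = 0.1) -> torch.Tensor:
    """Blend channel-wise feature statistics across samples in a batch.

    feats: (B, C, H, W) intermediate CNN features. Mixing statistics with
    a shuffled partner sample diversifies forgery source-domain styles.
    """
    b = feats.size(0)
    mu = feats.mean(dim=(2, 3), keepdim=True)           # per-sample channel means
    sigma = feats.std(dim=(2, 3), keepdim=True) + 1e-6  # per-sample channel stds
    normed = (feats - mu) / sigma

    perm = torch.randperm(b)                            # partner sample per item
    lam = torch.distributions.Beta(alpha, alpha).sample((b, 1, 1, 1))
    mixed_mu = lam * mu + (1 - lam) * mu[perm]
    mixed_sigma = lam * sigma + (1 - lam) * sigma[perm]
    return normed * mixed_sigma + mixed_mu
```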
- The Impact of Print-Scanning in Heterogeneous Morph Evaluation Scenarios [1.9035583634286277]
We investigate the impact of print-scanning on morphing attack detection through a series of evaluations.
Experiments show that we can increase the Mated Morph Presentation Match Rate (MMPMR) by up to 8.48%.
When a Single-image Morphing Attack Detection (S-MAD) algorithm is not trained to detect print-scanned morphs, the Morphing Attack Classification Error Rate (MACER) can increase by up to 96.12%.
arXiv Detail & Related papers (2024-04-09T18:23:34Z)
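For reference, the MMPMR cited above is commonly defined (following Scherhag et al.) as the fraction of morphs whose minimum mated comparison score over all contributing subjects still clears the verification threshold; a minimal sketch:

```python
import numpy as np

def mmpmr(scores: np.ndarray, threshold: float) -> float:
    """Mated Morph Presentation Match Rate.

    scores: (M, N) similarity scores of M morphs against probe images of
    their N contributing subjects. A morph "succeeds" only if it matches
    all contributing subjects, i.e. its minimum score clears the threshold.
    """
    return float((scores.min(axis=1) > threshold).mean())
```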
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, while adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to imperceptible adversarial perturbations in high-level image classification and attack suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- Approximating Optimal Morphing Attacks using Template Inversion [4.0361765428523135]
We develop a novel type of deep morphing attack based on inverting a theoretically optimal morph embedding.
We generate morphing attacks from several source datasets and study the effectiveness of those attacks against several face recognition networks.
arXiv Detail & Related papers (2024-02-01T15:51:46Z)
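A common way to approximate an optimal morph embedding, sketched below under assumed `encoder`/`decoder` components rather than the authors' exact pipeline, is to invert the normalized average of the two subjects' face embeddings back to image space.

```python
import torch.nn.functional as F

def approx_optimal_morph(face_a, face_b, encoder, decoder):
    """Invert the midpoint of two face embeddings into a morph image.

    `encoder` maps an image to an identity embedding and `decoder` maps
    an embedding back to image space; both are assumed to be pretrained.
    """
    emb_a = F.normalize(encoder(face_a), dim=-1)
    emb_b = F.normalize(encoder(face_b), dim=-1)
    morph_emb = F.normalize(emb_a + emb_b, dim=-1)  # midpoint on the unit sphere
    return decoder(morph_emb)
```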
- Face Morphing Attack Detection with Denoising Diffusion Probabilistic Models [0.0]
Morphed face images can be used to impersonate someone's identity for various malicious purposes.
Existing MAD techniques rely on discriminative models that learn from examples of bona fide and morphed images.
We propose a novel, diffusion-based MAD method that learns only from the characteristics of bona fide images.
arXiv Detail & Related papers (2023-06-27T18:19:45Z)
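Learning only from bona fide images casts morph detection as anomaly scoring: a diffusion model fitted to bona fide faces should reconstruct them well and morphs poorly. The sketch below scores an image by reconstruction error after partial noising and denoising; the `diffusion` interface is an assumed helper, not the paper's API.

```python
import torch

@torch.no_grad()
def morph_anomaly_score(image, diffusion, t: int = 250) -> float:
    """Score an image by how poorly a bona-fide-only diffusion model
    reconstructs it after noising to step t (higher = more morph-like).

    `diffusion.noise(x, t)` and `diffusion.denoise(x_t, t)` are assumed
    wrappers around a DDPM trained exclusively on bona fide faces.
    """
    noised = diffusion.noise(image, t)    # forward process to step t
    recon = diffusion.denoise(noised, t)  # reverse process back to an image
    return torch.mean((image - recon) ** 2).item()
```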
- MorphGANFormer: Transformer-based Face Morphing and De-Morphing [55.211984079735196]
StyleGAN-based approaches to face morphing are among the leading techniques.
We propose a transformer-based alternative to face morphing and demonstrate its superiority to StyleGAN-based methods.
arXiv Detail & Related papers (2023-02-18T19:09:11Z)
- MorDIFF: Recognition Vulnerability and Attack Detectability of Face Morphing Attacks Created by Diffusion Autoencoders [10.663919597506055]
Face morphing attacks are created at the image level or at the representation level.
Recent advances in the diffusion autoencoder models have overcome the GAN limitations, leading to high reconstruction fidelity.
This work investigates using diffusion autoencoders to create face morphing attacks by comparing them to a wide range of image-level and representation-level morphs.
arXiv Detail & Related papers (2023-02-03T16:37:38Z)
- Robust Ensemble Morph Detection with Domain Generalization [23.026167387128933]
We learn a morph detection model with high generalization to a wide range of morphing attacks and high robustness against different adversarial attacks.
To this aim, we develop an ensemble of convolutional neural networks (CNNs) and Transformer models to benefit from their capabilities simultaneously.
Our exhaustive evaluations demonstrate that the proposed robust ensemble model generalizes to several morphing attacks and face datasets.
arXiv Detail & Related papers (2022-09-16T19:00:57Z)
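A minimal form of such an ensemble averages the calibrated probabilities of the two branches; the sketch below assumes two pretrained binary classifiers and equal weighting.

```python
import torch

@torch.no_grad()
def ensemble_morph_score(image, cnn, vit) -> float:
    """Average bona fide/morph probabilities from a CNN and a Transformer.

    `cnn` and `vit` are assumed to map an image batch to 2-class logits;
    returns the ensembled probability that the input is a morph.
    """
    p_cnn = torch.softmax(cnn(image), dim=-1)
    p_vit = torch.softmax(vit(image), dim=-1)
    p = 0.5 * (p_cnn + p_vit)
    return p[..., 1].mean().item()  # class 1 = morph (assumed convention)
```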
- On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving [59.33715889581687]
The existence of real-world adversarial examples (commonly in the form of patches) poses a serious threat to the use of deep learning models in safety-critical computer vision tasks.
This paper presents an evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches.
A novel loss function is proposed to improve the capabilities of attackers in inducing a misclassification of pixels.
arXiv Detail & Related papers (2022-01-05T22:33:43Z)
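The paper's loss function is not spelled out in the summary above; a generic patch-attack objective for segmentation, given here as a sketch under assumed inputs, maximizes the per-pixel cross-entropy against the ground-truth labels while updating only the patch pixels.

```python
import torch
import torch.nn.functional as F

def patch_attack_step(model, image, labels, patch, mask, lr=0.01):
    """One gradient-ascent step on an adversarial patch for segmentation.

    `mask` is 1 where the patch is pasted. We maximize the per-pixel
    cross-entropy so the model mislabels as many pixels as possible.
    """
    patch = patch.detach().requires_grad_(True)
    attacked = image * (1 - mask) + patch * mask
    logits = model(attacked)                # (B, num_classes, H, W)
    loss = F.cross_entropy(logits, labels)  # higher = more pixels wrong
    grad = torch.autograd.grad(loss, patch)[0]
    with torch.no_grad():
        patch = (patch + lr * grad.sign()).clamp(0, 1)
    return patch.detach()
```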
- Meta Adversarial Perturbations [66.43754467275967]
We show the existence of a meta adversarial perturbation (MAP).
MAP causes natural images to be misclassified with high probability after being refined by only a single gradient-ascent step.
We show that these perturbations are not only image-agnostic, but also model-agnostic, as a single perturbation generalizes well across unseen data points and different neural network architectures.
arXiv Detail & Related papers (2021-11-19T16:01:45Z)
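The MAP recipe is meta-learning flavored: find one shared perturbation that becomes a strong attack after a single gradient-ascent step on a new batch. The sketch below is a simplified version, closer to a universal perturbation than the full inner/outer meta-loop, with assumed step sizes.

```python
import torch

def train_map(model, loader, criterion, epsilon=8 / 255, alpha=2 / 255):
    """Learn a shared perturbation by one-step gradient ascent per batch
    (simplified sketch; the full MAP recipe uses a meta-learning
    inner/outer loop over tasks).
    """
    pert = None
    for images, labels in loader:
        if pert is None:
            pert = torch.zeros_like(images[0])  # one perturbation, all images
        pert = pert.detach().requires_grad_(True)
        loss = criterion(model(images + pert), labels)  # maximize task loss
        grad = torch.autograd.grad(loss, pert)[0]
        with torch.no_grad():
            pert = (pert + alpha * grad.sign()).clamp(-epsilon, epsilon)
    return pert.detach()
```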
- Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition [56.844587127848854]
We demonstrate that state-of-the-art gait recognition models are vulnerable to such attacks.
We employ a generative adversarial network based architecture to semantically generate adversarial high-quality gait silhouettes or video frames.
The experimental results show that if only one-fortieth of the frames are attacked, the accuracy of the target model drops dramatically.
arXiv Detail & Related papers (2020-02-22T10:08:42Z)