SelfMAD: Enhancing Generalization and Robustness in Morphing Attack Detection via Self-Supervised Learning
- URL: http://arxiv.org/abs/2504.05504v1
- Date: Mon, 07 Apr 2025 21:03:00 GMT
- Title: SelfMAD: Enhancing Generalization and Robustness in Morphing Attack Detection via Self-Supervised Learning
- Authors: Marija Ivanovska, Leon Todorov, Naser Damer, Deepak Kumar Jain, Peter Peer, Vitomir Štruc
- Abstract summary: SelfMAD is a novel self-supervised approach that simulates general morphing attack artifacts. We demonstrate that SelfMAD significantly outperforms current state-of-the-art MADs.
- Score: 8.554461485466936
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: With the continuous advancement of generative models, face morphing attacks have become a significant challenge for existing face verification systems due to their potential use in identity fraud and other malicious activities. Contemporary Morphing Attack Detection (MAD) approaches frequently rely on supervised, discriminative models trained on examples of bona fide and morphed images. These models typically perform well on morphs generated with techniques seen during training, but often perform sub-optimally when subjected to novel, unseen morphing techniques. While unsupervised models have been shown to generalize better, they typically result in higher error rates, as they struggle to effectively capture features of subtle artifacts. To address these shortcomings, we present SelfMAD, a novel self-supervised approach that simulates general morphing attack artifacts, allowing classifiers to learn generic and robust decision boundaries without overfitting to the specific artifacts induced by particular face morphing methods. Through extensive experiments on widely used datasets, we demonstrate that SelfMAD significantly outperforms current state-of-the-art MADs, reducing the detection error by more than 64% in terms of EER when compared to the strongest unsupervised competitor, and by more than 66% when compared to the best-performing discriminative MAD model, tested in cross-morph settings. The source code for SelfMAD is available at https://github.com/LeonTodorov/SelfMAD.
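To make the core idea concrete, below is a minimal, hypothetical sketch of self-supervised pseudo-morph generation in the spirit the abstract describes: a bona fide face is blended with a lightly perturbed copy of itself so that a classifier can learn generic blending artifacts without ever seeing a real morphing tool. All function names and parameter choices are illustrative assumptions; the actual SelfMAD pipeline (see the linked repository) differs in its details.

```python
# Minimal sketch of self-supervised pseudo-morph generation, assuming the
# "simulate morphing artifacts from bona fide images" idea in the abstract;
# the real SelfMAD pipeline differs in its details.
import numpy as np
import cv2

def make_pseudo_morph(img: np.ndarray, max_shift: int = 4) -> np.ndarray:
    """Blend a bona fide face with a lightly perturbed copy of itself,
    producing ghosting/blending artifacts similar to those left by
    landmark- or GAN-based morphing. `img` is an HxWx3 uint8 image."""
    h, w = img.shape[:2]

    # 1) Perturb a copy: small random translation plus brightness jitter.
    dx, dy = np.random.randint(-max_shift, max_shift + 1, size=2)
    M = np.float32([[1, 0, dx], [0, 1, dy]])
    warped = cv2.warpAffine(img, M, (w, h), borderMode=cv2.BORDER_REFLECT)
    warped = np.clip(warped.astype(np.float32) * np.random.uniform(0.9, 1.1), 0, 255)

    # 2) Build a soft, face-centered blending mask (Gaussian blob).
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.exp(-(((xx - w / 2) ** 2) / (0.15 * w * w)
                    + ((yy - h / 2) ** 2) / (0.15 * h * h)))
    mask = cv2.GaussianBlur(mask.astype(np.float32), (31, 31), 0)[..., None]

    # 3) Blend: the seam of the mask carries the tell-tale artifacts.
    alpha = np.random.uniform(0.3, 0.7)
    blended = (1 - alpha * mask) * img.astype(np.float32) + alpha * mask * warped
    return blended.astype(np.uint8)

# Training then treats (img, 0) as bona fide and (make_pseudo_morph(img), 1)
# as attack, so the classifier never sees artifacts of any real morphing tool.
```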
Related papers
- One-for-More: Continual Diffusion Model for Anomaly Detection [61.12622458367425]
Anomaly detection methods utilize diffusion models to generate or reconstruct normal samples when given arbitrary anomaly images. Our study found that the diffusion model suffers from severe "faithfulness hallucination" and "catastrophic forgetting". We propose a continual diffusion model that uses gradient projection to achieve stable continual learning.
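As a rough illustration of gradient projection for stable continual learning (the exact formulation in the paper may differ), the sketch below removes from each new gradient the components lying in a subspace deemed important to earlier tasks; the SVD-based basis construction and energy threshold are assumptions for illustration.

```python
# Rough sketch of gradient projection for continual learning; the basis
# construction and threshold here are illustrative assumptions.
import torch

def project_gradient(grad: torch.Tensor, basis: torch.Tensor) -> torch.Tensor:
    """grad: (d,) flattened gradient; basis: (d, k) orthonormal columns
    spanning directions important to earlier tasks. Removing those
    components keeps new updates from overwriting old knowledge."""
    return grad - basis @ (basis.T @ grad)

# Basis from the top left-singular vectors of activations on old tasks:
old_acts = torch.randn(1000, 256)  # (samples, d) stand-in activations
U, S, _ = torch.linalg.svd(old_acts.T, full_matrices=False)
k = int((S.cumsum(0) / S.sum() < 0.99).sum())  # keep ~99% of the energy
g_safe = project_gradient(torch.randn(256), U[:, :k])
```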
arXiv Detail & Related papers (2025-02-27T07:47:27Z)
- Orthogonal Subspace Decomposition for Generalizable AI-Generated Image Detection [58.87142367781417]
A naively trained detector tends to overfit to the limited and monotonous fake patterns, causing the feature space to become highly constrained and low-ranked. One potential remedy is incorporating the pre-trained knowledge within vision foundation models to expand the feature space. By freezing the principal components and adapting only the remaining components, we preserve the pre-trained knowledge while learning forgery-related patterns.
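A hypothetical sketch of the freeze-principal/adapt-residual idea, assuming an SVD split of a single pre-trained weight matrix; the paper's actual decomposition and training procedure may differ.

```python
# Hypothetical SVD split of one pre-trained weight matrix: freeze the
# principal part, fine-tune only the residual part.
import torch

W = torch.randn(512, 512)  # stand-in for a pre-trained weight matrix
U, S, Vh = torch.linalg.svd(W, full_matrices=False)

k = 64  # number of principal directions to freeze (assumed)
W_principal = U[:, :k] @ torch.diag(S[:k]) @ Vh[:k]   # frozen: pre-trained knowledge
W_residual = (U[:, k:] @ torch.diag(S[k:]) @ Vh[k:]).requires_grad_(True)

# The layer then computes x @ (W_principal + W_residual).T, so gradients
# only reshape the low-energy subspace while the principal components survive.
```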
arXiv Detail & Related papers (2024-11-23T19:10:32Z)
- Evaluating the Effectiveness of Attack-Agnostic Features for Morphing Attack Detection [20.67964977754179]
We investigate the potential of image representations for morphing attack detection (MAD). We develop supervised detectors by training a simple binary linear SVM on the extracted features, and one-class detectors by modeling the distribution of bona fide features with a Gaussian Mixture Model (GMM).
Our results indicate that attack-agnostic features can effectively detect morphing attacks, outperforming traditional supervised and one-class detectors from the literature in most scenarios.
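The two detector types described above map naturally onto standard scikit-learn components. The sketch below is an illustrative reconstruction, assuming features have already been extracted by a frozen backbone; the random arrays are stand-ins for real data.

```python
# Illustrative sketch (not the paper's code): supervised SVM and one-class
# GMM detectors on top of pre-extracted, attack-agnostic features.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.mixture import GaussianMixture

# Stand-ins for (n, d) feature matrices from a frozen foundation model.
feats_bf = np.random.randn(200, 512)           # bona fide features
feats_morph = np.random.randn(200, 512) + 0.5  # morph features (shifted for demo)

# Supervised detector: binary linear SVM on bona fide vs. morph features.
X = np.vstack([feats_bf, feats_morph])
y = np.concatenate([np.zeros(len(feats_bf)), np.ones(len(feats_morph))])
svm = LinearSVC().fit(X, y)

# One-class detector: fit a GMM to bona fide features only and flag
# low-likelihood samples as attacks.
gmm = GaussianMixture(n_components=4, covariance_type="diag").fit(feats_bf)
threshold = np.percentile(gmm.score_samples(feats_bf), 5)  # ~5% bona fide rejection
is_attack = gmm.score_samples(feats_morph) < threshold
```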
arXiv Detail & Related papers (2024-10-22T08:27:43Z)
- UniForensics: Face Forgery Detection via General Facial Representation [60.5421627990707]
High-level semantic features are less susceptible to perturbations and not limited to forgery-specific artifacts, thus having stronger generalization.
We introduce UniForensics, a novel deepfake detection framework that leverages a transformer-based video network, with a meta-functional face classification for enriched facial representation.
arXiv Detail & Related papers (2024-07-26T20:51:54Z)
- DMAD: Dual Memory Bank for Real-World Anomaly Detection [90.97573828481832]
We propose a new framework named Dual Memory bank enhanced representation learning for Anomaly Detection (DMAD).
DMAD employs a dual memory bank to calculate feature distance and feature attention between normal and abnormal patterns.
We evaluate DMAD on the MVTec-AD and VisA datasets.
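A loose sketch of how a dual memory bank can score a query feature by contrasting its distances to stored normal and abnormal prototypes; the scoring rule here is an assumption for illustration, not DMAD's exact formulation.

```python
# Loose sketch of dual-memory-bank scoring; DMAD's exact rule differs.
import torch
import torch.nn.functional as F

def anomaly_score(query: torch.Tensor,
                  normal_bank: torch.Tensor,
                  abnormal_bank: torch.Tensor) -> torch.Tensor:
    """query: (d,) feature; banks: (n, d) stored prototypes.
    Higher score = farther from normal and closer to abnormal patterns."""
    q = F.normalize(query, dim=0)
    d_normal = 1 - (F.normalize(normal_bank, dim=1) @ q).max()      # cosine distance to nearest normal
    d_abnormal = 1 - (F.normalize(abnormal_bank, dim=1) @ q).max()  # ... to nearest abnormal
    return d_normal - d_abnormal
```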
arXiv Detail & Related papers (2024-03-19T02:16:32Z)
- AnomalyDiffusion: Few-Shot Anomaly Image Generation with Diffusion Model [59.08735812631131]
Anomaly inspection plays an important role in industrial manufacturing.
Existing anomaly inspection methods are limited in their performance due to insufficient anomaly data.
We propose AnomalyDiffusion, a novel diffusion-based few-shot anomaly generation model.
arXiv Detail & Related papers (2023-12-10T05:13:40Z)
- Face Morphing Attack Detection with Denoising Diffusion Probabilistic Models [0.0]
Morphed face images can be used to impersonate someone's identity for various malicious purposes.
Existing MAD techniques rely on discriminative models that learn from examples of bona fide and morphed images.
We propose a novel, diffusion-based MAD method that learns only from the characteristics of bona fide images.
arXiv Detail & Related papers (2023-06-27T18:19:45Z)
- Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability [70.72426887518517]
Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications.
We propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data.
Our method utilizes a mask to identify the memorized atypical samples, and then finetunes the model or prunes it with the introduced mask to forget them.
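A loose sketch of the forget-the-atypical-samples recipe; the masking criterion used here (per-sample loss quantile under the trained model) is an assumption for illustration, not necessarily the paper's exact rule.

```python
# Hypothetical masking criterion: treat the highest-loss ID training samples
# as memorized/atypical and exclude them during fine-tuning.
import torch
import torch.nn.functional as F

@torch.no_grad()
def atypical_mask(model, loader, quantile: float = 0.95) -> torch.Tensor:
    """Returns a boolean mask over the dataset; True = sample to forget."""
    losses = []
    for x, y in loader:
        losses.append(F.cross_entropy(model(x), y, reduction="none"))
    losses = torch.cat(losses)
    return losses > torch.quantile(losses, quantile)
```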
arXiv Detail & Related papers (2023-06-06T14:23:34Z)
- Fusion-based Few-Shot Morphing Attack Detection and Fingerprinting [37.161842673434705]
Face recognition systems are vulnerable to morphing attacks.
Most existing morphing attack detection methods require a large amount of training data and have only been tested on a few predefined attack models.
We propose to extend MAD from supervised learning to few-shot learning and from binary detection to multiclass fingerprinting.
arXiv Detail & Related papers (2022-10-27T14:46:53Z)
- Robust Ensemble Morph Detection with Domain Generalization [23.026167387128933]
We learn a morph detection model with high generalization to a wide range of morphing attacks and high robustness against different adversarial attacks.
To this end, we develop an ensemble of convolutional neural networks (CNNs) and Transformer models to benefit from their capabilities simultaneously.
Our exhaustive evaluations demonstrate that the proposed robust ensemble model generalizes to several morphing attacks and face datasets.
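At its simplest, such an ensemble averages the outputs of heterogeneous backbones. The sketch below uses two timm models as stand-ins; the specific architectures and fusion rule in the paper may differ.

```python
# Minimal ensemble sketch; the timm model choices are assumptions, not the
# paper's exact backbones.
import timm
import torch

cnn = timm.create_model("resnet50", pretrained=True, num_classes=1)
vit = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=1)

def ensemble_logit(x: torch.Tensor) -> torch.Tensor:
    """x: (B, 3, 224, 224) face crops; returns averaged morph logits."""
    return 0.5 * (cnn(x) + vit(x))
```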
arXiv Detail & Related papers (2022-09-16T19:00:57Z)
- ReGenMorph: Visibly Realistic GAN Generated Face Morphing Attacks by Attack Re-generation [7.169807933149473]
This work presents ReGenMorph, a novel morphing pipeline that eliminates LMA blending artifacts by using GAN-based generation.
The generated ReGenMorph appearance is compared to recent morphing approaches and evaluated for face recognition vulnerability and attack detectability.
arXiv Detail & Related papers (2021-08-20T11:55:46Z)
- Deepfake Forensics via An Adversarial Game [99.84099103679816]
We advocate adversarial training for improving the generalization ability to both unseen facial forgeries and unseen image/video qualities.
Considering that AI-based face manipulation often leaves high-frequency artifacts that models can easily spot but that generalize poorly, we propose a new adversarial training method that attempts to blur out these specific artifacts.
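To illustrate the frequency intuition: the sketch below applies a fixed FFT low-pass that removes the high-frequency band a detector might otherwise latch onto. The paper learns this suppression adversarially; the fixed filter and cutoff choice here are simplifying assumptions.

```python
# Simplified stand-in for the paper's adversarial blurring: a fixed FFT
# low-pass filter that removes high-frequency content during training.
import torch

def low_pass(x: torch.Tensor, keep_ratio: float = 0.25) -> torch.Tensor:
    """x: (B, C, H, W). Keeps only the lowest `keep_ratio` fraction of
    frequencies in each spatial dimension and zeroes the rest."""
    f = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    _, _, H, W = x.shape
    h, w = int(H * keep_ratio / 2), int(W * keep_ratio / 2)
    mask = torch.zeros(H, W, device=x.device)
    mask[H // 2 - h : H // 2 + h, W // 2 - w : W // 2 + w] = 1
    return torch.fft.ifft2(torch.fft.ifftshift(f * mask, dim=(-2, -1))).real
```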
arXiv Detail & Related papers (2021-03-25T02:20:08Z)