Robust Ensemble Morph Detection with Domain Generalization
- URL: http://arxiv.org/abs/2209.08130v1
- Date: Fri, 16 Sep 2022 19:00:57 GMT
- Title: Robust Ensemble Morph Detection with Domain Generalization
- Authors: Hossein Kashiani, Shoaib Meraj Sami, Sobhan Soleymani, Nasser M.
Nasrabadi
- Abstract summary: We learn a morph detection model with high generalization to a wide range of morphing attacks and high robustness against different adversarial attacks.
To this aim, we develop an ensemble of convolutional neural networks (CNNs) and Transformer models to benefit from their capabilities simultaneously.
Our exhaustive evaluations demonstrate that the proposed robust ensemble model generalizes to several morphing attacks and face datasets.
- Score: 23.026167387128933
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although a substantial number of studies are dedicated to morph
detection, most of them fail to generalize to morph faces outside of their training
paradigm. Moreover, recent morph detection methods are highly vulnerable to
adversarial attacks. In this paper, we intend to learn a morph detection model
with high generalization to a wide range of morphing attacks and high
robustness against different adversarial attacks. To this aim, we develop an
ensemble of convolutional neural networks (CNNs) and Transformer models to
benefit from their capabilities simultaneously. To improve the robust accuracy
of the ensemble model, we employ multi-perturbation adversarial training and
generate adversarial examples with high transferability for several single
models. Our exhaustive evaluations demonstrate that the proposed robust
ensemble model generalizes to several morphing attacks and face datasets. In
addition, we validate that our robust ensemble model gains better robustness
against several adversarial attacks while outperforming the state-of-the-art
studies.
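The abstract describes the core recipe: fuse a CNN branch and a Transformer branch into one detector and harden the fused model with multi-perturbation adversarial training. The following is a minimal sketch of that recipe, not the authors' implementation; the ResNet-50 and ViT-B/16 backbones, the logit-averaging fusion, and the PGD budgets are placeholder assumptions.
```python
# Minimal sketch (assumed backbones and budgets, not the paper's exact setup).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class MorphEnsemble(nn.Module):
    """CNN + Transformer ensemble for binary bona fide / morph classification."""
    def __init__(self):
        super().__init__()
        self.cnn = models.resnet50(num_classes=2)   # placeholder CNN branch
        self.vit = models.vit_b_16(num_classes=2)   # placeholder Transformer branch

    def forward(self, x):
        # Fuse the branches by averaging their logits.
        return (self.cnn(x) + self.vit(x)) / 2

def pgd_attack(model, x, y, eps, alpha, steps, norm="linf"):
    """Craft adversarial examples under an L-inf or L2 budget."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        with torch.no_grad():
            if norm == "linf":
                x_adv = x_adv + alpha * grad.sign()
                x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
            else:  # L2
                g = grad / (grad.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)
                delta = x_adv + alpha * g - x
                d_norm = delta.flatten(1).norm(dim=1).view(-1, 1, 1, 1)
                x_adv = x + delta * torch.clamp(eps / (d_norm + 1e-12), max=1.0)
            x_adv = x_adv.clamp(0, 1).detach()
    return x_adv

def train_step(model, optimizer, x, y):
    """Multi-perturbation adversarial training: sample one threat model per batch."""
    if torch.rand(1).item() < 0.5:
        x_adv = pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=5)
    else:
        x_adv = pgd_attack(model, x, y, eps=1.0, alpha=0.25, steps=5, norm="l2")
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```
Training the fused model on both clean and multi-norm adversarial batches is what the abstract refers to as improving robust accuracy; the transferability-oriented attack generation across the single models is omitted from this sketch.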
Related papers
- Evaluating the Effectiveness of Attack-Agnostic Features for Morphing Attack Detection [20.67964977754179]
We investigate the potential of image representations for morphing attack detection (MAD).
We develop supervised detectors by training a simple binary linear SVM on the extracted features, and one-class detectors by modeling the distribution of bonafide features with a Gaussian Mixture Model (GMM).
Our results indicate that attack-agnostic features can effectively detect morphing attacks, outperforming traditional supervised and one-class detectors from the literature in most scenarios.
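As a rough illustration of the two detector families named above, the sketch below fits a binary linear SVM on pre-extracted features and a one-class GMM on bonafide features only; the feature extractor, number of components, and thresholds are assumptions, not the paper's settings.
```python
# Illustrative sketch; the feature extractor and hyperparameters are assumed.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.mixture import GaussianMixture

def fit_supervised(bonafide_feats, morph_feats):
    """Supervised detector: binary linear SVM on pre-extracted image features."""
    X = np.vstack([bonafide_feats, morph_feats])
    y = np.concatenate([np.zeros(len(bonafide_feats)), np.ones(len(morph_feats))])
    return LinearSVC(C=1.0).fit(X, y)

def fit_one_class(bonafide_feats, n_components=8):
    """One-class detector: model only the bonafide feature distribution."""
    return GaussianMixture(n_components=n_components).fit(bonafide_feats)

def morph_score(gmm, feats):
    # Low likelihood under the bonafide GMM suggests a morphing attack.
    return -gmm.score_samples(feats)
```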
arXiv Detail & Related papers (2024-10-22T08:27:43Z) - On Evaluating Adversarial Robustness of Volumetric Medical Segmentation Models [59.45628259925441]
Volumetric medical segmentation models have achieved significant success on organ and tumor-based segmentation tasks.
Their vulnerability to adversarial attacks remains largely unexplored.
This underscores the importance of investigating the robustness of existing models.
arXiv Detail & Related papers (2024-06-12T17:59:42Z) - Approximating Optimal Morphing Attacks using Template Inversion [4.0361765428523135]
We develop a novel type of deep morphing attack based on inverting a theoretically optimal morph embedding.
We generate morphing attacks from several source datasets and study the effectiveness of those attacks against several face recognition networks.
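The idea can be pictured as a two-step procedure: approximate the optimal morph embedding from the two source identities, then invert it back to an image. The sketch below assumes a pretrained face encoder and an embedding-to-image decoder as placeholders; it illustrates the idea rather than the paper's method.
```python
# Illustration only: `encoder` and `decoder` are assumed pretrained networks.
import torch
import torch.nn.functional as F

def embedding_morph(encoder, decoder, face_a, face_b):
    with torch.no_grad():
        z = 0.5 * (encoder(face_a) + encoder(face_b))  # approximate optimal morph embedding
        z = F.normalize(z, dim=1)                      # keep it on the embedding hypersphere
        return decoder(z)                              # invert the embedding to a morph image
```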
arXiv Detail & Related papers (2024-02-01T15:51:46Z) - Towards Generalizable Morph Attack Detection with Consistency
Regularization [12.129404936688752]
Generalizable morph attack detection has gained significant attention.
Two simple yet effective morph-wise augmentations are proposed to explore a wide space of realistic morph transformations.
The proposed consistency regularization aligns the abstraction in the hidden layers of our model across the morph attack images.
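A hedged sketch of such a consistency term is given below: two morph-wise augmented views of the same image are pushed to produce similar hidden-layer representations. The augmentation pipeline and the choice of layer are assumptions, not the paper's exact recipe.
```python
# Illustrative consistency term; `feature_fn` is an assumed hook into a hidden layer.
import torch
import torch.nn.functional as F

def consistency_loss(feature_fn, x_view1, x_view2):
    """Align normalized hidden features of two morph-wise augmented views."""
    f1 = F.normalize(feature_fn(x_view1), dim=1)
    f2 = F.normalize(feature_fn(x_view2), dim=1)
    return F.mse_loss(f1, f2)
```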
arXiv Detail & Related papers (2023-08-20T23:50:22Z) - Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial
Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
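The sketch below is only a rough illustration of a learned attack optimizer, not the MAMA algorithm: a small GRU maps the current input gradient to an update direction in place of a hand-crafted sign step. The per-pixel recurrence, the tanh step rule, and all sizes are assumptions.
```python
# Rough illustration of a learned (RNN-parameterized) attack optimizer;
# all architectural choices here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnedAttackOptimizer(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.cell = nn.GRUCell(1, hidden)   # consumes one gradient value per pixel
        self.head = nn.Linear(hidden, 1)    # emits one update value per pixel

    def attack(self, model, x, y, eps=8 / 255, steps=10):
        x_adv, h = x.clone(), None
        for _ in range(steps):
            x_adv = x_adv.detach().requires_grad_(True)
            grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
            h = self.cell(grad.reshape(-1, 1), h)                 # learned update rule
            update = torch.tanh(self.head(h)).reshape_as(x_adv)
            with torch.no_grad():
                x_adv = x + (x_adv + (eps / steps) * update - x).clamp(-eps, eps)
                x_adv = x_adv.clamp(0, 1)
        return x_adv.detach()
```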
arXiv Detail & Related papers (2021-10-13T13:54:24Z) - Robustness and Generalization via Generative Adversarial Training [21.946687274313177]
We present Generative Adversarial Training, an approach to simultaneously improve the model's generalization to the test set and out-of-domain samples.
We show that our approach not only improves performance of the model on clean images and out-of-domain samples but also makes it robust against unforeseen attacks.
arXiv Detail & Related papers (2021-09-06T22:34:04Z) - "What's in the box?!": Deflecting Adversarial Attacks by Randomly
Deploying Adversarially-Disjoint Models [71.91835408379602]
Adversarial examples have long been considered a real threat to machine learning models.
We propose an alternative deployment-based defense paradigm that goes beyond the traditional white-box and black-box threat models.
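As the title suggests, the defense operates at deployment time. A minimal sketch of that idea, with the pool of adversarially-disjoint models assumed to exist already, could look like this:
```python
# Sketch only: `models` is an assumed pool of adversarially-disjoint classifiers.
import random
import torch

def randomized_predict(models, x):
    # Answer each query with a randomly selected model, so an attacker cannot
    # know in advance which model will score a crafted input.
    with torch.no_grad():
        return random.choice(models)(x)
```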
arXiv Detail & Related papers (2021-02-09T20:07:13Z) - Firearm Detection via Convolutional Neural Networks: Comparing a
Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Threat detection of weapons and aggressive behavior from live video can be used for rapid detection and prevention of potentially deadly incidents.
One way for achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model and a previously proposed model based on an ensemble of simpler neural networks detecting fire-weapons via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z) - Voting based ensemble improves robustness of defensive models [82.70303474487105]
We study whether it is possible to create an ensemble to further improve robustness.
By ensembling several state-of-the-art pre-trained defense models, our method can achieve a 59.8% robust accuracy.
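A minimal sketch of the voting idea, assuming the pre-trained defense models are already available:
```python
# Sketch only: majority vote over the hard labels of pre-trained defense models.
import torch

def vote_predict(models, x):
    with torch.no_grad():
        votes = torch.stack([m(x).argmax(dim=1) for m in models])  # (n_models, batch)
        return votes.mode(dim=0).values  # majority label per sample
```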
arXiv Detail & Related papers (2020-11-28T00:08:45Z) - Certifying Joint Adversarial Robustness for Model Ensembles [10.203602318836445]
Deep Neural Networks (DNNs) are often vulnerable to adversarial examples.
A proposed defense deploys an ensemble of models with the hope that, although the individual models may be vulnerable, an adversary will not be able to find an adversarial example that succeeds against the ensemble.
We consider the joint vulnerability of an ensemble of models, and propose a novel technique for certifying the joint robustness of ensembles.
arXiv Detail & Related papers (2020-04-21T19:38:31Z) - Regularizers for Single-step Adversarial Training [49.65499307547198]
We propose three types of regularizers that help to learn robust models using single-step adversarial training methods.
Regularizers mitigate the effect of gradient masking by harnessing properties that differentiate a robust model from a pseudo-robust model.
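For context, single-step adversarial training uses one FGSM step per batch, which is cheap but prone to gradient masking. The sketch below adds one illustrative regularization term (a clean-vs-adversarial logit consistency penalty); it is a stand-in for the idea, not one of the three regularizers proposed in the paper.
```python
# Single-step (FGSM) adversarial training with an illustrative regularizer;
# the penalty below is an assumption, not the paper's formulation.
import torch
import torch.nn.functional as F

def fgsm_train_step(model, optimizer, x, y, eps=8 / 255, lam=1.0):
    x = x.clone().detach().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x), y), x)[0]
    x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()  # single FGSM step
    x = x.detach()

    optimizer.zero_grad()
    logits_clean, logits_adv = model(x), model(x_adv)
    loss = F.cross_entropy(logits_adv, y)
    # Regularizer: keep clean and adversarial logits close, discouraging the
    # gradient masking that single-step training tends to induce.
    loss = loss + lam * F.mse_loss(logits_adv, logits_clean)
    loss.backward()
    optimizer.step()
    return loss.item()
```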
arXiv Detail & Related papers (2020-02-03T09:21:04Z)