An Efficient Ensemble Explainable AI (XAI) Approach for Morphed Face
Detection
- URL: http://arxiv.org/abs/2304.14509v1
- Date: Sun, 23 Apr 2023 13:43:06 GMT
- Title: An Efficient Ensemble Explainable AI (XAI) Approach for Morphed Face
Detection
- Authors: Rudresh Dwivedi, Ritesh Kumar, Deepak Chopra, Pranay Kothari, Manjot
Singh
- Abstract summary: We present a novel visual explanation approach named Ensemble XAI to provide a more comprehensive visual explanation for a deep learning prognostic model (EfficientNet-B1).
The experiments have been performed on three publicly available datasets, namely the Face Research Lab London Set, Wide Multi-Channel Presentation Attack (WMCA), and Makeup Induced Face Spoofing (MIFS).
- Score: 1.2599533416395763
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The extensive use of biometric authentication systems has prompted
attackers/impostors to forge user identities based on morphed images. In this
attack, a synthetic image is produced, merged with a genuine one, and the
resulting image is then used for authentication. Numerous deep convolutional
neural architectures have been proposed in the literature for face Morphing
Attack Detection (MAD) to prevent such attacks and lessen the risks associated
with them. Although deep learning models achieve strong performance, they are
difficult to understand and analyse because they are black-box/opaque in
nature; as a consequence, incorrect judgments may be made. There is, however,
a dearth of literature explaining the decision-making of black-box deep
learning models for biometric Presentation Attack Detection (PAD) or MAD,
which would help the biometric community trust deep learning-based biometric
systems for identification and authentication in security applications such as
border control and criminal database establishment. In this work, we present a
novel visual explanation approach named Ensemble XAI, integrating Saliency
maps, Class Activation Maps (CAM), and Gradient-weighted Class Activation Maps
(Grad-CAM), to provide a more comprehensive visual explanation for a deep
learning prognostic model (EfficientNet-B1) that we employ to predict whether
the input presented to a biometric authentication system is morphed or
genuine. The experiments have been performed on three publicly available
datasets, namely the Face Research Lab London Set, Wide Multi-Channel
Presentation Attack (WMCA), and Makeup Induced Face Spoofing (MIFS). The
experimental evaluations affirm that the resulting visual explanations
highlight finer-grained details of the image features/areas that
EfficientNet-B1 focuses on to reach its decisions, along with appropriate
reasoning.
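For readers who want a concrete picture of how such an ensemble of attribution maps can be assembled, the sketch below is a minimal, hedged PyTorch illustration: it computes a vanilla saliency map, a CAM, and a Grad-CAM heatmap for an EfficientNet-B1 morph/bona-fide classifier and averages their normalised versions. The fusion rule (normalised averaging), the choice of `model.features[-1]` as the Grad-CAM target layer, and all helper names are illustrative assumptions, not the authors' published implementation.

```python
# Minimal sketch (not the authors' code): fusing saliency, CAM, and Grad-CAM
# heatmaps for an EfficientNet-B1 binary morph/bona-fide classifier.
# Assumptions: normalised averaging as the fusion rule, model.features[-1]
# as the Grad-CAM target layer, and a 2-class head replacing the ImageNet one.
import torch
import torch.nn.functional as F
from torchvision.models import efficientnet_b1

model = efficientnet_b1(weights="IMAGENET1K_V1")
model.classifier[1] = torch.nn.Linear(model.classifier[1].in_features, 2)
model.eval()

def _normalise(m):
    m = m - m.min()
    return m / (m.max() + 1e-8)

def saliency_map(x, target):
    # Gradient of the target logit w.r.t. the input pixels.
    x = x.clone().requires_grad_(True)
    model(x)[0, target].backward()
    return x.grad.abs().max(dim=1)[0]                       # (1, H, W)

def cam_map(x, target):
    # Classic CAM: weight the final feature maps by the classifier weights.
    feats = model.features(x)                               # (1, C, h, w)
    w = model.classifier[1].weight[target].view(1, -1, 1, 1)
    cam = F.relu((w * feats).sum(dim=1))                    # (1, h, w)
    return F.interpolate(cam[None], size=x.shape[-2:], mode="bilinear")[0]

def grad_cam(x, target):
    # Grad-CAM: weight feature maps by spatially averaged gradients.
    feats, grads = [], []
    layer = model.features[-1]
    h1 = layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    model.zero_grad()
    model(x)[0, target].backward()
    h1.remove(); h2.remove()
    w = grads[0].mean(dim=(2, 3), keepdim=True)             # channel weights
    cam = F.relu((w * feats[0]).sum(dim=1))                 # (1, h, w)
    return F.interpolate(cam[None], size=x.shape[-2:], mode="bilinear")[0]

def ensemble_xai(x, target):
    # Fuse the three normalised heatmaps into one explanation map.
    maps = [saliency_map(x, target), cam_map(x, target), grad_cam(x, target)]
    return torch.stack([_normalise(m) for m in maps]).mean(dim=0)

# Usage (hypothetical preprocessed 1x3xHxW face tensor, class 1 = morphed):
# heatmap = ensemble_xai(face_batch, target=1)
```

The heatmaps are upsampled to the input resolution before fusion so that coarse CAM/Grad-CAM evidence and pixel-level saliency can be overlaid on the same face image.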
Related papers
- SHIELD: An Evaluation Benchmark for Face Spoofing and Forgery Detection with Multimodal Large Language Models [63.946809247201905]
We introduce a new benchmark, namely SHIELD, to evaluate the ability of MLLMs on face spoofing and forgery detection.
We design true/false and multiple-choice questions to evaluate multimodal face data in these two face security tasks.
The results indicate that MLLMs hold substantial potential in the face security domain.
arXiv Detail & Related papers (2024-02-06T17:31:36Z)
- Embedding Non-Distortive Cancelable Face Template Generation [22.80706131626207]
We introduce an innovative image distortion technique that makes facial images unrecognizable to the eye but still identifiable by any custom embedding neural network model.
We test the reliability of biometric recognition networks by determining the maximum image distortion that does not change the predicted identity.
arXiv Detail & Related papers (2024-02-04T15:39:18Z)
- Presentation Attack detection using Wavelet Transform and Deep Residual Neural Net [5.425986555749844]
Biometric systems can be deceived by imposters in several ways.
Biometric images, especially of the iris and face, are vulnerable to different presentation attacks.
This research applies deep learning approaches to mitigate presentation attacks in a biometric access control system.
arXiv Detail & Related papers (2023-11-23T20:21:49Z)
- COMICS: End-to-end Bi-grained Contrastive Learning for Multi-face Forgery Detection [56.7599217711363]
Most face forgery recognition methods can only process one face at a time.
We propose COMICS, an end-to-end framework for multi-face forgery detection.
arXiv Detail & Related papers (2023-08-03T03:37:13Z)
- Towards General Visual-Linguistic Face Forgery Detection [95.73987327101143]
Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust.
Existing methods mostly treat this task as binary classification, which uses digital labels or mask signals to train the detection model.
We propose a novel paradigm named Visual-Linguistic Face Forgery Detection (VLFFD), which uses fine-grained sentence-level prompts as the annotation.
arXiv Detail & Related papers (2023-07-31T10:22:33Z)
- Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability [70.72426887518517]
Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications.
We propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data.
Our method utilizes a mask to figure out the memorized atypical samples, and then finetune the model or prune it with the introduced mask to forget them.
arXiv Detail & Related papers (2023-06-06T14:23:34Z)
- Model-agnostic explainable artificial intelligence for object detection in image data [8.042562891309414]
We propose a black-box explanation method named Black-box Object Detection Explanation by Masking (BODEM).
It uses a hierarchical random masking framework in which coarse-grained masks are used at lower levels to find salient regions within an image.
Experiments on various object detection datasets and models showed that BODEM can effectively explain the behavior of object detectors.
arXiv Detail & Related papers (2023-03-30T09:29:03Z)
- MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We have proposed a deep learning-based network termed MixNet to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
arXiv Detail & Related papers (2020-10-25T23:01:13Z)
- Generalized Iris Presentation Attack Detection Algorithm under Cross-Database Settings [63.90855798947425]
Presentation attacks pose major challenges to most of the biometric modalities.
We propose a generalized deep learning-based presentation attack detection network, MVANet.
It is inspired by the simplicity and success of hybrid algorithms and the fusion of multiple detection networks.
arXiv Detail & Related papers (2020-10-25T22:42:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.