Unveiling the Two-Faced Truth: Disentangling Morphed Identities for Face
Morphing Detection
- URL: http://arxiv.org/abs/2306.03002v1
- Date: Mon, 5 Jun 2023 16:11:19 GMT
- Title: Unveiling the Two-Faced Truth: Disentangling Morphed Identities for Face
Morphing Detection
- Authors: Eduarda Caldeira, Pedro C. Neto, Tiago Gonçalves, Naser Damer, Ana
F. Sequeira, Jaime S. Cardoso
- Abstract summary: Morphing attacks keep threatening biometric systems, especially face recognition systems.
There is a constant concern regarding the lack of interpretability of deep learning models.
We have developed IDistill, an interpretable method with state-of-the-art performance.
- Score: 6.433739188170069
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Morphing attacks keep threatening biometric systems, especially face
recognition systems. Over time they have become simpler to perform and more
realistic; as such, the usage of deep learning systems to detect these attacks
has grown. At the same time, there is a constant concern regarding the lack of
interpretability of deep learning models. Balancing performance and
interpretability has been a difficult task for scientists. However, by
leveraging domain information and providing some constraints, we have been able
to develop IDistill, an interpretable method with state-of-the-art performance
that provides information on both the identity separation on morph samples and
their contribution to the final prediction. The domain information is learnt by
an autoencoder and distilled to a classifier system in order to teach it to
separate identity information. When compared to other methods in the literature,
it outperforms them in three out of five databases and is competitive in the
remaining two.
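The distillation idea in the abstract (an autoencoder learns identity information, which is then distilled to a classifier) can be sketched as a combined objective. This is a hypothetical illustration, not the authors' actual loss; the function name, the MSE distillation term, and the weighting parameter `alpha` are all assumptions.

```python
import math

def identity_distillation_loss(ae_latents, clf_features, pred, label, alpha=0.5):
    """Hypothetical IDistill-style objective: a distillation term pulls the
    classifier's intermediate features (student) toward the autoencoder's
    identity latents (teacher), while a binary cross-entropy term
    supervises the bona fide / morph decision."""
    # Distillation term: mean squared error between teacher and student features.
    distill = sum((a - c) ** 2 for a, c in zip(ae_latents, clf_features)) / len(ae_latents)
    # Classification term: binary cross-entropy on the morph prediction.
    eps = 1e-7
    bce = -(label * math.log(pred + eps) + (1 - label) * math.log(1 - pred + eps))
    # Weighted sum of the two terms; alpha balances distillation vs. classification.
    return alpha * distill + (1 - alpha) * bce
```

When the student features match the teacher latents exactly, only the classification term remains, so the classifier is free to fit the labels once it has internalised the identity-separation structure.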
Related papers
- TetraLoss: Improving the Robustness of Face Recognition against Morphing
Attacks [7.092869001331781]
Face recognition systems are widely deployed in high-security applications.
Digital manipulations, such as face morphing, pose a security threat to face recognition systems.
We present a novel method for adapting deep learning-based face recognition systems to be more robust against face morphing attacks.
arXiv Detail & Related papers (2024-01-21T21:04:05Z)
- Investigating Human-Identifiable Features Hidden in Adversarial
Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
- Deep Learning-based Spatio Temporal Facial Feature Visual Speech
Recognition [0.0]
We present an alternate authentication process that makes use of both facial recognition and the individual's distinctive temporal facial feature motions while they speak a password.
The suggested model attained an accuracy of 96.1% when tested on the industry-standard MIRACL-VC1 dataset.
arXiv Detail & Related papers (2023-04-30T18:52:29Z)
- Explainable, Domain-Adaptive, and Federated Artificial Intelligence in
Medicine [5.126042819606137]
We focus on three key methodological approaches that address some of the particular challenges in AI-driven medical decision making.
Domain adaptation and transfer learning enable AI models to be trained and applied across multiple domains.
Federated learning enables learning large-scale models without exposing sensitive personal health information.
arXiv Detail & Related papers (2022-11-17T03:32:00Z)
- OrthoMAD: Morphing Attack Detection Through Orthogonal Identity
Disentanglement [6.433739188170069]
We propose a novel regularisation term that takes into account the identity information present in both source images and promotes the creation of two orthogonal latent vectors.
We evaluate our proposed method in five different types of morphing in the FRLL dataset and evaluate the performance of our model when trained on five distinct datasets.
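The orthogonality constraint described in this summary can be illustrated with a simple penalty on the two identity latents. This is a minimal sketch, not the paper's actual regularisation term; the function name and the squared-cosine formulation are assumptions.

```python
import math

def orthogonality_penalty(z1, z2, eps=1e-8):
    """Hypothetical orthogonality regulariser: squared cosine similarity
    between the two identity latent vectors. It is zero when the vectors
    are orthogonal and one when they are collinear, so minimising it
    pushes the two identities apart in latent space."""
    dot = sum(a * b for a, b in zip(z1, z2))
    n1 = math.sqrt(sum(a * a for a in z1))
    n2 = math.sqrt(sum(b * b for b in z2))
    cos = dot / (n1 * n2 + eps)
    return cos ** 2
```

Added to the main training loss, such a term discourages the model from collapsing both contributing identities of a morph into a single latent direction.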
arXiv Detail & Related papers (2022-08-16T16:55:12Z)
- Towards Intrinsic Common Discriminative Features Learning for Face
Forgery Detection using Adversarial Learning [59.548960057358435]
We propose a novel method which utilizes adversarial learning to eliminate the negative effect of different forgery methods and facial identities.
Our face forgery detection model learns to extract common discriminative features through eliminating the effect of forgery methods and facial identities.
arXiv Detail & Related papers (2022-07-08T09:23:59Z)
- Privacy-Preserving Eye-tracking Using Deep Learning [1.5484595752241124]
In this study, we focus on the case of a deep network model trained on images of individual faces.
We show that this model preserves the integrity of its training data with reasonable confidence.
arXiv Detail & Related papers (2021-06-17T15:58:01Z)
- Conditional Contrastive Learning: Removing Undesirable Information in
Self-Supervised Representations [108.29288034509305]
We develop conditional contrastive learning to remove undesirable information in self-supervised representations.
We demonstrate empirically that our methods can successfully learn self-supervised representations for downstream tasks.
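The conditioning idea in this summary (removing undesirable information by controlling how contrastive pairs are formed) can be sketched as an InfoNCE loss whose negatives are restricted to samples sharing the anchor's value of the undesirable variable. This is an illustrative sketch, not the paper's exact formulation; the function name and signature are assumptions.

```python
import math

def conditional_info_nce(anchor, positive, negatives, neg_conditions,
                         anchor_condition, tau=0.1):
    """Hypothetical conditional contrastive loss: negatives are drawn only
    from samples with the same value of the undesirable variable as the
    anchor, so that variable cannot be used to tell the positive apart
    from the negatives."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    # Positive similarity first, then the conditioned negatives.
    sims = [dot(anchor, positive) / tau]
    sims += [dot(anchor, n) / tau
             for n, c in zip(negatives, neg_conditions) if c == anchor_condition]
    # InfoNCE: negative log-softmax score of the positive pair.
    log_denom = math.log(sum(math.exp(s) for s in sims))
    return -(sims[0] - log_denom)
```

Because within-condition comparisons carry no signal about the condition itself, minimising this loss encourages representations that are invariant to the undesirable variable.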
arXiv Detail & Related papers (2021-06-05T10:51:26Z)
- Low-Regret Active Learning [64.36270166907788]
We develop an online learning algorithm for identifying unlabeled data points that are most informative for training.
At the core of our work is an efficient algorithm for sleeping experts that is tailored to achieve low regret on predictable (easy) instances.
arXiv Detail & Related papers (2021-04-06T22:53:45Z)
- Affect Analysis in-the-wild: Valence-Arousal, Expressions, Action Units
and a Unified Framework [83.21732533130846]
The paper focuses on large in-the-wild databases, i.e., Aff-Wild and Aff-Wild2.
It presents the design of two classes of deep neural networks trained with these databases.
A novel multi-task and holistic framework is presented which is able to jointly learn and effectively generalize and perform affect recognition.
arXiv Detail & Related papers (2021-03-29T17:36:20Z)
- This is not the Texture you are looking for! Introducing Novel
Counterfactual Explanations for Non-Experts using Generative Adversarial
Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
arXiv Detail & Related papers (2020-12-22T10:08:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.