Towards A Comprehensive Visual Saliency Explanation Framework for AI-based Face Recognition Systems
- URL: http://arxiv.org/abs/2407.05983v1
- Date: Mon, 8 Jul 2024 14:25:46 GMT
- Title: Towards A Comprehensive Visual Saliency Explanation Framework for AI-based Face Recognition Systems
- Authors: Yuhang Lu, Zewei Xu, Touradj Ebrahimi
- Abstract summary: This manuscript conceives a comprehensive explanation framework for face recognition tasks.
An exhaustive definition of visual saliency map-based explanations for AI-based face recognition systems is provided.
A new model-agnostic explanation method named CorrRISE is proposed to produce saliency maps.
- Score: 9.105950041800225
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over recent years, deep convolutional neural networks have significantly advanced the field of face recognition techniques for both verification and identification purposes. Despite the impressive accuracy, these neural networks are often criticized for lacking explainability. There is a growing demand for understanding the decision-making process of AI-based face recognition systems. Some studies have investigated the use of visual saliency maps as explanations, but they have predominantly focused on the specific face verification case. The discussion on more general face recognition scenarios and the corresponding evaluation methodology for these explanations have long been absent in current research. Therefore, this manuscript conceives a comprehensive explanation framework for face recognition tasks. Firstly, an exhaustive definition of visual saliency map-based explanations for AI-based face recognition systems is provided, taking into account the two most common recognition situations individually, i.e., face verification and identification. Secondly, a new model-agnostic explanation method named CorrRISE is proposed to produce saliency maps, which reveal both the similar and dissimilar regions between any given face images. Subsequently, the explanation framework conceives a new evaluation methodology that offers quantitative measurement and comparison of the performance of general visual saliency explanation methods in face recognition. Consequently, extensive experiments are carried out on multiple verification and identification scenarios. The results showcase that CorrRISE generates insightful saliency maps and demonstrates superior performance, particularly in similarity maps in comparison with the state-of-the-art explanation approaches.
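The abstract describes CorrRISE only at a high level: random masks are applied to a face image, a similarity score is computed against a reference face, and per-pixel saliency comes from relating mask values to the resulting scores. As a rough illustration only, a minimal RISE-style sketch with a per-pixel correlation step could look like the following; the `similarity_fn`, mask count, grid size, and correlation formula are all illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def rise_style_saliency(image, reference, similarity_fn,
                        num_masks=500, grid=8, p=0.5, seed=0):
    """Sketch of a RISE-style saliency map for face verification.

    similarity_fn(img_a, img_b) -> scalar similarity (assumed interface).
    Coarse random binary masks are upsampled to image size and applied
    to `image`; each pixel's saliency is the correlation between its
    mask values and the similarity scores across all masks.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    scores = np.empty(num_masks)
    masks = np.empty((num_masks, h, w))
    ch, cw = int(np.ceil(h / grid)), int(np.ceil(w / grid))
    for i in range(num_masks):
        # coarse Bernoulli(p) mask, upsampled by Kronecker product
        small = (rng.random((grid, grid)) < p).astype(float)
        mask = np.kron(small, np.ones((ch, cw)))[:h, :w]
        masks[i] = mask
        masked = image * mask[..., None] if image.ndim == 3 else image * mask
        scores[i] = similarity_fn(masked, reference)
    # per-pixel Pearson correlation between mask values and scores:
    # positive values mark regions that raise similarity when visible
    s = scores - scores.mean()
    m = masks - masks.mean(axis=0)
    denom = np.sqrt((m ** 2).sum(axis=0)) * np.sqrt((s ** 2).sum()) + 1e-8
    return (m * s[:, None, None]).sum(axis=0) / denom
```

In this framing, strongly negative correlations would correspond to the "dissimilar regions" the paper mentions, since masking them in raises the score.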
Related papers
- Explainable Face Verification via Feature-Guided Gradient Backpropagation [9.105950041800225]
There is a growing need for reliable interpretations of decisions of face recognition systems.
This paper first explores the spatial relationship between face image and its deep representation via gradient backpropagation.
A new explanation approach has been conceived, which provides precise and insightful similarity and dissimilarity saliency maps to explain the "Accept" and "Reject" decisions of an FR system.
arXiv Detail & Related papers (2024-03-07T14:43:40Z) - DeepFidelity: Perceptual Forgery Fidelity Assessment for Deepfake Detection [67.3143177137102]
Deepfake detection refers to detecting artificially generated or edited faces in images or videos.
We propose a novel Deepfake detection framework named DeepFidelity to adaptively distinguish real and fake faces.
arXiv Detail & Related papers (2023-12-07T07:19:45Z) - Explaining Deep Face Algorithms through Visualization: A Survey [57.60696799018538]
This work undertakes a first-of-its-kind meta-analysis of explainability algorithms in the face domain.
We review existing face explainability works and reveal valuable insights into the structure and hierarchy of face networks.
arXiv Detail & Related papers (2023-09-26T07:16:39Z) - Discriminative Deep Feature Visualization for Explainable Face Recognition [9.105950041800225]
This paper contributes to the problem of explainable face recognition by first conceiving a face reconstruction-based explanation module.
To further interpret the decision of an FR model, a novel visual saliency explanation algorithm has been proposed.
arXiv Detail & Related papers (2023-06-01T07:06:43Z) - Towards Visual Saliency Explanations of Face Verification [10.234175295380107]
This paper focuses on explainable face verification tasks using deep convolutional neural networks.
A new model-agnostic explanation method named CorrRISE is proposed to produce saliency maps.
Results show that the proposed CorrRISE method demonstrates promising results in comparison with other state-of-the-art explainable face verification approaches.
arXiv Detail & Related papers (2023-05-15T11:17:17Z) - Analysis of Recent Trends in Face Recognition Systems [0.0]
Due to inter-class similarities and intra-class variations, face recognition systems generate false match and false non-match errors, respectively.
Recent research focuses on improving the robustness of extracted features and the pre-processing algorithms to enhance recognition accuracy.
arXiv Detail & Related papers (2023-04-23T18:55:45Z) - Explanation of Face Recognition via Saliency Maps [13.334500258498798]
This paper proposes a rigorous definition of explainable face recognition (XFR).
It then introduces a similarity-based RISE algorithm (S-RISE) to produce high-quality visual saliency maps.
An evaluation approach is proposed to systematically validate the reliability and accuracy of general visual saliency-based XFR methods.
arXiv Detail & Related papers (2023-04-12T19:04:21Z) - Deep Collaborative Multi-Modal Learning for Unsupervised Kinship Estimation [53.62256887837659]
Kinship verification is a long-standing research challenge in computer vision.
We propose a novel deep collaborative multi-modal learning (DCML) to integrate the underlying information presented in facial properties.
Our DCML method consistently outperforms state-of-the-art kinship verification methods.
arXiv Detail & Related papers (2021-09-07T01:34:51Z) - End2End Occluded Face Recognition by Masking Corrupted Features [82.27588990277192]
State-of-the-art general face recognition models do not generalize well to occluded face images.
This paper presents a novel face recognition method that is robust to occlusions based on a single end-to-end deep neural network.
Our approach, named FROM (Face Recognition with Occlusion Masks), learns to discover the corrupted features from the deep convolutional neural networks, and clean them by the dynamically learned masks.
arXiv Detail & Related papers (2021-08-21T09:08:41Z) - Facial Expressions as a Vulnerability in Face Recognition [73.85525896663371]
This work explores facial expression bias as a security vulnerability of face recognition systems.
We present a comprehensive analysis of how facial expression bias impacts the performance of face recognition technologies.
arXiv Detail & Related papers (2020-11-17T18:12:41Z) - Joint Deep Learning of Facial Expression Synthesis and Recognition [97.19528464266824]
We propose a novel joint deep learning of facial expression synthesis and recognition method for effective FER.
The proposed method involves a two-stage learning procedure. Firstly, a facial expression synthesis generative adversarial network (FESGAN) is pre-trained to generate facial images with different facial expressions.
In order to alleviate the problem of data bias between the real images and the synthetic images, we propose an intra-class loss with a novel real data-guided back-propagation (RDBP) algorithm.
arXiv Detail & Related papers (2020-02-06T10:56:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.