Towards Visual Saliency Explanations of Face Verification
- URL: http://arxiv.org/abs/2305.08546v4
- Date: Tue, 24 Oct 2023 16:02:42 GMT
- Title: Towards Visual Saliency Explanations of Face Verification
- Authors: Yuhang Lu, Zewei Xu, Touradj Ebrahimi
- Abstract summary: This paper focuses on explainable face verification tasks using deep convolutional neural networks.
A new model-agnostic explanation method named CorrRISE is proposed to produce saliency maps.
Results show that the proposed CorrRISE method performs favorably in comparison with other state-of-the-art explainable face verification approaches.
- Score: 10.234175295380107
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, deep convolutional neural networks have been pushing the
frontier of face recognition (FR) techniques in both verification and
identification scenarios. Despite their high accuracy, they are often criticized
for lacking explainability. There has been an increasing demand for
understanding the decision-making process of deep face recognition systems.
Recent studies have investigated the use of visual saliency maps as an
explanation, but they often lack discussion and analysis in the context of
face recognition. This paper concentrates on explainable face verification
tasks and conceives a new explanation framework. Firstly, a definition of the
saliency-based explanation method is provided, which focuses on the decisions
made by the deep FR model. Secondly, a new model-agnostic explanation method
named CorrRISE is proposed to produce saliency maps, which reveal both the
similar and dissimilar regions of any given pair of face images. Then, an
evaluation methodology is designed to measure the performance of general visual
saliency explanation methods in face verification. Finally, substantial visual
and quantitative results show that the proposed CorrRISE method performs
favorably in comparison with other state-of-the-art explainable face
verification approaches.
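The abstract does not give implementation details, but the described idea (perturb one image of a face pair with random masks, observe how the verification similarity score changes, and relate the score to per-pixel mask values) can be illustrated with a short sketch. The Python/NumPy code below is a hedged approximation of a RISE-style, correlation-based saliency method, not the authors' implementation: the helper names generate_masks, correlation_saliency, and embed_fn are assumptions, and the exact CorrRISE formulation (mask sampling, weighting, normalization) may differ.

```python
import numpy as np

def generate_masks(num_masks, grid=8, size=112, p=0.5, seed=None):
    """Random low-resolution binary masks upsampled to image size (RISE-style)."""
    rng = np.random.default_rng(seed)
    cell = int(np.ceil(size / grid))
    masks = np.empty((num_masks, size, size), dtype=np.float32)
    for i in range(num_masks):
        coarse = (rng.random((grid, grid)) < p).astype(np.float32)
        # RISE additionally applies bilinear upsampling with a random shift;
        # plain nearest-neighbour upsampling keeps this sketch short.
        masks[i] = np.kron(coarse, np.ones((cell, cell)))[:size, :size]
    return masks

def correlation_saliency(embed_fn, img_a, img_b, num_masks=500):
    """Correlate per-pixel mask values with pairwise similarity scores.

    embed_fn: assumed callable mapping an HxWx3 face image to an
              L2-normalised embedding (any FR backbone could be wrapped this way).
    Returns (similarity_map, dissimilarity_map) for img_a against img_b.
    """
    size = img_a.shape[0]
    masks = generate_masks(num_masks, size=size)
    ref = embed_fn(img_b)
    scores = np.empty(num_masks, dtype=np.float32)
    for i, m in enumerate(masks):
        # Cosine similarity between the masked probe and the unmasked reference.
        scores[i] = float(np.dot(embed_fn(img_a * m[..., None]), ref))

    # Pearson correlation between each pixel's mask value and the similarity score.
    m_centered = masks - masks.mean(axis=0)
    s_centered = scores - scores.mean()
    cov = np.tensordot(s_centered, m_centered, axes=(0, 0)) / num_masks
    corr = cov / (masks.std(axis=0) * scores.std() + 1e-8)

    similarity_map = np.maximum(corr, 0)      # visible regions that raise the score
    dissimilarity_map = np.maximum(-corr, 0)  # visible regions that lower the score
    return similarity_map, dissimilarity_map
```

Positive correlations highlight regions whose visibility pushes the pair towards an "accept" decision (similar regions), while negative correlations mark dissimilar regions. The evaluation methodology mentioned in the abstract would typically check such maps by progressively removing the most salient pixels and measuring how the verification score degrades, in the spirit of deletion/insertion metrics; the details here are likewise an assumption rather than the paper's exact protocol.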
Related papers
- Towards A Comprehensive Visual Saliency Explanation Framework for AI-based Face Recognition Systems [9.105950041800225]
This manuscript conceives a comprehensive explanation framework for face recognition tasks.
An exhaustive definition of visual saliency map-based explanations for AI-based face recognition systems is provided.
A new model-agnostic explanation method named CorrRISE is proposed to produce saliency maps.
arXiv Detail & Related papers (2024-07-08T14:25:46Z)
- Explainable Face Verification via Feature-Guided Gradient Backpropagation [9.105950041800225]
There is a growing need for reliable interpretations of decisions of face recognition systems.
This paper first explores the spatial relationship between face image and its deep representation via gradient backpropagation.
A new explanation approach has been conceived, which provides precise and insightful similarity and dissimilarity saliency maps to explain the "Accept" and "Reject" decision of an FR system.
arXiv Detail & Related papers (2024-03-07T14:43:40Z)
- DeepFidelity: Perceptual Forgery Fidelity Assessment for Deepfake Detection [67.3143177137102]
Deepfake detection refers to detecting artificially generated or edited faces in images or videos.
We propose a novel Deepfake detection framework named DeepFidelity to adaptively distinguish real and fake faces.
arXiv Detail & Related papers (2023-12-07T07:19:45Z)
- Explaining Deep Face Algorithms through Visualization: A Survey [57.60696799018538]
This work undertakes a first-of-its-kind meta-analysis of explainability algorithms in the face domain.
We review existing face explainability works and reveal valuable insights into the structure and hierarchy of face networks.
arXiv Detail & Related papers (2023-09-26T07:16:39Z)
- Discriminative Deep Feature Visualization for Explainable Face Recognition [9.105950041800225]
This paper contributes to the problem of explainable face recognition by first conceiving a face reconstruction-based explanation module.
To further interpret the decision of an FR model, a novel visual saliency explanation algorithm has been proposed.
arXiv Detail & Related papers (2023-06-01T07:06:43Z)
- Explanation of Face Recognition via Saliency Maps [13.334500258498798]
This paper proposes a rigorous definition of explainable face recognition (XFR)
It then introduces a similarity-based RISE algorithm (S-RISE) to produce high-quality visual saliency maps.
An evaluation approach is proposed to systematically validate the reliability and accuracy of general visual saliency-based XFR methods.
arXiv Detail & Related papers (2023-04-12T19:04:21Z)
- Privileged Attribution Constrained Deep Networks for Facial Expression Recognition [31.98044070620145]
Facial Expression Recognition (FER) is crucial in many research domains because it enables machines to better understand human behaviours.
To alleviate these issues, we guide the model to concentrate on specific facial areas like the eyes, the mouth or the eyebrows.
We propose the Privileged Attribution Loss (PAL), a method that directs the attention of the model towards the most salient facial regions.
arXiv Detail & Related papers (2022-03-24T07:49:33Z)
- Detect and Locate: A Face Anti-Manipulation Approach with Semantic and Noise-level Supervision [67.73180660609844]
We propose a conceptually simple but effective method to efficiently detect forged faces in an image.
The proposed scheme relies on a segmentation map that delivers meaningful high-level semantic information clues about the image.
The proposed model achieves state-of-the-art detection accuracy and remarkable localization performance.
arXiv Detail & Related papers (2021-07-13T02:59:31Z)
- Deep Learning-based Face Super-resolution: A Survey [78.11274281686246]
Face super-resolution, also known as face hallucination, is a domain-specific image super-resolution problem.
To date, few summaries of the studies on the deep learning-based face super-resolution are available.
In this survey, we present a comprehensive review of deep learning techniques in face super-resolution in a systematic manner.
arXiv Detail & Related papers (2021-01-11T08:17:11Z)
- DeepFake Detection Based on the Discrepancy Between the Face and its Context [94.47879216590813]
We propose a method for detecting face swapping and other identity manipulations in single images.
Our approach involves two networks: (i) a face identification network that considers the face region bounded by a tight semantic segmentation, and (ii) a context recognition network that considers the face context.
We describe a method which uses the recognition signals from our two networks to detect such discrepancies.
Our method achieves state of the art results on the FaceForensics++, Celeb-DF-v2, and DFDC benchmarks for face manipulation detection, and even generalizes to detect fakes produced by unseen methods.
arXiv Detail & Related papers (2020-08-27T17:04:46Z)
- Face Anti-Spoofing Via Disentangled Representation Learning [90.90512800361742]
Face anti-spoofing is crucial to security of face recognition systems.
We propose a novel perspective of face anti-spoofing that disentangles the liveness features and content features from images.
arXiv Detail & Related papers (2020-08-19T03:54:23Z)