Explainable Face Recognition
- URL: http://arxiv.org/abs/2008.00916v1
- Date: Mon, 3 Aug 2020 14:47:51 GMT
- Title: Explainable Face Recognition
- Authors: Jonathan R. Williford, Brandon B. May, Jeffrey Byrne
- Abstract summary: In this paper, we provide the first comprehensive benchmark and baseline evaluation for explainable face recognition.
We define a new evaluation protocol called the "inpainting game", which is a curated set of 3648 triplets (probe, mate, nonmate) of 95 subjects.
An explainable face matcher is tasked with generating a network attention map which best explains which regions in a probe image match with a mated image.
- Score: 4.358626952482686
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Explainable face recognition is the problem of explaining why a facial
matcher matches faces. In this paper, we provide the first comprehensive
benchmark and baseline evaluation for explainable face recognition. We define a
new evaluation protocol called the "inpainting game", which is a curated set
of 3648 triplets (probe, mate, nonmate) of 95 subjects, which differ by
synthetically inpainting a chosen facial characteristic such as the nose, eyebrows
or mouth, creating an inpainted nonmate. An explainable face matcher is tasked
with generating a network attention map which best explains which regions in a
probe image match with a mated image, and not with an inpainted nonmate for
each triplet. This provides ground truth for quantifying what image regions
contribute to face matching. Furthermore, we provide a comprehensive benchmark
on this dataset comparing five state-of-the-art methods for network attention
in face recognition on three facial matchers. This benchmark includes two new
algorithms for network attention called subtree EBP and Density-based Input
Sampling for Explanation (DISE) which outperform the state of the art by a wide
margin. Finally, we show qualitative visualization of these network attention
techniques on novel images, and explore how these explainable face recognition
models can improve transparency and trust for facial matchers.
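The sketch below is a minimal, hypothetical Python illustration (not the authors' released code or their exact metric) of how the inpainting-game ground truth could be used: since the mate and nonmate differ only inside the inpainted region, an attention map is scored by the fraction of its mass that falls inside that region. The function names (`inpainting_game_score`, `benchmark`) and the triplet dictionary layout are assumptions for illustration only.

```python
# A rough sketch of scoring an explainable face matcher on the "inpainting game".
# The paper defines its own protocol and metrics; this uses a simple
# attention-mass fraction as an illustrative proxy.

import numpy as np

def inpainting_game_score(attention_map: np.ndarray,
                          inpainted_mask: np.ndarray) -> float:
    """Fraction of attention mass falling inside the inpainted region.

    attention_map  : HxW non-negative saliency map produced by the matcher.
    inpainted_mask : HxW boolean mask of the facial region that was inpainted
                     to create the nonmate (the only region where mate and
                     nonmate differ, hence the ground-truth discriminative area).
    """
    attention_map = np.clip(attention_map, 0.0, None)
    total = attention_map.sum()
    if total == 0.0:
        return 0.0
    return float(attention_map[inpainted_mask].sum() / total)

def benchmark(triplets, explain_fn):
    """Average the score over a set of (probe, mate, nonmate) triplets.

    triplets   : iterable of dicts with keys 'probe', 'mate', 'nonmate', 'mask'.
    explain_fn : callable(probe, mate, nonmate) -> HxW attention map, i.e. the
                 explainable face matcher being evaluated.
    """
    scores = [inpainting_game_score(
                  explain_fn(t['probe'], t['mate'], t['nonmate']), t['mask'])
              for t in triplets]
    return float(np.mean(scores))
```

A higher average indicates that the matcher's attention concentrates on the regions that actually distinguish the mate from the inpainted nonmate, which is the intuition behind the benchmark described above.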
Related papers
- Self-Supervised Facial Representation Learning with Facial Region
Awareness [13.06996608324306]
Self-supervised pre-training has been proven to be effective in learning transferable representations that benefit various visual tasks.
Recent efforts toward this goal are limited to treating each face image as a whole.
We propose a novel self-supervised facial representation learning framework to learn consistent global and local facial representations.
arXiv Detail & Related papers (2024-03-04T15:48:56Z) - Face identification by means of a neural net classifier [0.0]
We present a novel face identification method that combines eigenfaces theory with a neural net classifier.
A recognition rate of more than 87% has been achieved, while the classical method of Turk and Pentland achieves 75.5%.
arXiv Detail & Related papers (2022-04-01T09:30:28Z) - Diverse facial inpainting guided by exemplars [8.360536784609309]
This paper introduces EXE-GAN, a novel diverse and interactive facial inpainting framework.
The proposed facial inpainting is achieved with generative adversarial networks by leveraging the global style of the input image together with the style of an exemplar image.
A variety of experimental results and comparisons on public CelebA-HQ and FFHQ datasets are presented to demonstrate the superiority of the proposed method.
arXiv Detail & Related papers (2022-02-13T16:29:45Z) - FT-TDR: Frequency-guided Transformer and Top-Down Refinement Network for
Blind Face Inpainting [77.78305705925376]
Blind face inpainting refers to the task of reconstructing visual contents without explicitly indicating the corrupted regions in a face image.
We propose a novel two-stage blind face inpainting method named Frequency-guided Transformer and Top-Down Refinement Network (FT-TDR) to tackle these challenges.
arXiv Detail & Related papers (2021-08-10T03:12:01Z) - Foreground-guided Facial Inpainting with Fidelity Preservation [7.5089719291325325]
We propose a foreground-guided facial inpainting framework that can extract and generate facial features using convolutional neural network layers.
Specifically, we propose a new loss function with semantic reasoning over facial expressions and natural and unnatural features (e.g., make-up).
Our proposed method achieved comparable quantitative results when compared to the state of the art, but qualitatively it demonstrates high-fidelity preservation of facial components.
arXiv Detail & Related papers (2021-05-07T15:50:58Z) - Towards NIR-VIS Masked Face Recognition [47.00916333095693]
Near-infrared to visible (NIR-VIS) face recognition is the most common case in heterogeneous face recognition.
We propose a novel training method to maximize the mutual information shared by the face representation of two domains.
In addition, a 3D face reconstruction based approach is employed to synthesize masked faces from existing NIR images.
arXiv Detail & Related papers (2021-04-14T10:40:09Z) - Face Forgery Detection by 3D Decomposition [72.22610063489248]
We consider a face image as the product of the underlying 3D geometry and the lighting environment.
By disentangling the face image into 3D shape, common texture, identity texture, ambient light, and direct light, we find the devil lies in the direct light and the identity texture.
We propose to utilize facial detail, which is the combination of direct light and identity texture, as the clue to detect the subtle forgery patterns.
arXiv Detail & Related papers (2020-11-19T09:25:44Z) - The Elements of End-to-end Deep Face Recognition: A Survey of Recent
Advances [56.432660252331495]
Face recognition is one of the most popular and long-standing topics in computer vision.
Deep face recognition has made remarkable progress and been widely used in many real-world applications.
In this survey article, we present a comprehensive review of recent advances in each element.
arXiv Detail & Related papers (2020-09-28T13:02:17Z) - Face Super-Resolution Guided by 3D Facial Priors [92.23902886737832]
We propose a novel face super-resolution method that explicitly incorporates 3D facial priors which grasp the sharp facial structures.
Our work is the first to explore 3D morphable knowledge based on the fusion of parametric descriptions of face attributes.
The proposed 3D priors achieve superior face super-resolution results over state-of-the-art methods.
arXiv Detail & Related papers (2020-07-18T15:26:07Z) - Wish You Were Here: Context-Aware Human Generation [100.51309746913512]
We present a novel method for inserting objects, specifically humans, into existing images.
Our method involves three networks: the first generates the semantic map of the new person, given the poses of the other persons in the scene.
The second network renders the pixels of the novel person and its blending mask, based on specifications in the form of multiple appearance components.
A third network refines the generated face in order to match that of the target person.
arXiv Detail & Related papers (2020-05-21T14:09:14Z) - Exploiting Semantics for Face Image Deblurring [121.44928934662063]
We propose an effective and efficient face deblurring algorithm by exploiting semantic cues via deep convolutional neural networks.
We incorporate face semantic labels as input priors and propose an adaptive structural loss to regularize facial local structures.
The proposed method restores sharp images with more accurate facial features and details.
arXiv Detail & Related papers (2020-01-19T13:06:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.