Face Attributes as Cues for Deep Face Recognition Understanding
- URL: http://arxiv.org/abs/2105.07054v1
- Date: Fri, 14 May 2021 19:54:24 GMT
- Title: Face Attributes as Cues for Deep Face Recognition Understanding
- Authors: Matheus Alves Diniz and William Robson Schwartz
- Abstract summary: We use the outputs of hidden layers to predict face attributes and apply a variable selection technique to locate the neurons relevant to each attribute.
Gender, eyeglasses and hat usage can be predicted with over 96% accuracy even when only a single neural output is used to predict each attribute.
Our experiments show that, inside DCNNs optimized for face identification, there exist latent neurons encoding face attributes almost as accurately as DCNNs optimized for these attributes.
- Score: 4.132205118175555
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deeply learned representations are the state-of-the-art descriptors for face
recognition methods. These representations encode latent features that are
difficult to explain, compromising the confidence and interpretability of their
predictions. Most attempts to explain deep features are visualization
techniques that are often open to interpretation. Instead of relying only on
visualizations, we use the outputs of hidden layers to predict face attributes.
The obtained performance is an indicator of how well the attribute is
implicitly learned in that layer of the network. Using a variable selection
technique, we also analyze how these semantic concepts are distributed inside
each layer, establishing the precise location of relevant neurons for each
attribute. According to our experiments, gender, eyeglasses and hat usage can
be predicted with over 96% accuracy even when only a single neural output is
used to predict each attribute. These performances are less than 3 percentage
points lower than the ones achieved by deep supervised face attribute networks.
In summary, our experiments show that, inside DCNNs optimized for face
identification, there exist latent neurons encoding face attributes almost as
accurately as DCNNs optimized for these attributes.
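As a rough illustration of the probing procedure described in the abstract, the sketch below extracts the activations of one hidden layer, fits a linear probe for a binary attribute, and then repeats the probe using only the single highest-ranked neuron found by univariate variable selection. The stand-in backbone (a torchvision ResNet), the probed layer, the random placeholder data, and the scikit-learn tooling are assumptions made for illustration; they are not the authors' implementation.

```python
# Minimal probing sketch (assumptions: a torchvision ResNet stands in for the
# face-identification DCNN, random tensors stand in for aligned face crops, and
# scikit-learn provides the probe and the neuron selection).
import numpy as np
import torch
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from torchvision.models import resnet18


def layer_activations(model, layer, images, batch_size=64):
    """Collect flattened outputs of one hidden layer for a stack of images."""
    feats = []
    handle = layer.register_forward_hook(
        lambda module, inp, out: feats.append(out.flatten(1).cpu()))
    model.eval()
    with torch.no_grad():
        for i in range(0, len(images), batch_size):
            model(images[i:i + batch_size])        # hook stores the activations
    handle.remove()
    return torch.cat(feats).numpy()


# Stand-in inputs: replace with a pretrained face-ID network, real face crops,
# and a binary attribute label (e.g. eyeglasses) to reproduce the probing idea.
model = resnet18(weights=None)
images = torch.randn(256, 3, 112, 112)
attr = np.random.randint(0, 2, size=256)

acts = layer_activations(model, model.layer4, images)   # probe one hidden layer
X_tr, X_te, y_tr, y_te = train_test_split(acts, attr, test_size=0.3, random_state=0)

# Linear probe on the whole layer: its accuracy indicates how well the
# attribute is implicitly encoded at this depth.
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("full-layer probe accuracy:", probe.score(X_te, y_te))

# Variable selection: rank the neurons and repeat the probe with only the
# single best-scoring one, mirroring the single-output experiments above.
selector = SelectKBest(mutual_info_classif, k=1).fit(X_tr, y_tr)
best = selector.get_support(indices=True)
single = LogisticRegression().fit(X_tr[:, best], y_tr)
print("best neuron:", int(best[0]),
      "single-neuron accuracy:", single.score(X_te[:, best], y_te))
```

With real face data and a trained face-identification backbone, the full-layer probe accuracy per layer and the single-neuron accuracy per attribute are the quantities the abstract reports.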
Related papers
- CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial
Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptation (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO improves facial expression recognition performance across six datasets with very distinct affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z) - TransFA: Transformer-based Representation for Face Attribute Evaluation [87.09529826340304]
We propose TransFA, a novel transformer-based representation method for face attribute evaluation.
The proposed TransFA achieves superior performances compared with state-of-the-art methods.
arXiv Detail & Related papers (2022-07-12T10:58:06Z) - AGA-GAN: Attribute Guided Attention Generative Adversarial Network with
U-Net for Face Hallucination [15.010153819096056]
We propose an Attribute Guided Attention Generative Adversarial Network which employs attribute guided attention (AGA) modules to identify and focus the generation process on various facial features in the image.
The AGA-GAN and AGA-GAN+U-Net frameworks outperform several state-of-the-art face hallucination methods.
arXiv Detail & Related papers (2021-11-20T13:43:03Z) - End2End Occluded Face Recognition by Masking Corrupted Features [82.27588990277192]
State-of-the-art general face recognition models do not generalize well to occluded face images.
This paper presents a novel face recognition method that is robust to occlusions based on a single end-to-end deep neural network.
Our approach, named FROM (Face Recognition with Occlusion Masks), learns to discover corrupted features in the deep convolutional neural network and to clean them with dynamically learned masks.
arXiv Detail & Related papers (2021-08-21T09:08:41Z) - PASS: Protected Attribute Suppression System for Mitigating Bias in Face
Recognition [55.858374644761525]
Face recognition networks encode information about sensitive attributes while being trained for identity classification.
Existing bias mitigation approaches require end-to-end training and are unable to achieve high verification accuracy.
We present a descriptor-based adversarial de-biasing approach called the Protected Attribute Suppression System (PASS).
PASS can be trained on top of descriptors obtained from any previously trained high-performing network to classify identities while simultaneously reducing the encoding of sensitive attributes.
arXiv Detail & Related papers (2021-08-09T00:39:22Z) - Progressive Spatio-Temporal Bilinear Network with Monte Carlo Dropout
for Landmark-based Facial Expression Recognition with Uncertainty Estimation [93.73198973454944]
The performance of our method is evaluated on three widely used datasets; it is comparable to that of video-based state-of-the-art methods while having much lower complexity.
arXiv Detail & Related papers (2021-06-08T13:40:30Z) - Facial expression and attributes recognition based on multi-task
learning of lightweight neural networks [9.162936410696409]
We examine the multi-task training of lightweight convolutional neural networks for face identification and classification of facial attributes.
It is shown that it is still necessary to fine-tune these networks in order to predict facial expressions.
Several models are presented based on MobileNet, EfficientNet and RexNet architectures.
arXiv Detail & Related papers (2021-03-31T14:21:04Z) - Attribution Mask: Filtering Out Irrelevant Features By Recursively
Focusing Attention on Inputs of DNNs [13.960152426268769]
Attribution methods calculate attributions that visually explain the predictions of deep neural networks (DNNs) by highlighting important parts of the input features.
In this study, we use attributions to filter out irrelevant parts of the input features and verify the effectiveness of this approach by measuring the classification accuracy of a pre-trained DNN.
arXiv Detail & Related papers (2021-02-15T04:12:04Z) - Implicit Saliency in Deep Neural Networks [15.510581400494207]
In this paper, we show that existing recognition and localization deep architectures are capable of predicting human visual saliency.
We calculate this implicit saliency using the expectancy-mismatch hypothesis in an unsupervised fashion.
Our experiments show that extracting saliency in this fashion provides performance comparable to state-of-the-art supervised algorithms.
arXiv Detail & Related papers (2020-08-04T23:14:24Z) - CompGuessWhat?!: A Multi-task Evaluation Framework for Grounded Language
Learning [78.3857991931479]
We present GROLLA, an evaluation framework for Grounded Language Learning with Attributes.
We also propose a new dataset, CompGuessWhat?!, as an instance of this framework for evaluating the quality of learned neural representations.
arXiv Detail & Related papers (2020-06-03T11:21:42Z) - Verifying Deep Learning-based Decisions for Facial Expression
Recognition [0.8137198664755597]
We classify facial expressions with a neural network and create pixel-based explanations.
We quantify these visual explanations based on a bounding-box method with respect to facial regions.
Although our results show that the neural network achieves state-of-the-art results, the evaluation of the visual explanations reveals that relevant facial regions may not be considered.
arXiv Detail & Related papers (2020-02-14T15:59:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.